other,23-9-J05-1003,bq | of the feature space </term> in the <term> | parsing data | </term> . Experiments show significant efficiency | #8884 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
other,16-9-J05-1003,bq | </term> which takes advantage of the <term> | sparsity of the feature space | </term> in the <term> parsing data </term> . | #8877 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
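Records #8884 and #8877 quote the abstract's claim that the new boosting algorithm exploits the sparsity of the feature space in the parsing data. As a rough illustration of the idea (a minimal sketch, not the article's actual algorithm; the data and names below are hypothetical), an inverted index from each feature to the candidate parses containing it lets a weight update touch only those candidates instead of rescoring every parse:

```python
# Minimal sketch (illustrative only, not the article's actual algorithm):
# one way a boosting-style reranker can exploit feature sparsity. An inverted
# index maps each feature to the candidates that contain it, so updating one
# feature's weight rescores only those candidates rather than every parse.
from collections import defaultdict

# Hypothetical data: each candidate parse as a sparse set of feature ids.
candidates = {
    "parse_0": {3, 17, 42},
    "parse_1": {3, 99},
    "parse_2": {42},
}

# Build the inverted index once: feature id -> candidates containing it.
index = defaultdict(set)
for cand, feats in candidates.items():
    for f in feats:
        index[f].add(cand)

scores = {cand: 0.0 for cand in candidates}

def update_feature(f, delta):
    """Add delta to feature f's weight, rescoring only the affected parses."""
    for cand in index[f]:
        scores[cand] += delta

update_feature(42, 0.5)  # only parse_0 and parse_2 are touched
print(scores)            # {'parse_0': 0.5, 'parse_1': 0.0, 'parse_2': 0.5}
```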
other,17-3-J05-1003,bq | additional <term> features </term> of the <term> | tree | </term> as evidence . The strength of our | #8706 A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
tech,15-10-J05-1003,bq | obvious <term> implementation </term> of the <term> | boosting approach | </term> . We argue that the method is an | #8902 Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
tech,13-5-J05-1003,bq | reranking task </term> , based on the <term> | boosting approach | </term> to <term> ranking problems </term> described | #8772 We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
measure(ment),18-8-J05-1003,bq | <term> F-measure </term> error over the <term> | baseline model ’s score | </term> of 88.2 % . The article also introduces | #8853 The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
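The 13% figure in record #8853 follows from the two scores it quotes; a quick check of the arithmetic:

```latex
\[
\text{err}_{\text{base}} = 100 - 88.2 = 11.8, \qquad
\text{err}_{\text{new}} = 100 - 89.75 = 10.25,
\]
\[
\frac{11.8 - 10.25}{11.8} \approx 0.131 \approx 13\% \text{ relative decrease in F-measure error.}
\]
```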
lr-prod,8-6-J05-1003,bq | method </term> to <term> parsing </term> the <term> | Wall Street Journal treebank | </term> . The <term> method </term> combined | #8794 We apply the boosting method to parsing the Wall Street Journal treebank. |
other,26-4-J05-1003,bq | , without concerns about how these <term> | features | </term> interact or overlap and without the | #8736 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,24-2-J05-1003,bq | initial <term> ranking </term> of these <term> | parses | </term> . A second <term> model </term> then | #8687 The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
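Records #8687 and #8706 together describe the two-stage architecture: a base parser proposes candidate parses with probabilities, and a second model reranks them using additional tree features. A minimal sketch of that scheme (the additive scoring form and all names below are assumptions, not the article's exact model):

```python
# Minimal sketch (assumed form, not the article's exact model) of two-stage
# reranking: the base parser's log-probability fixes the initial ranking, and
# a second model adds a learned score over additional tree features.
def rerank(candidates, weights):
    """candidates: list of (base_log_prob, sparse_feature_dict) for one sentence."""
    def score(cand):
        log_prob, feats = cand
        return log_prob + sum(weights.get(f, 0.0) * v for f, v in feats.items())
    return max(candidates, key=score)

# Hypothetical candidates for one sentence: (base log-prob, tree features).
cands = [(-10.2, {"rule:NP->DT_NN": 1.0}),
         (-10.5, {"rule:NP->NN": 1.0})]
# With a positive weight on one feature, the second model can overturn the
# base parser's initial ranking: -10.5 + 0.7 = -9.8 beats -10.2.
print(rerank(cands, {"rule:NP->NN": 0.7}))
```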
other,45-4-J05-1003,bq | generative model </term> which takes these <term> | features | </term> into account . We introduce a new | #8755 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,16-5-J05-1003,bq | the <term> boosting approach </term> to <term> | ranking problems | </term> described in <term> Freund et al. ( | #8775 We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,6-6-J05-1003,bq | the <term> boosting method </term> to <term> | parsing | </term> the <term> Wall Street Journal treebank | #8792 We apply the boosting method to parsing the Wall Street Journal treebank. |
tech,25-11-J05-1003,bq | feature selection methods </term> within <term> | log-linear ( maximum-entropy ) models | </term> . Although the experiments in this | #8930 We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |