tech,15-10-J05-1003,bq |
obvious
<term>
implementation
</term>
of the
<term>
|
boosting approach
|
</term>
. We argue that the method is an
|
#8902
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,12-10-J05-1003,bq |
<term>
algorithm
</term>
over the obvious
<term>
|
implementation
|
</term>
of the
<term>
boosting approach
</term>
|
#8899
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
measure(ment),14-8-J05-1003,bq |
</term>
, a 13 % relative decrease in
<term>
|
F-measure
|
</term>
error over the
<term>
baseline model
|
#8849
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,23-12-J05-1003,bq |
should be applicable to many other
<term>
|
NLP problems
|
</term>
which are naturally framed as
<term>
|
#8959
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,8-10-J05-1003,bq |
significant efficiency gains for the new
<term>
|
algorithm
|
</term>
over the obvious
<term>
implementation
|
#8895
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,7-2-J05-1003,bq |
<term>
parser
</term>
produces a set of
<term>
|
candidate parses
|
</term>
for each input
<term>
sentence
</term>
|
#8670
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,2-8-J05-1003,bq |
original
<term>
model
</term>
. The new
<term>
|
model
|
</term>
achieved 89.75 %
<term>
F-measure
</term>
|
#8837
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
tech,6-9-J05-1003,bq |
The article also introduces a new
<term>
|
algorithm
|
</term>
for the
<term>
boosting approach
</term>
|
#8867
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
tech,7-5-J05-1003,bq |
introduce a new
<term>
method
</term>
for the
<term>
|
reranking task
|
</term>
, based on the
<term>
boosting approach
|
#8766
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
other,16-2-J05-1003,bq |
<term>
sentence
</term>
, with associated
<term>
|
probabilities
|
</term>
that define an initial
<term>
ranking
|
#8679
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
other,16-9-J05-1003,bq |
</term>
which takes advantage of the
<term>
|
sparsity of the feature space
|
</term>
in the
<term>
parsing data
</term>
.
|
#8877
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
tech,1-7-J05-1003,bq |
Street Journal treebank
</term>
. The
<term>
|
method
|
</term>
combined the
<term>
log-likelihood
</term>
|
#8800
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
lr-prod,8-6-J05-1003,bq |
method
</term>
to
<term>
parsing
</term>
the
<term>
|
Wall Street Journal treebank
|
</term>
. The
<term>
method
</term>
combined
|
#8794
We apply the boosting method to parsing the Wall Street Journal treebank. |