We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998).
We apply the boosting method to parsing the Wall Street Journal treebank.
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
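The combination described here can be sketched as a linear reranking score over a candidate parse: the baseline model's log-likelihood plus a weighted sum of sparse binary features. This is an illustrative sketch, not the article's actual feature set or learned weights; the function names and example values are hypothetical.

```python
def rerank_score(log_likelihood, features, weights, w0=1.0):
    """Score one candidate parse as a weighted combination of the
    baseline log-likelihood and the sparse features that fire on it.

    `features` is the set of feature ids firing on this parse;
    `weights` maps feature id -> weight (illustrative stand-ins).
    """
    return w0 * log_likelihood + sum(weights.get(f, 0.0) for f in features)

def rerank(candidates, weights):
    """Pick the highest-scoring candidate among
    (log_likelihood, feature_set) pairs."""
    return max(candidates, key=lambda c: rerank_score(c[0], c[1], weights))
```

For example, a parse with a lower baseline log-likelihood can still win the reranking if enough positively weighted features fire on it: with `weights = {1: 0.4, 2: 0.3}`, the candidate `(-10.0, {1, 2})` scores about -9.3 and outranks `(-9.5, set())`.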
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model's score of 88.2%.
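The 13% figure follows from treating F-measure error as 100 minus the F-measure, then comparing the two error rates:

```python
# F-measure error = 100 - F-measure, for the reported scores.
baseline_error = 100.0 - 88.2     # 11.8 points of error
new_error = 100.0 - 89.75         # 10.25 points of error

# Relative decrease in error from baseline to the new model.
relative_decrease = (baseline_error - new_error) / baseline_error
print(round(100 * relative_decrease, 1))  # prints 13.1, i.e. the ~13% reported
```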
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
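One way sparsity can be exploited in a boosting-style update, sketched minimally here: keep an inverted index from each feature to the few examples on which it fires, so updating one feature's weight only touches those examples rather than the whole dataset. The data structures and update rule below are illustrative assumptions, not the article's actual algorithm.

```python
from collections import defaultdict

def build_inverted_index(examples):
    """Map each feature id to the list of example indices where it fires.
    `examples` is a list of sets of feature ids (sparse representations)."""
    index = defaultdict(list)
    for i, feats in enumerate(examples):
        for f in feats:
            index[f].append(i)
    return index

def apply_update(feature, delta, scores, index, weights):
    """Boosting-style update to a single feature's weight: only the scores
    of examples where the feature fires are touched, which is the source
    of the speedup when each feature fires on few examples."""
    weights[feature] = weights.get(feature, 0.0) + delta
    for i in index[feature]:
        scores[i] += delta
```

With binary features that each fire on a small fraction of parses, the per-update cost is proportional to the number of firing examples, not the corpus size.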