other,16-2-J05-1003,bq | <term> sentence </term> , with associated <term> | probabilities | </term> that define an initial <term> ranking | #8679
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
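This sentence describes the standard n-best reranking setup: the base parser emits candidate parses with probabilities, and sorting by probability gives the initial ranking that the reranker starts from. A minimal Python sketch (the Candidate type and field names are illustrative, not from the paper):

from dataclasses import dataclass

@dataclass
class Candidate:
    tree: str        # bracketed parse produced by the base parser
    log_prob: float  # base-parser log-probability of this parse

def initial_ranking(candidates):
    # The initial ranking is simply the candidates ordered by
    # base-model probability, best first; the reranker then rescores it.
    return sorted(candidates, key=lambda c: c.log_prob, reverse=True)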
other,23-7-J05-1003,bq | evidence from an additional 500,000 <term> | features | </term> over <term> parse trees </term> that | #8822
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
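Combining the baseline log-likelihood with feature evidence amounts to a linear ranking score of the following general form (symbol names here are illustrative), where the f_k are the roughly 500,000 additional indicator features over parse trees and the weights are learned by the reranker:

F(t) = \lambda_0 \log P_{\text{base}}(t) + \sum_{k=1}^{K} \lambda_k f_k(t), \qquad K \approx 500{,}000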
tech,4-4-J05-1003,bq | </term> as evidence . The strength of our <term> | approach | </term> is that it allows a <term> tree </term> | #8714
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
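Because the approach treats a tree as an arbitrary, possibly overlapping set of features, extraction can be as simple as collecting indicator strings from tree fragments. A hypothetical sketch (the Node type and the single rule template are assumptions for illustration):

from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"]

def tree_features(node, feats=None):
    # Collect one indicator per local rule; templates may overlap freely,
    # since no derivation or generative model has to account for them.
    if feats is None:
        feats = set()
    if node.children:
        feats.add(node.label + " -> " + " ".join(c.label for c in node.children))
        for child in node.children:
            tree_features(child, feats)
    return feats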
other,37-4-J05-1003,bq | overlap and without the need to define a <term> | derivation | </term> or a <term> generative model </term> | #8747
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
lr-prod,8-6-J05-1003,bq | method </term> to <term> parsing </term> the <term> | Wall Street Journal treebank | </term> . The <term> method </term> combined | #8794
We apply the boosting method to parsing the Wall Street Journal treebank. |
tech,6-9-J05-1003,bq | The article also introduces a new <term> | algorithm | </term> for the <term> boosting approach </term> | #8867
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
other,45-4-J05-1003,bq | generative model </term> which takes these <term> | features | </term> into account . We introduce a new | #8755
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
tech,9-9-J05-1003,bq | a new <term> algorithm </term> for the <term> | boosting approach | </term> which takes advantage of the <term> | #8870
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
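The efficiency idea behind exploiting sparsity: each candidate parse contains only a few of the ~500,000 features, so an inverted index from features to candidates lets a weight update touch only the scores that actually change. A hedged sketch of that idea (data layout and function names are assumptions, not the article's algorithm verbatim):

from collections import defaultdict

def build_index(candidate_features):
    # Map each feature to the ids of the candidates containing it.
    index = defaultdict(list)
    for cid, feats in enumerate(candidate_features):
        for f in feats:
            index[f].append(cid)
    return index

def apply_update(scores, index, feature, delta):
    # Updating one feature's weight by delta changes only the scores of
    # candidates containing that feature, not the whole candidate set.
    for cid in index[feature]:
        scores[cid] += delta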
other,4-7-J05-1003,bq | The <term> method </term> combined the <term> | log-likelihood | </term> under a <term> baseline model </term> | #8803
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,43-12-J05-1003,bq | <term> machine translation </term> , or <term> | natural language generation | </term> . We present a novel <term> method </term> | #8979
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
measure(ment),14-8-J05-1003,bq | </term> , a 13 % relative decrease in <term> | F-measure | </term> error over the <term> baseline model | #8849
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
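The 13% figure is the relative reduction in F-measure error, which checks out from the two scores reported in the sentence:

\frac{(100 - 88.2) - (100 - 89.75)}{100 - 88.2} = \frac{11.8 - 10.25}{11.8} \approx 0.131 \approx 13\%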
tech,8-10-J05-1003,bq | significant efficiency gains for the new <term> | algorithm | </term> over the obvious <term> implementation | #8895
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
tech,13-5-J05-1003,bq | reranking task </term> , based on the <term> | boosting approach | </term> to <term> ranking problems </term> described | #8772
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
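Boosting approaches to ranking of the kind described in Freund et al. (1998) minimize a margin-based loss over pairs of candidates; a common exponential form of that family (a sketch, not necessarily the article's exact objective) is

\mathrm{ExpLoss}(F) = \sum_i \sum_{j \geq 2} \exp\!\big( -( F(x_{i,1}) - F(x_{i,j}) ) \big)

where x_{i,1} is the best candidate parse for sentence i and x_{i,j} ranges over the lower-quality alternatives.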
other,7-2-J05-1003,bq | <term> parser </term> produces a set of <term> | candidate parses | </term> for each input <term> sentence </term> | #8670
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,30-12-J05-1003,bq | </term> which are naturally framed as <term> | ranking tasks | </term> , for example , <term> speech recognition | #8966
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,15-10-J05-1003,bq | obvious <term> implementation </term> of the <term> | boosting approach | </term> . We argue that the method is an | #8902
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
tech,1-7-J05-1003,bq | Street Journal treebank </term> . The <term> | method | </term> combined the <term> log-likelihood </term> | #8800
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,4-5-J05-1003,bq | </term> into account . We introduce a new <term> | method | </term> for the <term> reranking task </term> | #8763
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
measure(ment),6-8-J05-1003,bq | <term> model </term> achieved 89.75 % <term> | F-measure | </term> , a 13 % relative decrease in <term> | #8841
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,12-10-J05-1003,bq | <term> algorithm </term> over the obvious <term> | implementation | </term> of the <term> boosting approach </term> | #8899
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |