other,12-7-J05-1003,bq |
<term>
baseline model
</term>
( that of
<term>
|
Collins [ 1999 ]
|
</term>
) with evidence from an additional
|
#8811
The method combined the log-likelihood under a baseline model (that of Collins [ 1999 ]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,39-12-J05-1003,bq |
,
<term>
speech recognition
</term>
,
<term>
|
machine translation
|
</term>
, or
<term>
natural language generation
|
#8975
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
measure(ment),14-8-J05-1003,bq |
</term>
, a 13 % relative decrease in
<term>
|
F-measure
|
</term>
error over the
<term>
baseline model
|
#8849
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,21-2-J05-1003,bq |
probabilities
</term>
that define an initial
<term>
|
ranking
|
</term>
of these
<term>
parses
</term>
. A second
|
#8684
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
other,12-2-J05-1003,bq |
candidate parses
</term>
for each input
<term>
|
sentence
|
</term>
, with associated
<term>
probabilities
|
#8675
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
lr-prod,8-6-J05-1003,bq |
method
</term>
to
<term>
parsing
</term>
the
<term>
|
Wall Street Journal treebank
|
</term>
. The
<term>
method
</term>
combined
|
#8794
We apply the boosting method to parsing the Wall Street Journal treebank. |
other,16-2-J05-1003,bq |
<term>
sentence
</term>
, with associated
<term>
|
probabilities
|
</term>
that define an initial
<term>
ranking
|
#8679
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,21-11-J05-1003,bq |
simplicity and efficiency — to work on
<term>
|
feature selection methods
|
</term>
within
<term>
log-linear ( maximum-entropy
|
#8926
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
other,16-9-J05-1003,bq |
</term>
which takes advantage of the
<term>
|
sparsity of the feature space
|
</term>
in the
<term>
parsing data
</term>
.
|
#8877
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
tech,4-4-J05-1003,bq |
</term>
as evidence . The strength of our
<term>
|
approach
|
</term>
is that it allows a
<term>
tree
</term>
|
#8714
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,7-2-J05-1003,bq |
<term>
parser
</term>
produces a set of
<term>
|
candidate parses
|
</term>
for each input
<term>
sentence
</term>
|
#8670
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,7-5-J05-1003,bq |
introduce a new
<term>
method
</term>
for the
<term>
|
reranking task
|
</term>
, based on the
<term>
boosting approach
|
#8766
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
other,20-5-J05-1003,bq |
ranking problems
</term>
described in
<term>
|
Freund et al. ( 1998 )
|
</term>
. We apply the
<term>
boosting method
|
#8779
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. ( 1998 ). |
other,14-3-J05-1003,bq |
<term>
ranking
</term>
, using additional
<term>
|
features
|
</term>
of the
<term>
tree
</term>
as evidence
|
#8703
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
other,10-4-J05-1003,bq |
approach
</term>
is that it allows a
<term>
|
tree
|
</term>
to be represented as an arbitrary
|
#8720
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
tech,43-12-J05-1003,bq |
<term>
machine translation
</term>
, or
<term>
|
natural language generation
|
</term>
. We present a novel
<term>
method
</term>
|
#8979
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
other,23-12-J05-1003,bq |
should be applicable to many other
<term>
|
NLP problems
|
</term>
which are naturally framed as
<term>
|
#8959
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
other,26-4-J05-1003,bq |
, without concerns about how these
<term>
|
features
|
</term>
interact or overlap and without the
|
#8736
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
tech,6-9-J05-1003,bq |
The article also introduces a new
<term>
|
algorithm
|
</term>
for the
<term>
boosting approach
</term>
|
#8867
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
tech,2-8-J05-1003,bq |
original
<term>
model
</term>
. The new
<term>
|
model
|
</term>
achieved 89.75 %
<term>
F-measure
</term>
|
#8837
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |