measure(ment),14-8-J05-1003,bq |
</term>
, a 13 % relative decrease in
<term>
|
F-measure
|
</term>
error over the
<term>
baseline model
|
#8849
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
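As a quick arithmetic check of the figures in this record: F-measure error is 100 minus the F-measure, so the reported relative reduction works out as

\[
\frac{(100 - 88.2) - (100 - 89.75)}{100 - 88.2} = \frac{11.8 - 10.25}{11.8} \approx 0.131 \approx 13\%.
\]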
other,20-5-J05-1003,bq |
ranking problems
</term>
described in
<term>
|
Freund et al. ( 1998 )
|
</term>
. We apply the
<term>
boosting method
|
#8779
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,40-4-J05-1003,bq |
define a
<term>
derivation
</term>
or a
<term>
|
generative model
|
</term>
which takes these
<term>
features
</term>
|
#8750
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,12-10-J05-1003,bq |
<term>
algorithm
</term>
over the obvious
<term>
|
implementation
|
</term>
of the
<term>
boosting approach
</term>
|
#8899
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,4-7-J05-1003,bq |
The
<term>
method
</term>
combined the
<term>
|
log-likelihood
|
</term>
under a
<term>
baseline model
</term>
|
#8803
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
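One way to picture this combination, sketched under the assumption of a standard linear reranking form (the symbols below are illustrative, not the paper's exact notation), is a single score per candidate parse:

\[
F(x) = \lambda_0 \, L(x) + \sum_{s=1}^{m} \lambda_s \, h_s(x),
\]

where L(x) is the log-likelihood of parse x under the baseline model, each h_s is one of the roughly 500,000 additional features, and the weights \lambda_s are set by the boosting procedure.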
tech,25-11-J05-1003,bq |
feature selection methods
</term>
within
<term>
|
log-linear ( maximum-entropy ) models
|
</term>
. Although the experiments in this
|
#8930
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
tech,39-12-J05-1003,bq |
,
<term>
speech recognition
</term>
,
<term>
|
machine translation
|
</term>
, or
<term>
natural language generation
|
#8975
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,4-5-J05-1003,bq |
</term>
into account . We introduce a new
<term>
|
method
|
</term>
for the
<term>
reranking task
</term>
|
#8763
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,1-7-J05-1003,bq |
Street Journal treebank
</term>
. The
<term>
|
method
|
</term>
combined the
<term>
log-likelihood
</term>
|
#8800
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,2-3-J05-1003,bq |
these
<term>
parses
</term>
. A second
<term>
|
model
|
</term>
then attempts to improve upon this
|
#8691
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
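The two-stage pipeline this record describes (a base parser proposes ranked candidates, a second model rescores them) can be sketched as follows; this is a minimal Python illustration, and the names rerank, base_log_prob, and weights are hypothetical, not from the paper.

# Minimal sketch of second-stage reranking: each candidate parse carries
# the base parser's log-probability plus a sparse set of extra tree features.
def rerank(candidates, weights, w0=1.0):
    """Sort candidates by a combined score: baseline evidence plus a
    weighted sum of the additional features present in each parse."""
    def score(cand):
        base_log_prob, feature_set = cand
        return w0 * base_log_prob + sum(weights.get(f, 0.0) for f in feature_set)
    return sorted(candidates, key=score, reverse=True)

# Usage: the first element after reranking is the second model's preferred parse.
best = rerank([(-42.3, {"rule:NP->DT NN"}), (-41.9, {"rule:NP->NN"})],
              weights={"rule:NP->DT NN": 0.8})[0]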
model,34-7-J05-1003,bq |
were not included in the original
<term>
|
model
|
</term>
. The new
<term>
model
</term>
achieved
|
#8833
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,2-8-J05-1003,bq |
original
<term>
model
</term>
. The new
<term>
|
model
|
</term>
achieved 89.75 %
<term>
F-measure
</term>
|
#8837
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
tech,43-12-J05-1003,bq |
<term>
machine translation
</term>
, or
<term>
|
natural language generation
|
</term>
. We present a novel
<term>
method
</term>
|
#8979
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,8-12-J05-1003,bq |
experiments in this article are on
<term>
|
natural language parsing ( NLP )
|
</term>
, the
<term>
approach
</term>
should
|
#8944
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
other,23-12-J05-1003,bq |
should be applicable to many other
<term>
|
NLP problems
|
</term>
which are naturally framed as
<term>
|
#8959
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
other,25-7-J05-1003,bq |
additional 500,000
<term>
features
</term>
over
<term>
|
parse trees
|
</term>
that were not included in the original
|
#8824
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,2-2-J05-1003,bq |
probabilistic parser
</term>
. The base
<term>
|
parser
|
</term>
produces a set of
<term>
candidate
|
#8665
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
other,24-2-J05-1003,bq |
initial
<term>
ranking
</term>
of these
<term>
|
parses
|
</term>
. A second
<term>
model
</term>
then
|
#8687
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,6-6-J05-1003,bq |
the
<term>
boosting method
</term>
to
<term>
|
parsing
|
</term>
the
<term>
Wall Street Journal treebank
|
#8792
We apply the boosting method to parsing the Wall Street Journal treebank. |
other,23-9-J05-1003,bq |
of the feature space
</term>
in the
<term>
|
parsing data
|
</term>
. Experiments show significant efficiency
|
#8884
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
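A hedged sketch of the kind of sparsity trick this record alludes to (the bookkeeping below is illustrative, not the paper's exact algorithm): instead of scanning every feature on every boosting round, build an inverted index from features to the examples containing them, so updating one feature's weight touches only the examples where that feature actually fires.

from collections import defaultdict

# Hypothetical helper: map each feature to the examples that contain it,
# so a per-feature update costs O(occurrences) rather than O(dataset size).
def build_index(examples):
    index = defaultdict(list)
    for i, feature_set in enumerate(examples):
        for f in feature_set:
            index[f].append(i)
    return index

def update_feature(f, delta, margins, index):
    # Only the examples where feature f occurs have their margins shifted.
    for i in index[f]:
        margins[i] += delta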