tech,2-8-J05-1003,bq |
original
<term>
model
</term>
. The new
<term>
|
model
|
</term>
achieved 89.75 %
<term>
F-measure
</term>
|
#8837
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
tech,8-10-J05-1003,bq |
significant efficiency gains for the new
<term>
|
algorithm
|
</term>
over the obvious
<term>
implementation
|
#8895
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,12-10-J05-1003,bq |
<term>
algorithm
</term>
over the obvious
<term>
|
implementation
|
</term>
of the
<term>
boosting approach
</term>
|
#8899
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,7-2-J05-1003,bq |
<term>
parser
</term>
produces a set of
<term>
|
candidate parses
|
</term>
for each input
<term>
sentence
</term>
|
#8670
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
other,19-4-J05-1003,bq |
represented as an arbitrary set of
<term>
|
features
|
</term>
, without concerns about how these
|
#8729
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,12-7-J05-1003,bq |
<term>
baseline model
</term>
( that of
<term>
|
Collins [ 1999 ]
|
</term>
) with evidence from an additional
|
#8811
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,8-12-J05-1003,bq |
experiments in this article are on
<term>
|
natural language parsing ( NLP )
|
</term>
, the
<term>
approach
</term>
should
|
#8944
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,21-11-J05-1003,bq |
simplicity and efficiency — to work on
<term>
|
feature selection methods
|
</term>
within
<term>
log-linear ( maximum-entropy
|
#8926
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
tech,43-12-J05-1003,bq |
<term>
machine translation
</term>
, or
<term>
|
natural language generation
|
</term>
. We present a novel
<term>
method
</term>
|
#8979
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
model,34-7-J05-1003,bq |
were not included in the original
<term>
|
model
|
</term>
. The new
<term>
model
</term>
achieved
|
#8833
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
other,23-12-J05-1003,bq |
should be applicable to many other
<term>
|
NLP problems
|
</term>
which are naturally framed as
<term>
|
#8959
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,4-4-J05-1003,bq |
</term>
as evidence . The strength of our
<term>
|
approach
|
</term>
is that it allows a
<term>
tree
</term>
|
#8714
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,25-7-J05-1003,bq |
additional 500,000
<term>
features
</term>
over
<term>
|
parse trees
|
</term>
that were not included in the original
|
#8824
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,2-3-J05-1003,bq |
these
<term>
parses
</term>
. A second
<term>
|
model
|
</term>
then attempts to improve upon this
|
#8691
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
tech,16-12-J05-1003,bq |
language parsing ( NLP )
</term>
, the
<term>
|
approach
|
</term>
should be applicable to many other
|
#8952
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,1-7-J05-1003,bq |
Street Journal treebank
</term>
. The
<term>
|
method
|
</term>
combined the
<term>
log-likelihood
</term>
|
#8800
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,3-6-J05-1003,bq |
al. ( 1998 )
</term>
. We apply the
<term>
|
boosting method
|
</term>
to
<term>
parsing
</term>
the
<term>
Wall
|
#8789
We apply the boosting method to parsing the Wall Street Journal treebank. |
other,4-7-J05-1003,bq |
The
<term>
method
</term>
combined the
<term>
|
log-likelihood
|
</term>
under a
<term>
baseline model
</term>
|
#8803
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,9-9-J05-1003,bq |
a new
<term>
algorithm
</term>
for the
<term>
|
boosting approach
|
</term>
which takes advantage of the
<term>
|
#8870
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
tech,7-5-J05-1003,bq |
introduce a new
<term>
method
</term>
for the
<term>
|
reranking task
|
</term>
, based on the
<term>
boosting approach
|
#8766
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |