Abstract of J05-1003 (Collins and Koo, 2005):

This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). We apply the boosting method to parsing the Wall Street Journal treebank. The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model's score of 88.2%.
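The abstract describes a fairly concrete scoring step: the base parser attaches a log-probability to each candidate parse, and the second model rescores candidates by adding in the weights of whichever sparse indicator features fire on the tree. The sketch below illustrates that step under the common assumption of a linear combination of the baseline log-likelihood and per-feature weights; the names (Candidate, rerank_score, best_parse) and the toy weights are invented for illustration and are not the article's code.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    """One candidate parse for a sentence, as produced by the base parser."""
    logprob: float                                            # log-likelihood under the baseline model
    features: frozenset = field(default_factory=frozenset)   # IDs of indicator features firing on this tree

def rerank_score(cand: Candidate, w0: float, weights: dict) -> float:
    """Linear reranking score (an assumed form): scaled baseline log-likelihood
    plus the weights of whichever sparse features fire on the candidate tree."""
    return w0 * cand.logprob + sum(weights.get(f, 0.0) for f in cand.features)

def best_parse(candidates: list, w0: float, weights: dict) -> Candidate:
    """Return the highest-scoring candidate parse for one sentence."""
    return max(candidates, key=lambda c: rerank_score(c, w0, weights))

# Example: the baseline prefers the first parse, but one feature weight flips the ranking.
candidates = [Candidate(logprob=-10.0, features=frozenset({3})),
              Candidate(logprob=-10.5, features=frozenset({3, 8}))]
weights = {8: 1.2}
print(best_parse(candidates, w0=1.0, weights=weights).logprob)  # -10.5
```

With several hundred thousand indicator features, the weight vector is best kept as a sparse mapping, since any single tree activates only a handful of features.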
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. We argue that the method is an appealing alternative, in terms of both simplicity and efficiency, to work on feature selection methods within log-linear (maximum-entropy) models. Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.
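The efficiency claim rests on sparsity: any one feature fires on only a small fraction of the candidate parses, so an update that adjusts one feature's weight at a time does not need to rescan the whole training set. One way to exploit this, sketched below as an assumption rather than as the article's actual algorithm, is an inverted index from feature IDs to the candidates on which they occur; a boosting-style update to a single feature then revisits only the positions listed under that feature.

```python
from collections import defaultdict

def build_feature_index(corpus: list) -> dict:
    """Inverted index from feature ID to the (sentence, candidate) positions where it fires.
    `corpus[i][j]` is the sparse feature set (e.g. a frozenset of ints) of candidate j
    for sentence i. With mostly-inactive features, updating one feature's weight only
    needs to revisit the positions stored under that feature."""
    index = defaultdict(list)
    for i, candidates in enumerate(corpus):
        for j, feats in enumerate(candidates):
            for f in feats:
                index[f].append((i, j))
    return dict(index)

# Example: two sentences, each with two candidate parses.
corpus = [
    [frozenset({1, 7}), frozenset({7})],
    [frozenset({2}), frozenset({1, 2})],
]
print(build_feature_index(corpus))
# e.g. {1: [(0, 0), (1, 1)], 7: [(0, 0), (0, 1)], 2: [(1, 0), (1, 1)]} (key order may vary)
```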