This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence.
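As a minimal sketch of this two-stage setup (the names and interface here are illustrative assumptions, not the article's implementation), the second model simply rescores the base parser's candidate list:

    # Minimal sketch of the two-stage reranking setup described above.
    # All names (rerank, feature_weights, extract_features) are
    # illustrative assumptions, not the article's implementation.

    def rerank(candidates, feature_weights, extract_features):
        """Rescore (tree, base_log_prob) candidates from the base parser:
        base log-probability plus a weighted sum of additional tree features."""
        def score(candidate):
            tree, base_log_prob = candidate
            feature_score = sum(feature_weights.get(f, 0.0)
                                for f in extract_features(tree))
            return base_log_prob + feature_score
        # Sort so the highest-scoring candidate parse comes first.
        return sorted(candidates, key=score, reverse=True)

Note that the base parser's ranking is recovered when all feature weights are zero; the second model can only reorder the candidate list it is given.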
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. We introduce a new method for the reranking task, based on the boosting approach to ranking problems.
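To make the "arbitrary set of features" point concrete, here is one possible feature extractor (the templates are assumptions for the sketch, not the article's feature set); the templates may overlap freely, and nothing depends on a derivation or a generative model:

    # Illustrative only: represent a parse tree as a set of string-valued
    # indicator features. A tree is (label, children), where children are
    # subtrees at internal nodes or [word] at a leaf.

    def extract_features(tree):
        label, children = tree
        features = set()
        if isinstance(children[0], tuple):           # internal node
            rhs = " ".join(child[0] for child in children)
            features.add(f"rule:{label}->{rhs}")     # context-free rule feature
            for child in children:
                features |= extract_features(child)
        else:                                        # leaf node
            features.add(f"tag-word:{label}/{children[0]}")
        return features

For example, extract_features(("NP", [("DT", ["the"]), ("NN", ["dog"])])) yields {"rule:NP->DT NN", "tag-word:DT/the", "tag-word:NN/dog"}.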
We apply the boosting method to parsing the Wall Street Journal treebank. The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model's score of 88.2%.
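Reading "F-measure error" as 100 minus F-measure, which is consistent with the figures reported here, the 13% follows directly:

    \frac{(100 - 88.2) - (100 - 89.75)}{100 - 88.2} = \frac{11.8 - 10.25}{11.8} \approx 0.13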
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach.
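One plausible reading of the efficiency argument (a sketch under assumed data structures, not the article's algorithm) is that each parse tree activates only a handful of the 500,000 features, so an inverted index from features to the examples containing them lets each update touch only the affected examples:

    from collections import defaultdict

    # Sketch of one way to exploit feature sparsity in boosting-style
    # updates; the names and data layout are assumptions. `examples`
    # is a list of feature sets, one per candidate parse.

    def build_inverted_index(examples):
        """Map each feature to the indices of the examples containing it."""
        index = defaultdict(list)
        for i, features in enumerate(examples):
            for f in features:
                index[f].append(i)
        return index

    def apply_update(scores, index, feature, delta):
        """When `feature`'s weight changes by `delta`, rescore only the
        examples that contain the feature, not the whole training set."""
        for i in index[feature]:
            scores[i] += delta

Because each update then costs time proportional to the number of examples containing the updated feature rather than to the full training set, sparsity of the feature space can translate into exactly this kind of efficiency gain.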
We argue that the method is an appealing alternative, in terms of both simplicity and efficiency, to work on feature selection methods within log-linear (maximum-entropy) models. Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.