This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence.
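As a minimal sketch of this two-stage setup (all names here are illustrative, not the article's, and the combination weight `alpha` is an assumption):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    tree: object      # the candidate parse itself (placeholder)
    log_prob: float   # log-probability assigned by the base parser
    features: dict    # sparse feature counts for this parse

def rerank(candidates, feature_weights, alpha=1.0):
    """Combine the base parser's log-probability with a weighted
    feature score, then sort candidates from best to worst."""
    def score(c):
        feat_score = sum(feature_weights.get(f, 0.0) * v
                         for f, v in c.features.items())
        return alpha * c.log_prob + feat_score
    return sorted(candidates, key=score, reverse=True)
```

Setting all feature weights to zero recovers the base parser's ranking; the second model improves on it only through the learned weights.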
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. We introduce a new method for the reranking task, based on the boosting approach to ranking problems.
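To make "arbitrary set of features" concrete, here is one hypothetical way to extract overlapping indicator features from a parse tree; the feature templates are invented for illustration and are not the article's:

```python
def tree_features(tree):
    """Collect sparse features from a parse tree given as nested
    tuples, e.g. ("S", ("NP", "she"), ("VP", "runs"))."""
    feats = {}

    def visit(node, parent_label=None):
        if isinstance(node, str):   # leaf: a word
            return
        label, *children = node
        # One feature per local rewrite rule at this node.
        rule = (label, tuple(c[0] if isinstance(c, tuple) else c
                             for c in children))
        feats[("rule", rule)] = feats.get(("rule", rule), 0) + 1
        # A deliberately overlapping feature: parent-child label pairs.
        if parent_label is not None:
            key = ("parent-child", parent_label, label)
            feats[key] = feats.get(key, 0) + 1
        for child in children:
            visit(child, label)

    visit(tree)
    return feats
```

Because the second model only ever sees the resulting sparse dictionary, features may overlap freely; nothing requires them to factor into a derivation.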
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model's score of 88.2%.
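Taking F-measure error to be 100 minus F-measure, the 13% figure follows directly from the two reported scores:

```latex
\frac{(100 - 88.2) - (100 - 89.75)}{100 - 88.2}
  = \frac{11.80 - 10.25}{11.80}
  \approx 0.131 \approx 13\%
```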
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. Experiments show significant efficiency gains for the new algorithm.
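The algorithm itself is not spelled out in this excerpt, so the following is only a sketch of why sparsity helps in such updates, under the assumption that each example activates few features: an inverted index from features to the examples in which they fire lets a per-feature weight update touch only those examples.

```python
from collections import defaultdict

def build_inverted_index(examples):
    """examples: list of sparse feature dicts {feature: value}.
    Returns feature -> list of (example index, value) pairs."""
    index = defaultdict(list)
    for i, feats in enumerate(examples):
        for f, v in feats.items():
            index[f].append((i, v))
    return index

def apply_weight_update(scores, index, feature, delta):
    """Add delta * value to every example's score where `feature`
    fires: O(number of occurrences), not O(number of examples)."""
    for i, v in index[feature]:
        scores[i] += delta * v
```

Each boosting-style round that adjusts a single feature's weight then costs time proportional to how often that feature occurs, which is small when the feature space is sparse.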
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.