probabilities
</term>
that define an initial
<term>
ranking
</term>
of these
<term>
parses
</term>
. A second
<term>
model
</term>
then attempts to improve upon this initial
<term>
ranking
</term>
, using additional
<term>
features
</term>
of the
<term>
tree
</term>
as evidence . The strength of our approach is that it allows a
<term>
tree
</term>
to be represented as an arbitrary set of
<term>
features
</term>
, without concerns about how these
<term>
features
</term>
interact or overlap and without the need to define a
<term>
derivation
</term>
or a
<term>
generative model
</term>
which takes these
<term>
features
</term>
into account . The method combined the log-likelihood under a baseline
<term>
model
</term>
( that of Collins [ 1999 ] ) with evidence from an additional 500,000
<term>
features
</term>
over
<term>
parse trees
</term>
that were not included in the original
<term>
model
</term>
. The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure
</term>
error over the baseline
<term>
model
</term>
's score of 88.2 % . The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity
</term>
of the
<term>
feature space
</term>
in the parsing data . Experiments show significant efficiency gains for the new
<term>
algorithm
</term>
over the obvious implementation
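The two-stage pipeline described above — a base parser proposing probability-ranked candidate parses, and a second model rescoring them with additional tree features — can be sketched as a linear reranker. This is an illustrative sketch, not the article's implementation: the `rerank` function, the feature names, and the weights below are invented for the example; only the general scheme (base-model log-probability combined with weighted sparse features) comes from the text.

```python
import math

def rerank(candidates, feature_weights, base_weight=1.0):
    """Rerank candidate parses from a base parser.

    candidates: list of (log_prob, features) pairs, where log_prob is the
    base model's log-probability for the parse and features is the set of
    sparse binary feature names that fire on the parse tree.
    feature_weights: dict mapping feature name -> learned weight
    (hypothetical values here, not the article's).
    Returns the candidates sorted best-first by combined score.
    """
    def score(cand):
        log_prob, features = cand
        return base_weight * log_prob + sum(
            feature_weights.get(f, 0.0) for f in features)
    return sorted(candidates, key=score, reverse=True)

# Toy example: the base parser prefers parse A, but an extra tree
# feature on parse B flips the initial ranking.
candidates = [
    (math.log(0.6), {"rule:NP->DT_NN"}),                     # parse A
    (math.log(0.4), {"rule:NP->DT_NN", "bigram:saw_with"}),  # parse B
]
weights = {"bigram:saw_with": 1.0}
best = rerank(candidates, weights)[0]
```

Because the second stage only rescores a fixed candidate list, any feature of the whole tree can be used without defining a derivation or a generative model over it — which is exactly the flexibility the abstract emphasizes.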
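The reported figures can be checked directly: F-measure error is 100 minus F-measure, so improving from 88.2 % to 89.75 % shrinks the error from 11.8 to 10.25 points. A quick arithmetic check of the claimed 13 % relative decrease:

```python
# F-measure error = 100 - F-measure (both in percent).
baseline_error = 100 - 88.2    # 11.8 points
new_error = 100 - 89.75        # 10.25 points

# Relative decrease in F-measure error over the baseline model.
relative_decrease = (baseline_error - new_error) / baseline_error
print(f"{relative_decrease:.1%}")  # prints 13.1%, matching the reported 13%
```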
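The closing sentences mention a boosting algorithm that exploits the sparsity of the feature space. The article's actual algorithm is not reproduced here; the sketch below only illustrates the general trick such algorithms rely on: when each parse activates only a handful of the 500,000 features, per-round feature statistics can be gathered from an inverted index over the nonzero (feature, example) pairs, so a round costs time proportional to the total number of feature occurrences rather than n_examples × n_features.

```python
from collections import defaultdict

def build_inverted_index(examples):
    """Map each feature name to the indices of the examples where it fires.

    examples: list of sets of sparse binary feature names.
    """
    index = defaultdict(list)
    for i, feats in enumerate(examples):
        for f in feats:
            index[f].append(i)
    return index

def feature_scores(index, example_weights):
    """Weighted count of examples on which each feature fires.

    A stand-in for the per-feature statistic a boosting round needs;
    only features that actually occur are ever touched.
    """
    return {f: sum(example_weights[i] for i in idxs)
            for f, idxs in index.items()}

# Toy data: three examples, three features, mostly zeros.
examples = [{"a", "b"}, {"b"}, {"a", "c"}]
example_weights = [0.5, 0.25, 0.25]
scores = feature_scores(build_inverted_index(examples), example_weights)
```

With this layout, the "obvious implementation" — a dense loop over every feature for every example — is avoided, which is the kind of saving the reported efficiency gains point to.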