other,21-2-J05-1003,bq |
probabilities
</term>
that define an initial
<term>
|
ranking
|
</term>
of these
<term>
parses
</term>
. A second
|
#8684
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
other,16-5-J05-1003,bq |
the
<term>
boosting approach
</term>
to
<term>
|
ranking problems
|
</term>
described in
<term>
Freund et al. (
|
#8775
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,2-3-J05-1003,bq |
these
<term>
parses
</term>
. A second
<term>
|
model
|
</term>
then attempts to improve upon this
|
#8691
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
other,10-4-J05-1003,bq |
approach
</term>
is that it allows a
<term>
|
tree
|
</term>
to be represented as an arbitrary
|
#8720
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,26-4-J05-1003,bq |
, without concerns about how these
<term>
|
features
|
</term>
interact or overlap and without the
|
#8736
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,12-10-J05-1003,bq |
<term>
algorithm
</term>
over the obvious
<term>
|
implementation
|
</term>
of the
<term>
boosting approach
</term>
|
#8899
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,4-7-J05-1003,bq |
The
<term>
method
</term>
combined the
<term>
|
log-likelihood
|
</term>
under a
<term>
baseline model
</term>
|
#8803
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
other,24-2-J05-1003,bq |
initial
<term>
ranking
</term>
of these
<term>
|
parses
|
</term>
. A second
<term>
model
</term>
then
|
#8687
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,25-11-J05-1003,bq |
feature selection methods
</term>
within
<term>
|
log-linear ( maximum-entropy ) models
|
</term>
. Although the experiments in this
|
#8930
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
tech,21-11-J05-1003,bq |
simplicity and efficiency — to work on
<term>
|
feature selection methods
|
</term>
within
<term>
log-linear ( maximum-entropy
|
#8926
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
other,23-7-J05-1003,bq |
evidence from an additional 500,000
<term>
|
features
|
</term>
over
<term>
parse trees
</term>
that
|
#8822
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,9-9-J05-1003,bq |
a new
<term>
algorithm
</term>
for the
<term>
|
boosting approach
|
</term>
which takes advantage of the
<term>
|
#8870
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
other,19-4-J05-1003,bq |
represented as an arbitrary set of
<term>
|
features
|
</term>
, without concerns about how these
|
#8729
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
tech,6-9-J05-1003,bq |
The article also introduces a new
<term>
|
algorithm
|
</term>
for the
<term>
boosting approach
</term>
|
#8867
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
other,12-2-J05-1003,bq |
candidate parses
</term>
for each input
<term>
|
sentence
|
</term>
, with associated
<term>
probabilities
|
#8675
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,40-4-J05-1003,bq |
define a
<term>
derivation
</term>
or a
<term>
|
generative model
|
</term>
which takes these
<term>
features
</term>
|
#8750
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,12-7-J05-1003,bq |
<term>
baseline model
</term>
( that of
<term>
|
Collins [ 1999 ]
|
</term>
) with evidence from an additional
|
#8811
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
tech,15-10-J05-1003,bq |
obvious
<term>
implementation
</term>
of the
<term>
|
boosting approach
|
</term>
. We argue that the method is an
|
#8902
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,14-3-J05-1003,bq |
<term>
ranking
</term>
, using additional
<term>
|
features
|
</term>
of the
<term>
tree
</term>
as evidence
|
#8703
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
other,20-5-J05-1003,bq |
ranking problems
</term>
described in
<term>
|
Freund et al. ( 1998 )
|
</term>
. We apply the
<term>
boosting method
|
#8779
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |