other,37-4-J05-1003,bq |
overlap and without the need to define a
<term>
|
derivation
|
</term>
or a
<term>
generative model
</term>
|
#8747
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
tech,7-5-J05-1003,bq |
introduce a new
<term>
method
</term>
for the
<term>
|
reranking task
|
</term>
, based on the
<term>
boosting approach
|
#8766
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,8-12-J05-1003,bq |
experiments in this article are on
<term>
|
natural language parsing ( NLP )
|
</term>
, the
<term>
approach
</term>
should
|
#8944
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,2-8-J05-1003,bq |
original
<term>
model
</term>
. The new
<term>
|
model
|
</term>
achieved 89.75 %
<term>
F-measure
</term>
|
#8837
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,14-3-J05-1003,bq |
<term>
ranking
</term>
, using additional
<term>
|
features
|
</term>
of the
<term>
tree
</term>
as evidence
|
#8703
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
tech,15-10-J05-1003,bq |
obvious
<term>
implementation
</term>
of the
<term>
|
boosting approach
|
</term>
. We argue that the method is an
|
#8902
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
measure(ment),6-8-J05-1003,bq |
<term>
model
</term>
achieved 89.75 %
<term>
|
F-measure
|
</term>
, a 13 % relative decrease in
<term>
|
#8841
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,26-4-J05-1003,bq |
, without concerns about how these
<term>
|
features
|
</term>
interact or overlap and without the
|
#8736
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,23-9-J05-1003,bq |
of the feature space
</term>
in the
<term>
|
parsing data
|
</term>
. Experiments show significant efficiency
|
#8884
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
tech,30-12-J05-1003,bq |
</term>
which are naturally framed as
<term>
|
ranking tasks
|
</term>
, for example ,
<term>
speech recognition
|
#8966
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
lr-prod,8-6-J05-1003,bq |
method
</term>
to
<term>
parsing
</term>
the
<term>
|
Wall Street Journal treebank
|
</term>
. The
<term>
method
</term>
combined
|
#8794
We apply the boosting method to parsing the Wall Street Journal treebank. |
tech,11-1-J05-1003,bq |
which rerank the output of an existing
<term>
|
probabilistic parser
|
</term>
. The base
<term>
parser
</term>
produces
|
#8660
This article considers approaches which rerank the output of an existing probabilistic parser. |
tech,2-2-J05-1003,bq |
probabilistic parser
</term>
. The base
<term>
|
parser
|
</term>
produces a set of
<term>
candidate
|
#8665
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,8-10-J05-1003,bq |
significant efficiency gains for the new
<term>
|
algorithm
|
</term>
over the obvious
<term>
implementation
|
#8895
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,16-5-J05-1003,bq |
the
<term>
boosting approach
</term>
to
<term>
|
ranking problems
|
</term>
described in
<term>
Freund et al. (
|
#8775
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,21-11-J05-1003,bq |
simplicity and efficiency — to work on
<term>
|
feature selection methods
|
</term>
within
<term>
log-linear ( maximum-entropy
|
#8926
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
tech,9-9-J05-1003,bq |
a new
<term>
algorithm
</term>
for the
<term>
|
boosting approach
|
</term>
which takes advantage of the
<term>
|
#8870
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
measure(ment),14-8-J05-1003,bq |
</term>
, a 13 % relative decrease in
<term>
|
F-measure
|
</term>
error over the
<term>
baseline model
|
#8849
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,12-10-J05-1003,bq |
<term>
algorithm
</term>
over the obvious
<term>
|
implementation
|
</term>
of the
<term>
boosting approach
</term>
|
#8899
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
tech,25-11-J05-1003,bq |
feature selection methods
</term>
within
<term>
|
log-linear ( maximum-entropy ) models
|
</term>
. Although the experiments in this
|
#8930
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |