other,37-4-J05-1003,bq |
The strength of our
<term>
approach
</term>
is that it allows a
<term>
tree
</term>
to be represented as an arbitrary set of
<term>
features
</term>
, without concerns about how these
<term>
features
</term>
interact or overlap and without the need to define a
<term>
derivation
</term>
or a
<term>
generative model
</term>
which takes these
<term>
features
</term>
into account .
|
#8747
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
tech,7-5-J05-1003,bq |
We introduce a new
<term>
method
</term>
for the
<term>
reranking task
</term>
, based on the
<term>
boosting approach
</term>
to
<term>
ranking problems
</term>
described in
<term>
Freund et al. ( 1998 )
</term>
.
|
#8766
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,8-12-J05-1003,bq |
Although the experiments in this article are on
<term>
natural language parsing ( NLP )
</term>
, the
<term>
approach
</term>
should be applicable to many other
<term>
NLP problems
</term>
which are naturally framed as
<term>
ranking tasks
</term>
, for example ,
<term>
speech recognition
</term>
,
<term>
machine translation
</term>
, or
<term>
natural language generation
</term>
.
|
#8944
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,2-8-J05-1003,bq |
The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure
</term>
error over the
<term>
baseline model ’s score
</term>
of 88.2 % .
|
#8837
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
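A quick arithmetic check of the figures in this record (a sketch in Python; the variable names are illustrative, not from the article): F-measure error is 100% minus F-measure, and the quoted 13% relative reduction follows directly from the two scores.

baseline_f, new_f = 88.2, 89.75
baseline_err = 100.0 - baseline_f          # 11.8 points of error
new_err = 100.0 - new_f                    # 10.25 points of error
rel_decrease = (baseline_err - new_err) / baseline_err
print(f"{rel_decrease:.1%}")               # 13.1%, consistent with the quoted 13%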
other,14-3-J05-1003,bq |
A second
<term>
model
</term>
then attempts to improve upon this initial
<term>
ranking
</term>
, using additional
<term>
features
</term>
of the
<term>
tree
</term>
as evidence .
|
#8703
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
tech,15-10-J05-1003,bq |
Experiments show significant efficiency gains for the new
<term>
algorithm
</term>
over the obvious
<term>
implementation
</term>
of the
<term>
boosting approach
</term>
.
|
#8902
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
measure(ment),6-8-J05-1003,bq |
The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure
</term>
error over the
<term>
baseline model ’s score
</term>
of 88.2 % .
|
#8841
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,26-4-J05-1003,bq |
The strength of our
<term>
approach
</term>
is that it allows a
<term>
tree
</term>
to be represented as an arbitrary set of
<term>
features
</term>
, without concerns about how these
<term>
features
</term>
interact or overlap and without the need to define a
<term>
derivation
</term>
or a
<term>
generative model
</term>
which takes these
<term>
features
</term>
into account .
|
#8736
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
other,23-9-J05-1003,bq |
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity of the feature space
</term>
in the
<term>
parsing data
</term>
.
|
#8884
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
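The sparsity idea in this record lends itself to a short sketch (hypothetical Python, not the article's actual algorithm): when a boosting round updates the weight of a single feature, only the examples in which that feature fires need to be rescored, so an inverted index from features to examples replaces a full pass over the data.

from collections import defaultdict

def build_feature_index(examples):
    # Map each feature to the examples in which it fires.
    index = defaultdict(list)
    for ex in examples:
        for f in ex["features"]:
            index[f].append(ex)
    return index

def apply_weight_update(index, scores, feature, delta):
    # scores: dict from example id to current model score (pre-initialized).
    # Rescore only the (typically few) examples containing the feature.
    for ex in index[feature]:
        scores[ex["id"]] += delta * ex["features"][feature]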
tech,30-12-J05-1003,bq |
Although the experiments in this article are on
<term>
natural language parsing ( NLP )
</term>
, the
<term>
approach
</term>
should be applicable to many other
<term>
NLP problems
</term>
which are naturally framed as
<term>
ranking tasks
</term>
, for example ,
<term>
speech recognition
</term>
,
<term>
machine translation
</term>
, or
<term>
natural language generation
</term>
.
|
#8966
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
lr-prod,8-6-J05-1003,bq |
We apply the
<term>
boosting method
</term>
to
<term>
parsing
</term>
the
<term>
Wall Street Journal treebank
</term>
.
|
#8794
We apply the boosting method to parsing the Wall Street Journal treebank. |
tech,11-1-J05-1003,bq |
This article considers approaches which rerank the output of an existing
<term>
probabilistic parser
</term>
.
|
#8660
This article considers approaches which rerank the output of an existing probabilistic parser. |
tech,2-2-J05-1003,bq |
The base
<term>
parser
</term>
produces a set of
<term>
candidate parses
</term>
for each input
<term>
sentence
</term>
, with associated
<term>
probabilities
</term>
that define an initial
<term>
ranking
</term>
of these
<term>
parses
</term>
.
|
#8665
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
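The setup in this record, together with the "second model" record above, can be sketched as follows (hypothetical Python; the Candidate fields and reranker weights are illustrative): the base parser's probabilities give the initial ranking, and the reranker adds a weighted score over arbitrary, possibly overlapping features of each tree.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    tree: str                                     # the candidate parse
    log_prob: float                               # base parser's log probability
    features: dict = field(default_factory=dict)  # arbitrary tree features

def initial_ranking(candidates):
    # The base parser's probabilities define the initial ranking.
    return sorted(candidates, key=lambda c: c.log_prob, reverse=True)

def rerank(candidates, weights):
    # The second model adds feature evidence to the base log probability.
    def score(c):
        return c.log_prob + sum(weights.get(f, 0.0) * v
                                for f, v in c.features.items())
    return sorted(candidates, key=score, reverse=True)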
tech,8-10-J05-1003,bq |
Experiments show significant efficiency gains for the new
<term>
algorithm
</term>
over the obvious
<term>
implementation
</term>
of the
<term>
boosting approach
</term>
.
|
#8895
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
other,16-5-J05-1003,bq |
We introduce a new
<term>
method
</term>
for the
<term>
reranking task
</term>
, based on the
<term>
boosting approach
</term>
to
<term>
ranking problems
</term>
described in
<term>
Freund et al. ( 1998 )
</term>
.
|
#8775
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,21-11-J05-1003,bq |
We argue that the method is an appealing alternative — in terms of both simplicity and efficiency — to work on
<term>
feature selection methods
</term>
within
<term>
log-linear ( maximum-entropy ) models
</term>
.
|
#8926
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
tech,9-9-J05-1003,bq |
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity of the feature space
</term>
in the
<term>
parsing data
</term>
.
|
#8870
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
measure(ment),14-8-J05-1003,bq |
The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure
</term>
error over the
<term>
baseline model ’s score
</term>
of 88.2 % .
|
#8849
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
other,12-10-J05-1003,bq |
Experiments show significant efficiency gains for the new
<term>
algorithm
</term>
over the obvious
<term>
implementation
</term>
of the
<term>
boosting approach
</term>
.
|
#8899
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
tech,25-11-J05-1003,bq |
We argue that the method is an appealing alternative — in terms of both simplicity and efficiency — to work on
<term>
feature selection methods
</term>
within
<term>
log-linear ( maximum-entropy ) models
</term>
.
|
#8930
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |