measure(ment),6-8-J05-1003,bq | <term> model </term> achieved 89.75 % <term> | F-measure | </term> , a 13 % relative decrease in <term> | #8841
The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%. |
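The 13% in #8841 is a relative reduction in F-measure error, not in the F-measure itself. The arithmetic below, reconstructed from the two scores quoted in the sentence, is a consistency check on that figure:

\[
\frac{(100 - 88.2) - (100 - 89.75)}{100 - 88.2} = \frac{11.8 - 10.25}{11.8} \approx 0.131 \approx 13\%.
\]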
tech,7-5-J05-1003,bq | introduce a new <term> method </term> for the <term> | reranking task | </term> , based on the <term> boosting approach | #8766
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
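Sentence #8766 invokes the boosting approach to ranking problems of Freund et al. (1998). As a rough Python sketch of what a pairwise ranking objective of this family looks like (the function and variable names are ours, and this generic pairwise exponential loss stands in for the paper's exact formulation):

import math

def exp_rank_loss(score, candidates):
    # Pairwise exponential ranking loss for one sentence's candidates.
    # By convention candidates[0] is the reference-best parse; any other
    # candidate that scores near or above it contributes a large penalty.
    best, rest = candidates[0], candidates[1:]
    return sum(math.exp(-(score(best) - score(c))) for c in rest)

A boosting round would then update the single feature weight that most reduces this loss summed over all training sentences.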
tech,30-12-J05-1003,bq | </term> which are naturally framed as <term> | ranking tasks | </term> , for example , <term> speech recognition | #8966
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,36-12-J05-1003,bq | ranking tasks </term> , for example , <term> | speech recognition | </term> , <term> machine translation </term> | #8972
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
tech,39-12-J05-1003,bq |
,
<term>
speech recognition
</term>
,
<term>
|
machine translation
|
</term>
, or
<term>
natural language generation
|
#8975
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition,machine translation, or natural language generation. |
tech,8-12-J05-1003,bq | experiments in this article are on <term> | natural language parsing ( NLP ) | </term> , the <term> approach </term> should | #8944
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
other,10-3-J05-1003,bq | attempts to improve upon this initial <term> | ranking | </term> , using additional <term> features </term> | #8699
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. |
other,12-2-J05-1003,bq | candidate parses </term> for each input <term> | sentence | </term> , with associated <term> probabilities | #8675
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
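Sentences #8660 and #8675 describe the two-stage architecture: a base parser emits candidate parses with probabilities, and those probabilities alone define the initial ranking that a second model then tries to improve. A minimal sketch of that data flow, with invented names:

from dataclasses import dataclass

@dataclass
class Candidate:
    tree: str                          # bracketed parse string
    log_prob: float                    # log-probability under the base parser
    features: frozenset = frozenset()  # extra evidence for the reranker

def initial_ranking(candidates):
    # The base parser's probabilities define this ordering; the
    # second-stage model reranks it using the feature evidence.
    return sorted(candidates, key=lambda c: c.log_prob, reverse=True)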
other,19-4-J05-1003,bq | represented as an arbitrary set of <term> | features | </term> , without concerns about how these | #8729
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. |
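Sentence #8729 makes the representational point: a tree is treated as an arbitrary set of indicator features that may overlap freely, with no derivation or generative model over them. A self-contained sketch over trees written as nested tuples; the two feature templates are illustrative, not the paper's:

def tree_features(tree):
    # tree is a nested tuple such as
    # ("S", ("NP", ("DT", "the"), ("NN", "parser")), ("VP", ("VBZ", "runs")))
    label, children = tree[0], tree[1:]
    feats = {("label", label)}
    if children and isinstance(children[0], tuple):
        # Overlapping templates are fine: rule features and label
        # features can share material without any double-counting issue.
        feats.add(("rule", label, tuple(c[0] for c in children)))
        for child in children:
            feats |= tree_features(child)
    return feats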
other,24-2-J05-1003,bq | initial <term> ranking </term> of these <term> | parses | </term> . A second <term> model </term> then | #8687
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
tech,25-11-J05-1003,bq | feature selection methods </term> within <term> | log-linear ( maximum-entropy ) models | </term> . Although the experiments in this | #8930
We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
other,23-9-J05-1003,bq | of the feature space </term> in the <term> | parsing data | </term> . Experiments show significant efficiency | #8884
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
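Sentences #8884 and #8902 concern an algorithm that exploits feature sparsity: each parse activates only a handful of the half-million features, so updating one feature's weight need only touch the candidates that actually contain it. A sketch of the inverted index that makes such updates cheap (names are ours):

from collections import defaultdict

def index_by_feature(candidates):
    # Maps feature -> list of candidates containing it. A boosting
    # update to feature f then visits only index[f] rather than
    # scanning every candidate, which is where the efficiency gain
    # over the obvious implementation comes from.
    index = defaultdict(list)
    for cand in candidates:
        for f in cand.features:
            index[f].append(cand)
    return index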
tech,11-1-J05-1003,bq | which rerank the output of an existing <term> | probabilistic parser | </term> . The base <term> parser </term> produces | #8660
This article considers approaches which rerank the output of an existing probabilistic parser. |
lr-prod,8-6-J05-1003,bq | method </term> to <term> parsing </term> the <term> | Wall Street Journal treebank | </term> . The <term> method </term> combined | #8794
We apply the boosting method to parsing the Wall Street Journal treebank. |
model,34-7-J05-1003,bq | were not included in the original <term> | model | </term> . The new <term> model </term> achieved | #8833
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
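Sentences #8806 and #8833 state that the method combined the baseline model's log-likelihood with evidence from roughly 500,000 additional features over parse trees. One standard way to write such a combined reranking score (notation assumed here, not quoted from the paper) is

\[
F(x) \;=\; \lambda_0 \log p_{\mathrm{base}}(x) \;+\; \sum_{k=1}^{K} \lambda_k f_k(x), \qquad K \approx 500{,}000,
\]

where f_k(x) indicates whether feature k fires on parse x and the weights \lambda are learned.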
other,20-5-J05-1003,bq | ranking problems </term> described in <term> | Freund et al. ( 1998 ) | </term> . We apply the <term> boosting method | #8779
We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). |
tech,15-10-J05-1003,bq | obvious <term> implementation </term> of the <term> | boosting approach | </term> . We argue that the method is an | #8902
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
tech,43-12-J05-1003,bq |
<term>
machine translation
</term>
, or
<term>
|
natural language generation
|
</term>
. We present a novel
<term>
method
</term>
|
#8979
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, ornatural language generation. |
model,7-7-J05-1003,bq | <term> log-likelihood </term> under a <term> | baseline model | </term> ( that of <term> Collins [ 1999 ] </term> | #8806
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |
other,12-7-J05-1003,bq | <term> baseline model </term> ( that of <term> | Collins [ 1999 ] | </term> ) with evidence from an additional | #8811
The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. |