Negative filter: probabilistic parser, 43 (1,376.7 per million)
other,4-7-J05-1003,ak
The method combined the
<term>
log-likelihood under a baseline model
</term>
( that of Collins [ 1999 ] ) with evidence from an additional 500,000
<term>
features
</term>
over
<term>
parse trees
</term>
that were not included in the original
<term>
model
</term>
.
#8168 The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
other,23-7-J05-1003,ak
The method combined the
<term>
log-likelihood under a baseline model
</term>
( that of Collins [ 1999 ] ) with evidence from an additional 500,000
<term>
features
</term>
over
<term>
parse trees
</term>
that were not included in the original
<term>
model
</term>
.
#8187 The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
other,25-7-J05-1003,ak
The method combined the
<term>
log-likelihood under a baseline model
</term>
( that of Collins [ 1999 ] ) with evidence from an additional 500,000
<term>
features
</term>
over
<term>
parse trees
</term>
that were not included in the original
<term>
model
</term>
.
#8189 The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
model,34-7-J05-1003,ak
The method combined the
<term>
log-likelihood under a baseline model
</term>
( that of Collins [ 1999 ] ) with evidence from an additional 500,000
<term>
features
</term>
over
<term>
parse trees
</term>
that were not included in the original
<term>
model
</term>
.
#8198 The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model.
model,2-8-J05-1003,ak
The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure error
</term>
over the
<term>
baseline model ’s
</term>
score of 88.2 % .
#8202 The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%.
measure(ment),6-8-J05-1003,ak
The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure error
</term>
over the
<term>
baseline model ’s
</term>
score of 88.2 % .
#8206 The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%.
measure(ment),14-8-J05-1003,ak
The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure error
</term>
over the
<term>
baseline model ’s
</term>
score of 88.2 % .
#8214 The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%.
model,18-8-J05-1003,ak
The new
<term>
model
</term>
achieved 89.75 %
<term>
F-measure
</term>
, a 13 % relative decrease in
<term>
F-measure error
</term>
over the
<term>
baseline model ’s
</term>
score of 88.2 % .
#8218 The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%.
tech,6-9-J05-1003,ak
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity
</term>
of the
<term>
feature space
</term>
in the
<term>
parsing data
</term>
.
#8232 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
tech,9-9-J05-1003,ak
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity
</term>
of the
<term>
feature space
</term>
in the
<term>
parsing data
</term>
.
#8235 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
other,16-9-J05-1003,ak
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity
</term>
of the
<term>
feature space
</term>
in the
<term>
parsing data
</term>
.
#8242 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
other,19-9-J05-1003,ak
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity
</term>
of the
<term>
feature space
</term>
in the
<term>
parsing data
</term>
.
#8245 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
other,23-9-J05-1003,ak
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
which takes advantage of the
<term>
sparsity
</term>
of the
<term>
feature space
</term>
in the
<term>
parsing data
</term>
.
#8249 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
tech,8-10-J05-1003,ak
Experiments show significant efficiency gains for the new
<term>
algorithm
</term>
over the obvious implementation of the
<term>
boosting approach
</term>
.
#8260 Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach.
tech,15-10-J05-1003,ak
Experiments show significant efficiency gains for the new
<term>
algorithm
</term>
over the obvious implementation of the
<term>
boosting approach
</term>
.
#8267 Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach.
tech,21-11-J05-1003,ak
We argue that the method is an appealing alternative — in terms of both simplicity and efficiency — to work on
<term>
feature selection methods
</term>
within
<term>
log-linear ( maximum-entropy ) models
</term>
.
#8291 We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models.
model,25-11-J05-1003,ak
We argue that the method is an appealing alternative — in terms of both simplicity and efficiency — to work on
<term>
feature selection methods
</term>
within
<term>
log-linear ( maximum-entropy ) models
</term>
.
#8295 We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models.
tech,8-12-J05-1003,ak
Although the experiments in this article are on
<term>
natural language parsing ( NLP )
</term>
, the approach should be applicable to many other
<term>
NLP problems
</term>
which are naturally framed as
<term>
ranking tasks
</term>
, for example ,
<term>
speech recognition
</term>
,
<term>
machine translation
</term>
, or
<term>
natural language generation
</term>
.
#8309 Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.
tech,23-12-J05-1003,ak
Although the experiments in this article are on
<term>
natural language parsing ( NLP )
</term>
, the approach should be applicable to many other
<term>
NLP problems
</term>
which are naturally framed as
<term>
ranking tasks
</term>
, for example ,
<term>
speech recognition
</term>
,
<term>
machine translation
</term>
, or
<term>
natural language generation
</term>
.
#8324 Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.
other,30-12-J05-1003,ak
Although the experiments in this article are on
<term>
natural language parsing ( NLP )
</term>
, the approach should be applicable to many other
<term>
NLP problems
</term>
which are naturally framed as
<term>
ranking tasks
</term>
, for example ,
<term>
speech recognition
</term>
,
<term>
machine translation
</term>
, or
<term>
natural language generation
</term>
.
#8331 Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.