tech,8-1-P05-1069,ak
In this paper , we present a novel
<term>
training method
</term>
for a
<term>
localized phrase-based prediction model
</term>
for
<term>
statistical machine translation ( SMT )
</term>
.
#9937In this paper, we present a novel training method for a localized phrase-based prediction model for statistical machine translation (SMT).
model,12-1-P05-1069,ak
In this paper , we present a novel
<term>
training method
</term>
for a
<term>
localized phrase-based prediction model
</term>
for
<term>
statistical machine translation ( SMT )
</term>
.
#9941In this paper, we present a novel training method for a localized phrase-based prediction model for statistical machine translation (SMT).
tech,17-1-P05-1069,ak
In this paper , we present a novel
<term>
training method
</term>
for a
<term>
localized phrase-based prediction model
</term>
for
<term>
statistical machine translation ( SMT )
</term>
.
#9946In this paper, we present a novel training method for a localized phrase-based prediction model for statistical machine translation (SMT).
model,1-2-P05-1069,ak
The
<term>
model
</term>
predicts
<term>
blocks with orientation
</term>
to handle
<term>
local phrase re-ordering
</term>
.
#9954The model predicts blocks with orientation to handle local phrase re-ordering.
other,3-2-P05-1069,ak
The
<term>
model
</term>
predicts
<term>
blocks with orientation
</term>
to handle
<term>
local phrase re-ordering
</term>
.
#9956The model predicts blocks with orientation to handle local phrase re-ordering.
tech,8-2-P05-1069,ak
The
<term>
model
</term>
predicts
<term>
blocks with orientation
</term>
to handle
<term>
local phrase re-ordering
</term>
.
#9961The model predicts blocks with orientation to handle local phrase re-ordering.
model,9-3-P05-1069,ak
We use a
<term>
maximum likelihood criterion
</term>
to train a
<term>
log-linear block bigram model
</term>
which uses
<term>
real-valued features
</term>
( e.g. a
<term>
language model score
</term>
) as well as
<term>
binary features
</term>
based on the
<term>
block identities
</term>
themselves , e.g.
<term>
block bigram features
</term>
.
#9974We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
other,15-3-P05-1069,ak
We use a
<term>
maximum likelihood criterion
</term>
to train a
<term>
log-linear block bigram model
</term>
which uses
<term>
real-valued features
</term>
( e.g. a
<term>
language model score
</term>
) as well as
<term>
binary features
</term>
based on the
<term>
block identities
</term>
themselves , e.g.
<term>
block bigram features
</term>
.
#9980We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
measure(ment),20-3-P05-1069,ak
We use a
<term>
maximum likelihood criterion
</term>
to train a
<term>
log-linear block bigram model
</term>
which uses
<term>
real-valued features
</term>
( e.g. a
<term>
language model score
</term>
) as well as
<term>
binary features
</term>
based on the
<term>
block identities
</term>
themselves , e.g.
<term>
block bigram features
</term>
.
#9985We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
other,27-3-P05-1069,ak
We use a
<term>
maximum likelihood criterion
</term>
to train a
<term>
log-linear block bigram model
</term>
which uses
<term>
real-valued features
</term>
( e.g. a
<term>
language model score
</term>
) as well as
<term>
binary features
</term>
based on the
<term>
block identities
</term>
themselves , e.g.
<term>
block bigram features
</term>
.
#9992We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
other,32-3-P05-1069,ak
We use a
<term>
maximum likelihood criterion
</term>
to train a
<term>
log-linear block bigram model
</term>
which uses
<term>
real-valued features
</term>
( e.g. a
<term>
language model score
</term>
) as well as
<term>
binary features
</term>
based on the
<term>
block identities
</term>
themselves , e.g.
<term>
block bigram features
</term>
.
#9997We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
other,37-3-P05-1069,ak
We use a
<term>
maximum likelihood criterion
</term>
to train a
<term>
log-linear block bigram model
</term>
which uses
<term>
real-valued features
</term>
( e.g. a
<term>
language model score
</term>
) as well as
<term>
binary features
</term>
based on the
<term>
block identities
</term>
themselves , e.g.
<term>
block bigram features
</term>
.
#10002We use a maximum likelihood criterion to train a log-linear block bigram model which uses real-valued features (e.g. a language model score) as well as binary features based on the block identities themselves, e.g. block bigram features.
tech,1-4-P05-1069,ak
Our
<term>
training algorithm
</term>
can easily handle millions of
<term>
features
</term>
.
#10007Our training algorithm can easily handle millions of features.
other,8-4-P05-1069,ak
Our
<term>
training algorithm
</term>
can easily handle millions of
<term>
features
</term>
.
#10014Our training algorithm can easily handle millions of features.
other,10-5-P05-1069,ak
The best system obtains a 18.6 % improvement over the
<term>
baseline
</term>
on a standard
<term>
Arabic-English translation task
</term>
.
#10026The best system obtains a 18.6% improvement over the baseline on a standard Arabic-English translation task.
other,14-5-P05-1069,ak
The best system obtains a 18.6 % improvement over the
<term>
baseline
</term>
on a standard
<term>
Arabic-English translation task
</term>
.
#10030The best system obtains a 18.6% improvement over the baseline on a standard Arabic-English translation task.