#1059 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
measure(ment),15-3-H01-1058,ak
<term>
word string
</term>
with the best
<term>
performance
</term>
( typically ,
<term>
word or semantic
#1085 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
measure(ment),18-5-H01-1058,ak
model combination
</term>
to improve the
<term>
performance
</term>
further . We suggest a method that
#1149 We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further.
measure(ment),19-3-H01-1058,ak
<term>
performance
</term>
( typically ,
<term>
word or semantic error rate
</term>
) from a list of
<term>
word strings
#1089 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
measure(ment),21-2-H01-1058,ak
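The oracle described in the sentence above can be sketched in code: given the reference word string, it picks, from the hypotheses produced by different LMs, the one with the lowest word error rate. This is a minimal illustration, not the paper's implementation; the function names, LM names, and example strings are all made up.

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance, normalized by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def oracle_select(reference, hypotheses_by_lm):
    """Return the (lm_name, hypothesis) pair with the lowest WER."""
    return min(hypotheses_by_lm.items(),
               key=lambda item: word_error_rate(reference, item[1]))

reference = "the cat sat on the mat"
hypotheses = {
    "trigram_lm": "the cat sat on a mat",    # 1 substitution -> WER 1/6
    "class_lm": "a cat sat on the mat hat",  # 1 sub + 1 insertion -> WER 2/6
}
best_lm, best_hyp = oracle_select(reference, hypotheses)
print(best_lm, best_hyp)  # trigram_lm the cat sat on a mat
```

Because the oracle needs the reference, it is only a performance upper bound; the paper's point is to approximate this selection without the reference.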
performance
</term>
but fall short of the
<term>
performance
</term>
of an
<term>
oracle
</term>
. The
<term>
#1065 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
measure(ment),21-7-H01-1058,ak
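The static combination schemes named in the sentence above (linear and log-linear interpolation of LM probabilities) can be sketched as follows. The toy distributions, weights, and function names are illustrative assumptions, not values from the paper.

```python
import math

def linear_interp(dists, weights):
    """Linear interpolation: P(w) = sum_i lambda_i * P_i(w); weights sum to 1."""
    vocab = set().union(*dists)
    return {w: sum(lam * d.get(w, 0.0) for lam, d in zip(weights, dists))
            for w in vocab}

def log_linear_interp(dists, weights):
    """Log-linear interpolation: P(w) proportional to prod_i P_i(w)^lambda_i,
    renormalized over the vocabulary."""
    vocab = set().union(*dists)
    scores = {w: math.prod(d.get(w, 1e-12) ** lam
                           for lam, d in zip(weights, dists)) for w in vocab}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

p1 = {"cat": 0.6, "dog": 0.4}  # e.g. one LM's next-word distribution
p2 = {"cat": 0.2, "dog": 0.8}  # e.g. a second LM's distribution
print(linear_interp([p1, p2], [0.5, 0.5]))      # cat: 0.4, dog: 0.6
print(log_linear_interp([p1, p2], [0.5, 0.5]))  # normalized geometric mean
```

Both schemes use fixed weights for every word string, which is why they fall short of the oracle's per-utterance (dynamic) choice.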
to the
<term>
LM
</term>
with the best
<term>
confidence
</term>
. We describe a three-tiered approach
#1193 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
measure(ment),7-7-H01-1058,ak
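The confidence-tagging step in the sentence above reduces to an argmax once each LM carries a confidence score. In the paper the scores come from a neural network or decision tree; here they are supplied directly, and all names and numbers are hypothetical.

```python
def pick_by_confidence(tagged):
    """tagged: {lm_name: (hypothesis, confidence)} -> (lm_name, hypothesis)."""
    best_lm = max(tagged, key=lambda lm: tagged[lm][1])
    return best_lm, tagged[best_lm][0]

tagged = {
    "trigram_lm": ("the cat sat on a mat", 0.72),
    "class_lm": ("a cat sat on the mat", 0.55),
}
print(pick_by_confidence(tagged))  # ('trigram_lm', 'the cat sat on a mat')
```

Unlike the oracle, this selection needs no reference word string, only the learned confidence measures.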
amounts to tagging
<term>
LMs
</term>
with
<term>
confidence measures
</term>
and picking the best
<term>
hypothesis
#1179 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
model,11-1-H01-1058,ak
address the problem of combining several
<term>
language models ( LMs )
</term>
. We find that simple
<term>
interpolation
#1038 In this paper, we address the problem of combining several language models (LMs).
model,14-4-H01-1058,ak
</term>
with hard decisions using the
<term>
reference
</term>
. We provide experimental results
#1129 Actually, the oracle acts like a dynamic combiner with hard decisions using the reference.
model,17-7-H01-1058,ak
hypothesis
</term>
corresponding to the
<term>
LM
</term>
with the best
<term>
confidence
</term>
#1189 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
model,43-3-H01-1058,ak
been obtained by using a different
<term>
LM
</term>
. Actually , the
<term>
oracle
</term>
#1113 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
model,5-7-H01-1058,ak
</term>
. The method amounts to tagging
<term>
LMs
</term>
with
<term>
confidence measures
</term>
#1177 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
other,10-3-H01-1058,ak
word string
</term>
and selects the
<term>
word string
</term>
with the best
<term>
performance
</term>
#1080 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
other,13-7-H01-1058,ak
measures
</term>
and picking the best
<term>
hypothesis
</term>
corresponding to the
<term>
LM
</term>
#1185 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
other,29-3-H01-1058,ak
error rate
</term>
) from a list of
<term>
word strings
</term>
, where each
<term>
word string
</term>
#1099 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
other,34-3-H01-1058,ak
<term>
word strings
</term>
, where each
<term>
word string
</term>
has been obtained by using a different
#1104 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
other,4-3-H01-1058,ak
</term>
. The
<term>
oracle
</term>
knows the
<term>
reference word string
</term>
and selects the
<term>
word string
</term>
#1074 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
tech,1-3-H01-1058,ak
</term>
of an
<term>
oracle
</term>
. The
<term>
oracle
</term>
knows the
<term>
reference word string
#1071 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
tech,10-6-H01-1058,ak
method that mimics the behavior of the
<term>
oracle
</term>
using a
<term>
neural network
</term>
#1162 We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree.
tech,11-5-H01-1058,ak
results that clearly show the need for a
<term>
dynamic language model combination
</term>
to improve the
<term>
performance
</term>
#1142 We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further.