#1193 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
measure(ment),7-7-H01-1058,ak
amounts to tagging
<term>
LMs
</term>
with
<term>
confidence measures
</term>
and picking the best
<term>
hypothesis
</term>
#1179 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
tech,17-6-H01-1058,ak
using a
<term>
neural network
</term>
or a
<term>
decision tree
</term>
. The method amounts to tagging
#1169 We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree.
tech,7-4-H01-1058,ak
the
<term>
oracle
</term>
acts like a
<term>
dynamic combiner
</term>
with hard decisions using the
#1122 Actually, the oracle acts like a dynamic combiner with hard decisions using the reference.
tech,11-5-H01-1058,ak
results that clearly show the need for a
<term>
dynamic language model combination
</term>
to improve the
<term>
performance
</term>
#1142 We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further.
other,13-7-H01-1058,ak
measures
</term>
and picking the best
<term>
hypothesis
</term>
corresponding to the
<term>
LM
</term>
#1185 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
tech,4-2-H01-1058,ak
LMs )
</term>
. We find that simple
<term>
interpolation methods
</term>
, like
<term>
log-linear and linear interpolation
</term>
#1048 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
model,11-1-H01-1058,ak
address the problem of combining several
<term>
language models ( LMs )
</term>
. We find that simple
<term>
interpolation methods
</term>
#1038 In this paper, we address the problem of combining several language models ( LMs ).
model,43-3-H01-1058,ak
been obtained by using a different
<term>
LM
</term>
. Actually , the
<term>
oracle
</term>
#1113 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
model,17-7-H01-1058,ak
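The oracle described in the sentence above can be sketched as a selector that knows the reference and keeps the word string with the fewest word errors. This is an illustrative sketch, not code from H01-1058; the word-error computation is a plain Levenshtein distance over words, and the example strings are invented.

```python
def word_errors(hyp: str, ref: str) -> int:
    # Levenshtein distance over words: substitutions + insertions + deletions,
    # computed with a single rolling DP row.
    h, r = hyp.split(), ref.split()
    d = list(range(len(r) + 1))
    for i in range(1, len(h) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(r) + 1):
            cur = min(d[j] + 1,                      # delete h[i-1]
                      d[j - 1] + 1,                  # insert r[j-1]
                      prev + (h[i - 1] != r[j - 1])) # match / substitute
            prev, d[j] = d[j], cur
    return d[len(r)]

def oracle_select(hypotheses: list[str], ref: str) -> str:
    # The oracle knows the reference and picks the hypothesis with the
    # fewest word errors, i.e. the output of the best LM for this utterance.
    return min(hypotheses, key=lambda h: word_errors(h, ref))
```

Replacing word error rate with a semantic error rate, as the sentence allows, only changes the scoring function passed to `min`.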
hypothesis
</term>
corresponding to the
<term>
LM
</term>
with the best
<term>
confidence
</term>
#1189 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
model,5-7-H01-1058,ak
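The selection rule in the sentence above amounts to an argmax over per-LM confidence scores. In the paper those scores come from a neural network or decision tree that mimics the oracle; the confidences and hypotheses below are made up purely for illustration.

```python
def pick_best(hypotheses: list[str], confidences: list[float]) -> str:
    # Tag each LM's hypothesis with its confidence and keep the hypothesis
    # whose LM received the highest confidence (hard decision, no reference).
    best = max(range(len(confidences)), key=confidences.__getitem__)
    return hypotheses[best]

# One hypothesis per LM, with a hypothetical confidence per LM.
hyps = ["he reads a book", "he read a book", "he reads the book"]
conf = [0.41, 0.77, 0.55]
```

Unlike the oracle, this combiner needs no reference word string at decision time, which is what makes it usable outside evaluation.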
</term>
. The method amounts to tagging
<term>
LMs
</term>
with
<term>
confidence measures
</term>
#1177 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
tech,8-2-H01-1058,ak
interpolation methods
</term>
, like
<term>
log-linear and linear interpolation
</term>
, improve the
<term>
performance
</term>
#1052 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
tech,13-6-H01-1058,ak
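The two interpolation schemes named in the sentence above combine per-LM word probabilities either in the probability domain or in the log domain. A minimal sketch of both rules follows; the probabilities and weights are invented for illustration and do not come from H01-1058.

```python
import math

def linear_interp(probs: list[float], weights: list[float]) -> float:
    # Linear interpolation: p(w) = sum_i lambda_i * p_i(w)
    return sum(l * p for l, p in zip(weights, probs))

def loglinear_interp(probs: list[float], weights: list[float]) -> float:
    # Log-linear interpolation: score(w) = prod_i p_i(w)^lambda_i
    # (unnormalized; a real LM would renormalize over the vocabulary).
    return math.exp(sum(l * math.log(p) for l, p in zip(weights, probs)))

# Two hypothetical LMs assign these probabilities to the same word.
probs = [0.2, 0.05]
weights = [0.7, 0.3]

lin = linear_interp(probs, weights)     # 0.7*0.2 + 0.3*0.05 = 0.155
loglin = loglinear_interp(probs, weights)
```

Both rules use the same fixed weights for every word, which is exactly why they fall short of the oracle's per-utterance (dynamic) choice of LM.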
behavior of the
<term>
oracle
</term>
using a
<term>
neural network
</term>
or a
<term>
decision tree
</term>
. The
#1165 We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree.
tech,24-2-H01-1058,ak
of the
<term>
performance
</term>
of an
<term>
oracle
</term>
. The
<term>
oracle
</term>
knows the
#1068 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
tech,1-3-H01-1058,ak
</term>
of an
<term>
oracle
</term>
. The
<term>
oracle
</term>
knows the
<term>
reference word string
</term>
#1071 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
tech,3-4-H01-1058,ak
different
<term>
LM
</term>
. Actually , the
<term>
oracle
</term>
acts like a
<term>
dynamic combiner
</term>
#1118 Actually, the oracle acts like a dynamic combiner with hard decisions using the reference.
tech,10-6-H01-1058,ak
method that mimics the behavior of the
<term>
oracle
</term>
using a
<term>
neural network
</term>
#1162 We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree.
measure(ment),15-2-H01-1058,ak
interpolation
</term>
, improve the
<term>
performance
</term>
but fall short of the
<term>
performance
</term>
#1059 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
measure(ment),21-2-H01-1058,ak
performance
</term>
but fall short of the
<term>
performance
</term>
of an
<term>
oracle
</term>
. The
#1065 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
measure(ment),15-3-H01-1058,ak
<term>
word string
</term>
with the best
<term>
performance
</term>
( typically ,
<term>
word or semantic error rate
</term>
#1085 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.