measure(ment),21-7-H01-1058,bq |
to the
<term>
LM
</term>
with the best
<term>
|
confidence
|
</term>
. We describe a three-tiered approach
|
#1193
The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence. |
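The confidence-tagging scheme described in this sentence (#1193) can be sketched as follows; the LM names, hypothesis strings, and confidence scores below are purely hypothetical, and the selection rule is a minimal illustration, not the paper's actual method:

```python
# Sketch: tag each LM's hypothesis with a confidence score and pick the
# hypothesis whose LM received the best confidence. All data is made up.
def pick_best_hypothesis(hypotheses):
    """hypotheses: list of (lm_name, hypothesis_string, confidence)."""
    return max(hypotheses, key=lambda h: h[2])

hyps = [
    ("lm_news", "recognize speech", 0.62),
    ("lm_dialog", "wreck a nice beach", 0.31),
]
best = pick_best_hypothesis(hyps)
# best is the ("lm_news", ...) entry, since 0.62 > 0.31
```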
measure(ment),7-7-H01-1058,bq |
amounts to tagging
<term>
LMs
</term>
with
<term>
|
confidence measures
|
</term>
and picking the best
<term>
hypothesis
|
#1179
The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence. |
tech,17-6-H01-1058,bq |
using a
<term>
neural network
</term>
or a
<term>
|
decision tree
|
</term>
. The method amounts to tagging
<term>
|
#1169
We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. |
tech,7-4-H01-1058,bq |
the
<term>
oracle
</term>
acts like a
<term>
|
dynamic combiner
|
</term>
with
<term>
hard decisions
</term>
using
|
#1122
Actually, the oracle acts like a dynamic combiner with hard decisions using the reference. |
tech,11-5-H01-1058,bq |
results that clearly show the need for a
<term>
|
dynamic language model combination
|
</term>
to improve the
<term>
performance
</term>
|
#1142
We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. |
other,10-4-H01-1058,bq |
a
<term>
dynamic combiner
</term>
with
<term>
|
hard decisions
|
</term>
using the
<term>
reference
</term>
.
|
#1125
Actually, the oracle acts like a dynamic combiner with hard decisions using the reference. |
other,13-7-H01-1058,bq |
measures
</term>
and picking the best
<term>
|
hypothesis
|
</term>
corresponding to the
<term>
LM
</term>
|
#1185
The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence. |
tech,4-2-H01-1058,bq |
LMs )
</term>
. We find that simple
<term>
|
interpolation methods
|
</term>
, like
<term>
log-linear and linear
|
#1048
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
model,11-1-H01-1058,bq |
address the problem of combining several
<term>
|
language models ( LMs )
|
</term>
. We find that simple
<term>
interpolation
|
#1038
In this paper, we address the problem of combining several language models ( LMs ). |
model,43-3-H01-1058,bq |
been obtained by using a different
<term>
|
LM
|
</term>
. Actually , the
<term>
oracle
</term>
|
#1113
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM. |
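The oracle selection described in this sentence (#1113) amounts to scoring each LM's word string against the reference and keeping the best one. A minimal sketch using word error rate as the performance measure (function names and candidate data are illustrative assumptions, not from the paper):

```python
def word_error_rate(ref, hyp):
    """Levenshtein distance over words, normalized by reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def oracle_select(reference, candidates):
    """candidates: list of (lm_name, word_string); keep the lowest-WER one."""
    return min(candidates, key=lambda c: word_error_rate(reference, c[1]))
```

Because the rule needs the reference, it is usable only for analysis, which is why the sentence calls it an oracle.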
model,17-7-H01-1058,bq |
hypothesis
</term>
corresponding to the
<term>
|
LM
|
</term>
with the best
<term>
confidence
</term>
|
#1189
The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence. |
model,5-7-H01-1058,bq |
</term>
. The method amounts to tagging
<term>
|
LMs
|
</term>
with
<term>
confidence measures
</term>
|
#1177
The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence. |
tech,8-2-H01-1058,bq |
interpolation methods
</term>
, like
<term>
|
log-linear and linear interpolation
|
</term>
, improve the
<term>
performance
</term>
|
#1052
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
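The two interpolation methods this sentence (#1052) names have standard textbook forms: linear interpolation averages the component LM probabilities with mixture weights, while log-linear interpolation multiplies them raised to the weights (yielding an unnormalized score). A minimal sketch of both, with made-up probabilities and weights:

```python
import math

def linear_interpolate(probs, lambdas):
    """P(w) = sum_i lambda_i * P_i(w); lambdas should sum to 1."""
    return sum(l * p for l, p in zip(lambdas, probs))

def log_linear_interpolate(probs, lambdas):
    """Score(w) proportional to prod_i P_i(w)**lambda_i (unnormalized)."""
    return math.exp(sum(l * math.log(p) for l, p in zip(lambdas, probs)))

# Two component LMs assign a word probabilities 0.2 and 0.4, equal weights:
p_lin = linear_interpolate([0.2, 0.4], [0.5, 0.5])      # 0.3
p_log = log_linear_interpolate([0.2, 0.4], [0.5, 0.5])  # sqrt(0.08)
```

Note both schemes use fixed weights for every word, which is the static behavior the abstract contrasts with dynamic combination.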
tech,13-6-H01-1058,bq |
behavior of the
<term>
oracle
</term>
using a
<term>
|
neural network
|
</term>
or a
<term>
decision tree
</term>
. The
|
#1165
We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. |
other,24-2-H01-1058,bq |
of the
<term>
performance
</term>
of an
<term>
|
oracle
|
</term>
. The
<term>
oracle
</term>
knows the
|
#1068
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
other,1-3-H01-1058,bq |
</term>
of an
<term>
oracle
</term>
. The
<term>
|
oracle
|
</term>
knows the
<term>
reference word string
|
#1071
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM. |
other,3-4-H01-1058,bq |
different
<term>
LM
</term>
. Actually , the
<term>
|
oracle
|
</term>
acts like a
<term>
dynamic combiner
|
#1118
Actually, the oracle acts like a dynamic combiner with hard decisions using the reference. |
other,10-6-H01-1058,bq |
method that mimics the behavior of the
<term>
|
oracle
|
</term>
using a
<term>
neural network
</term>
|
#1162
We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. |
measure(ment),15-2-H01-1058,bq |
interpolation
</term>
, improve the
<term>
|
performance
|
</term>
but fall short of the
<term>
performance
|
#1059
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
measure(ment),21-2-H01-1058,bq |
performance
</term>
but fall short of the
<term>
|
performance
|
</term>
of an
<term>
oracle
</term>
. The
<term>
|
#1065
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |