We address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM. Actually, the oracle acts like a dynamic combiner with hard decisions using the reference. We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
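As a rough illustration of the static combination methods mentioned above, the sketch below shows linear and log-linear interpolation of per-word LM probabilities. It is not taken from the paper: the weights `lambdas` are assumed to be tuned on held-out data, and the log-linear normalizer `z` is left as a parameter.

```python
# Minimal sketch (assumptions, not the authors' code): static combination of
# several LMs by linear and log-linear interpolation of word probabilities.
import math

def linear_interpolation(lm_probs, lambdas):
    """P(w|h) = sum_i lambda_i * P_i(w|h), with the lambdas summing to 1."""
    return sum(lam * p for lam, p in zip(lambdas, lm_probs))

def log_linear_interpolation(lm_probs, lambdas, z=1.0):
    """P(w|h) proportional to prod_i P_i(w|h)^lambda_i; `z` stands for the
    normalization constant over the vocabulary, not computed in this sketch."""
    return math.exp(sum(lam * math.log(p) for lam, p in zip(lambdas, lm_probs))) / z

# Example: two LMs assign 0.02 and 0.08 to the same word, with equal weights.
print(linear_interpolation([0.02, 0.08], [0.5, 0.5]))      # 0.05
print(log_linear_interpolation([0.02, 0.08], [0.5, 0.5]))  # 0.04 (unnormalized)
```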
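The next sketch contrasts the oracle with the proposed dynamic combination, again as an assumed illustration rather than the authors' implementation: the oracle makes a hard decision per utterance by picking the hypothesis with the lowest word error rate against the reference, while the reference-free combiner picks the hypothesis whose LM received the highest confidence score (in the paper, such scores come from a neural network or a decision tree; here they are simply passed in).

```python
# Minimal sketch (assumptions, not from the paper): oracle selection by word
# error rate versus confidence-based selection without the reference.

def word_error_rate(hyp, ref):
    """Word-level Levenshtein distance divided by the reference length."""
    h, r = hyp.split(), ref.split()
    d = [[i + j if i * j == 0 else 0 for j in range(len(r) + 1)]
         for i in range(len(h) + 1)]
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            d[i][j] = min(d[i - 1][j] + 1,              # deletion
                          d[i][j - 1] + 1,              # insertion
                          d[i - 1][j - 1] + (h[i - 1] != r[j - 1]))  # substitution
    return d[len(h)][len(r)] / len(r)

def oracle_select(hypotheses, reference):
    """Hard decision using the reference: keep the lowest-WER hypothesis."""
    return min(hypotheses, key=lambda hyp: word_error_rate(hyp, reference))

def confidence_select(hypotheses, confidences):
    """Reference-free stand-in for the oracle: pick the hypothesis whose LM
    was tagged with the highest confidence measure."""
    return max(zip(hypotheses, confidences), key=lambda pair: pair[1])[0]
```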