In this paper, we address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
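As a minimal sketch of the two interpolation schemes mentioned above, assume each LM exposes a conditional probability P_i(w|h) for a word given its history; the weights below are illustrative, not values from the paper, and the log-linear score would need normalization over the vocabulary to be a proper distribution.

```python
import math

def linear_interpolation(lm_probs, weights):
    """Linear interpolation: P(w|h) = sum_i lambda_i * P_i(w|h)."""
    return sum(lam * p for lam, p in zip(weights, lm_probs))

def log_linear_interpolation(lm_probs, weights):
    """Log-linear interpolation (unnormalized): exp(sum_i lambda_i * log P_i(w|h))."""
    return math.exp(sum(lam * math.log(p) for lam, p in zip(weights, lm_probs)))

# Hypothetical example: two LMs assign 0.02 and 0.08 to the same word,
# combined with assumed weights 0.6 and 0.4.
probs = [0.02, 0.08]
weights = [0.6, 0.4]
print(linear_interpolation(probs, weights))      # 0.044
print(log_linear_interpolation(probs, weights))  # unnormalized score
```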
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM. Actually, the oracle acts like a dynamic combiner with hard decisions using the reference. We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further.
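A sketch of such an oracle, assuming one hypothesis per LM and word error rate as the performance measure: it simply compares each hypothesis against the reference and keeps the one with the fewest word errors. The example strings are made up for illustration.

```python
def word_errors(hyp, ref):
    """Word-level Levenshtein distance (substitutions + insertions + deletions)."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(h)][len(r)]

def oracle_select(hypotheses, reference):
    """Hard decision using the reference: keep the hypothesis (one per LM)
    with the fewest word errors."""
    return min(hypotheses, key=lambda hyp: word_errors(hyp, reference))

# Hypothetical hypotheses produced by three different LMs:
hyps = ["show me the flights to boston",
        "show me the flight to austin",
        "show me flights boston"]
ref = "show me the flights to boston"
print(oracle_select(hyps, ref))  # the exact match wins with 0 errors
```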
We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
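The following is one possible reading of that idea, not the paper's exact setup: a decision tree is trained to predict which LM the oracle would have chosen, using simple per-LM features (here, only the log-probability each LM assigns to its own best hypothesis); the features, training data, and scikit-learn classifier are all assumptions made for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: one row per utterance, one feature per LM
# (log-probability of that LM's best hypothesis); the label is the index
# of the LM whose hypothesis the oracle picked (lowest error rate).
X_train = [
    [-12.3, -15.1, -14.0],
    [-18.7, -11.2, -16.5],
    [-10.9, -13.4, -9.8],
    [-14.2, -14.8, -13.1],
]
y_train = [0, 1, 2, 2]  # illustrative oracle choices

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

def pick_hypothesis(per_lm_features, hypotheses):
    """Tag each LM with a learned confidence and return the hypothesis of the
    LM the classifier believes the oracle would have selected."""
    best_lm = int(clf.predict([per_lm_features])[0])
    return hypotheses[best_lm]

print(pick_hypothesis([-13.0, -12.5, -11.7],
                      ["hyp from LM 0", "hyp from LM 1", "hyp from LM 2"]))
```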