other,22-1-H01-1042,bq the <term> evaluation </term> of <term> human language learners </term> , to the <term> output </term>
other,12-2-H01-1042,bq provide information about both the <term> human language learning process </term> , the <term> translation
other,1-4-H01-1042,bq </term> of <term> MT output </term> . A <term> language learning experiment </term> showed that <term>
other,9-4-H01-1042,bq differentiate <term> native from non-native language essays </term> in less than 100 <term> words
tech,3-2-H01-1049,bq sources </term> . We integrate a <term> spoken language understanding system </term> with <term> intelligent
other,13-3-H01-1055,bq extensively studied by the <term> natural language generation community </term> , though rarely
model,11-1-H01-1058,bq address the problem of combining several <term> language models ( LMs ) </term> . We find that simple
tech,11-5-H01-1058,bq clearly show the need for a <term> dynamic language model combination </term> to improve the <term>
tech,17-1-H01-1070,bq key prediction </term> and <term> Thai-English language identification </term> . The paper also proposes
tech,10-3-H01-1070,bq than 99 % <term> accuracy </term> in both <term> language identification </term> and <term> key prediction
other,10-5-P01-1007,bq of the <term> main parser </term> for a <term> language L </term> are directed by a <term> guide </term>
tech,6-1-P01-1008,bq interpretation and generation of natural language </term> , current systems use manual or semi-automatic
tech,14-2-P01-1009,bq attention </term> , yet present <term> natural language search engines </term> perform poorly on <term>
tech,12-4-P01-1009,bq operational semantics </term> of <term> natural language applications </term> improve , even larger
tech,7-1-P01-1056,bq training </term> modules of a <term> natural language generator </term> have recently been proposed
other,11-4-N03-1001,bq evaluated on three different <term> spoken language system domains </term> . Motivated by the
tech,14-1-N03-1004,bq learning </term> and other areas of <term> natural language processing </term> , we developed a <term>
other,9-3-N03-1017,bq results , which hold for all examined <term> language pairs </term> , suggest that the highest
tech,6-1-N03-2003,bq <term> training data </term> suitable for <term> language modeling </term> of <term> conversational speech
model,28-1-N03-2006,bq corpus </term> and , in addition , the <term> language model </term> of an in-domain <term> monolingual