type,id,snippet
tech,14-1-N03-1004,learning </term> and other areas of <term> natural language processing </term> , we developed a <term>
other,9-3-N03-1017,results , which hold for all examined <term> language pairs </term> , suggest that the highest
tech,6-1-N03-2003,<term> training data </term> suitable for <term> language modeling </term> of <term> conversational speech
model,28-1-N03-2006,corpus </term> and , in addition , the <term> language model </term> of an in-domain <term> monolingual
model,27-3-N03-2006,</term> and the possibility of using the <term> language model </term> . We describe a simple <term>
model,11-3-N03-2036,model </term> and a <term> word-based trigram language model </term> . During <term> training </term>
tech,11-1-N03-3010,Cooperative Model </term> for <term> natural language understanding </term> in a <term> dialogue
tech,5-3-N03-3010,</term> provides two strategies for <term> language understanding </term> and have a high accuracy
tech,27-2-N03-4004,languages </term> by leveraging <term> human language technology </term> . The <term> JAVELIN system
tech,13-1-N03-4010,architecture </term> with a variety of <term> language processing modules </term> to provide an <term>
other,13-1-P03-1005,Kernel </term> for <term> structured natural language data </term> . The <term> HDAG Kernel </term>
other,16-5-P03-1050,the approach is applicable to any <term> language </term> that needs <term> affix removal </term>
model,4-3-P03-1051,<term> algorithm </term> uses a <term> trigram language model </term> to determine the most probable
model,1-4-P03-1051,for a given <term> input </term> . The <term> language model </term> is initially estimated from
other,30-7-P03-1051,manually segmented corpus </term> of the <term> language </term> of interest . A central problem of
other,8-1-C04-1103,role in many <term> multilingual speech and language applications </term> . In this paper , a
other,11-4-C04-1103,<term> English/Chinese and English/Japanese language pairs </term> . Our study reveals that the
other,11-5-C04-1147,terabyte corpus </term> to answer <term> natural language tests </term> , achieving encouraging results
other,31-3-N04-1022,parse-trees </term> of <term> source and target language sentences </term> . We report the performance
other,10-2-I05-2014,scarcely used for the assessment of <term> language pairs </term> like <term> English-Chinese </term>