tech,11-5-H01-1058,bq show the need for a <term> dynamic language model combination </term> to improve the <term> performance
model,12-3-N03-1001,bq first used to train a <term> phone n-gram model </term> for a particular <term> domain </term>
model,4-1-N03-1017,bq propose a new <term> phrase-based translation model </term> and <term> decoding algorithm </term>
model,7-1-N03-1018,bq probabilistic optical character recognition ( OCR ) model </term> that describes an end-to-end process
model,23-2-N03-1026,bq forests </term> , and a <term> maximum-entropy model </term> for <term> stochastic output selection
model,28-1-N03-2006,bq </term> and , in addition , the <term> language model </term> of an in-domain <term> monolingual
model,7-1-N03-2036,bq we describe a <term> phrase-based unigram model </term> for <term> statistical machine translation
model,16-2-P03-1033,bq kinds of <term> users </term> , the <term> user model </term> we propose is more comprehensive
model,1-2-P03-1050,bq Arabic ) stemmer </term> . The <term> stemming model </term> is based on <term> statistical machine
model,8-1-P03-1051,bq Arabic 's rich morphology </term> by a <term> model </term> that a <term> word </term> consists of
model,5-3-C04-1103,bq <term> joint source-channel transliteration model </term> , also called <term> n-gram transliteration
tech,2-4-C04-1112,bq algorithm . Testing the <term> lemma-based model </term> on the <term> Dutch SENSEVAL-2 test
model,20-2-C04-1147,bq <term> terms </term> , an <term> independence model </term> , and a <term> parametric affinity
tech,19-4-N04-4028,bq field ( CRF ) </term> , a <term> probabilistic model </term> which has performed well on <term>
model,1-3-H05-1012,bq performance </term> . The <term> probabilistic model </term> used in the <term> alignment </term>
tech,1-3-H05-1095,bq proposed . A <term> statistical translation model </term> is also presented that deals such
model,18-1-I05-2021,bq representative <term> Chinese-to-English SMT model </term> directly on <term> word sense disambiguation
tech,2-3-J05-1003,bq these <term> parses </term> . A second <term> model </term> then attempts to improve upon this
model,25-3-P05-1034,bq </term> , and train a <term> tree-based ordering model </term> . We describe an efficient <term> decoder
tech,3-3-P05-1048,bq state-of-the-art <term> Chinese word sense disambiguation model </term> to choose <term> translation candidates