other,27-5-P03-1051,bq corpus </term> , and re-estimate the <term> model parameters </term> with the expanded <term>
model,5-3-C04-1103,bq <term> joint source-channel transliteration model </term> , also called <term> n-gram transliteration
model,12-3-C04-1103,bq also called <term> n-gram transliteration model ( ngram TM ) </term> , is further proposed
ngram TM ) </term> , is further proposed to model the <term> transliteration process </term>
tech,2-4-C04-1112,bq algorithm . Testing the <term> lemma-based model </term> on the <term> Dutch SENSEVAL-2 test
tech,20-4-C04-1112,bq <term> accuracy </term> over the <term> wordform model </term> . Also , the <term> WSD system based
model,20-2-C04-1147,bq <term> terms </term> , an <term> independence model </term> , and a <term> parametric affinity
model,25-2-C04-1147,bq model </term> , and a <term> parametric affinity model </term> . In comparison with previous <term>
tech,19-4-N04-4028,bq field ( CRF ) </term> , a <term> probabilistic model </term> which has performed well on <term>
model,44-4-N04-4028,bq features </term> of the input in a <term> Markov model </term> . We implement several techniques
model,1-3-H05-1012,bq performance </term> . The <term> probabilistic model </term> used in the <term> alignment </term>
tech,1-3-H05-1095,bq proposed . A <term> statistical translation model </term> is also presented that deals such
model,18-1-I05-2021,bq representative <term> Chinese-to-English SMT model </term> directly on <term> word sense disambiguation
tech,2-3-J05-1003,bq these <term> parses </term> . A second <term> model </term> then attempts to improve upon this
tech,40-4-J05-1003,bq <term> derivation </term> or a <term> generative model </term> which takes these <term> features </term>
model,7-7-J05-1003,bq log-likelihood </term> under a <term> baseline model </term> ( that of <term> Collins [ 1999 ] </term>
model,34-7-J05-1003,bq were not included in the original <term> model </term> . The new <term> model </term> achieved
tech,2-8-J05-1003,bq original <term> model </term> . The new <term> model </term> achieved 89.75 % <term> F-measure </term>
measure(ment),18-8-J05-1003,bq F-measure </term> error over the <term> baseline model ’s score </term> of 88.2 % . The article
model,25-3-P05-1034,bq </term> , and train a <term> tree-based ordering model </term> . We describe an efficient <term> decoder