Annotated term occurrences (category, term-sentence-paper ID, context snippet):

model, 5-3-C04-1103: ... <term>joint source-channel transliteration model</term>, also called <term>n-gram transliteration ...
model, 20-2-C04-1147: ... <term>terms</term>, an <term>independence model</term>, and a <term>parametric affinity ...
(no ID): ... response</term>. We have already proposed a model, <term>TDMT (Transfer-Driven Machine Translation ...
(no ID): ... by <term>extrapolation</term>. Thus, our model, <term>TDMT on APs</term>, meets the vital ...
model, 3-4-C92-4207: ... <term>world</term>. To reconstruct the <term>model</term>, the authors extract the <term>qualitative ...
model, 14-2-C92-1055: ... error</term> introduced by the <term>language model</term>, traditional <term>statistical approaches ...
tech, 4-5-A94-1007: ... <term>English coordinate structure analysis model</term>, which provides <term>top-down scope ...
tech, 20-4-C04-1112: ... <term>accuracy</term> over the <term>wordform model</term>. Also, the <term>WSD system based ...
model, 11-3-N03-2036: ... </term> and a <term>word-based trigram language model</term>. During <term>training</term>, the ...
model, 25-2-C04-1147: ... model</term>, and a <term>parametric affinity model</term>. In comparison with previous <term> ...
model, 14-1-C80-1073: ... Network</term> as a procedural <term>dialog model</term>. The development of such a <term> ...
model, 34-7-J05-1003: ... were not included in the original <term>model</term>. The new <term>model</term> achieved ...
model, 25-4-H94-1014: ... </term> as compared to using a <term>trigram model</term>. This paper describes a method of ...
other, 20-1-P06-2110: ... word vectors</term> in the <term>vector space model</term>. Through two experiments, three ...
model, 27-3-N03-2006: ... the possibility of using the <term>language model</term>. We describe a simple <term>unsupervised ...
model, 25-3-P05-1034: ... </term>, and train a <term>tree-based ordering model</term>. We describe an efficient <term>decoder ...
tech, 9-6-P05-1067: ... time decoding algorithm</term> for the <term>model</term>. We evaluate the <term>outputs</term> ...
model, 44-4-N04-4028: ... features</term> of the input in a <term>Markov model</term>. We implement several techniques ...
model, 12-3-C04-1103: ... also called <term>n-gram transliteration model (ngram TM)</term>, is further proposed ...
model, 7-7-J05-1003: ... log-likelihood</term> under a <term>baseline model</term> (that of <term>Collins [1999]</term> ...