tech,11-5-H01-1058,bq We provide experimental results that clearly show the need for a <term> dynamic language model combination </term> to improve the <term> performance </term> further .
model,12-3-N03-1001,bq In our method , <term> unsupervised training </term> is first used to train a <term> phone n-gram model </term> for a particular <term> domain </term> ; the <term> output </term> of <term> recognition </term> with this <term> model </term> is then passed to a <term> phone-string classifier </term> .
model,26-3-N03-1001,bq In our method , <term> unsupervised training </term> is first used to train a <term> phone n-gram model </term> for a particular <term> domain </term> ; the <term> output </term> of <term> recognition </term> with this <term> model </term> is then passed to a <term> phone-string classifier </term> .
model,4-1-N03-1017,bq We propose a new <term> phrase-based translation model </term> and <term> decoding algorithm </term> that enables us to evaluate and compare several , previously proposed <term> phrase-based translation models </term> .
model,7-1-N03-1018,bq In this paper , we introduce a <term> generative probabilistic optical character recognition ( OCR ) model </term> that describes an end-to-end process in the <term> noisy channel framework </term> , progressing from generation of <term> true text </term> through its transformation into the <term> noisy output </term> of an <term> OCR system </term> .
model,1-2-N03-1018,bq The <term> model </term> is designed for use in <term> error correction </term> , with a focus on <term> post-processing </term> the <term> output </term> of black-box <term> OCR systems </term> in order to make it more useful for <term> NLP tasks </term> .
model,6-3-N03-1018,bq We present an implementation of the <term> model </term> based on <term> finite-state models </term> , demonstrate the <term> model </term> 's ability to significantly reduce <term> character and word error rate </term> , and provide evaluation results involving <term> automatic extraction </term> of <term> translation lexicons </term> from <term> printed text </term> .
model,14-3-N03-1018,bq We present an implementation of the <term> model </term> based on <term> finite-state models </term> , demonstrate the <term> model </term> 's ability to significantly reduce <term> character and word error rate </term> , and provide evaluation results involving <term> automatic extraction </term> of <term> translation lexicons </term> from <term> printed text </term> .
model,23-2-N03-1026,bq Our <term> system </term> incorporates a <term> linguistic parser/generator </term> for <term> LFG </term> , a <term> transfer component </term> for <term> parse reduction </term> operating on <term> packed parse forests </term> , and a <term> maximum-entropy model </term> for <term> stochastic output selection </term> .
model,28-1-N03-2006,bq In order to boost the <term> translation quality </term> of <term> EBMT </term> based on a small-sized <term> bilingual corpus </term> , we use an out-of-domain <term> bilingual corpus </term> and , in addition , the <term> language model </term> of an in-domain <term> monolingual corpus </term> .
model,27-3-N03-2006,bq The two <term> evaluation measures </term> of the <term> BLEU score </term> and the <term> NIST score </term> demonstrated the effect of using an out-of-domain <term> bilingual corpus </term> and the possibility of using the <term> language model </term> .
model,7-1-N03-2036,bq In this paper , we describe a <term> phrase-based unigram model </term> for <term> statistical machine translation </term> that uses a much simpler set of <term> model parameters </term> than similar <term> phrase-based models </term> .
other,21-1-N03-2036,bq In this paper , we describe a <term> phrase-based unigram model </term> for <term> statistical machine translation </term> that uses a much simpler set of <term> model parameters </term> than similar <term> phrase-based models </term> .
model,6-3-N03-2036,bq During <term> decoding </term> , we use a <term> block unigram model </term> and a <term> word-based trigram language model </term> .
model,11-3-N03-2036,bq During <term> decoding </term> , we use a <term> block unigram model </term> and a <term> word-based trigram language model </term> .
model,16-2-P03-1033,bq Unlike previous studies that focus on <term> user </term> 's <term> knowledge </term> or typical kinds of <term> users </term> , the <term> user model </term> we propose is more comprehensive .
model,1-2-P03-1050,bq The <term> stemming model </term> is based on <term> statistical machine translation </term> and it uses an <term> English stemmer </term> and a small ( 10K sentences ) <term> parallel corpus </term> as its sole <term> training resources </term> .
model,8-1-P03-1051,bq We approximate <term> Arabic 's rich morphology </term> by a <term> model </term> that a <term> word </term> consists of a sequence of <term> morphemes </term> in the <term> pattern </term><term> prefix * - stem-suffix * </term> ( * denotes zero or more occurrences of a <term> morpheme </term> ) .
model,4-3-P03-1051,bq The <term> algorithm </term> uses a <term> trigram language model </term> to determine the most probable <term> morpheme sequence </term> for a given <term> input </term> .
model,1-4-P03-1051,bq The <term> language model </term> is initially estimated from a small <term> manually segmented corpus </term> of about 110,000 <term> words </term> .