... <term> N-grams </term>. In order to boost the <term> translation quality </term> of <term> EBMT </term> based on a small-sized <term> bilingual corpus </term>, we use an out-of-domain <term> bilingual corpus </term> and, in addition, the <term> language model </term> of an in-domain <term> monolingual corpus </term>.
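The excerpt does not spell out how the <term> language model </term> of the in-domain <term> monolingual corpus </term> is applied, so the following is only a minimal sketch under one plausible reading: a smoothed bigram model estimated from in-domain sentences and used to rerank candidate translations from the <term> EBMT system </term>. The function names and toy data are hypothetical, not taken from the paper.

# Minimal sketch (an assumption, not the paper's implementation): an add-alpha
# smoothed bigram language model trained on an in-domain monolingual corpus,
# used to rerank candidate translations produced by an EBMT system.
from collections import Counter
import math

def train_bigram_lm(sentences, alpha=0.1):
    """Return a log-probability scorer for tokenized sentences."""
    unigrams = Counter()   # counts of context tokens (including <s>)
    bigrams = Counter()    # counts of adjacent token pairs
    vocab = set()
    for tokens in sentences:
        padded = ["<s>"] + tokens + ["</s>"]
        vocab.update(padded)
        unigrams.update(padded[:-1])
        bigrams.update(zip(padded[:-1], padded[1:]))
    V = len(vocab)

    def logprob(tokens):
        padded = ["<s>"] + tokens + ["</s>"]
        lp = 0.0
        for prev, cur in zip(padded[:-1], padded[1:]):
            # Add-alpha smoothing so unseen bigrams get nonzero probability.
            lp += math.log((bigrams[(prev, cur)] + alpha) /
                           (unigrams[prev] + alpha * V))
        return lp

    return logprob

# Hypothetical usage: pick the EBMT candidate the in-domain LM prefers.
in_domain = [["the", "patient", "has", "a", "fever"],
             ["the", "doctor", "examined", "the", "patient"]]
score = train_bigram_lm(in_domain)
candidates = [["the", "patient", "has", "fever"],
              ["fever", "has", "the", "patient"]]
print(max(candidates, key=score))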
We conducted experiments with an <term> EBMT system </term>. The two <term> evaluation measures </term> of the <term> BLEU score </term> and the <term> NIST score </term> demonstrated the effect of using an out-of-domain <term> bilingual corpus </term> and the possibility of using the <term> language model </term>. We describe a simple <term> unsupervised ...
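For reference, the <term> BLEU score </term> is the geometric mean of modified n-gram precisions multiplied by a brevity penalty, while the <term> NIST score </term> instead weights n-grams by their information content. The sketch below is a simplified, single-reference, unsmoothed sentence-level version for illustration only; the experiments use the standard corpus-level metrics, and the example sentences are invented.

# Simplified sentence-level BLEU sketch (single reference, no smoothing).
# Illustration only; not the corpus-level BLEU/NIST scoring used in the paper.
from collections import Counter
import math

def bleu(candidate, reference, max_n=4):
    """Geometric mean of modified n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(zip(*[candidate[i:] for i in range(n)]))
        ref_ngrams = Counter(zip(*[reference[i:] for i in range(n)]))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) \
        else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

# Hypothetical usage with invented sentences.
print(round(bleu("the patient has a high fever".split(),
                 "the patient has a slight fever".split()), 3))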