lr,12-2-P01-1008,bq We present an <term> unsupervised learning algorithm </term> for <term> identification of paraphrases </term> from a <term> corpus of multiple English translations </term> of the same <term> source text </term> .
lr,19-4-N03-1012,bq An evaluation of our <term> system </term> against the <term> annotated data </term> shows that it successfully classifies 73.2 % in a <term> German corpus </term> of 2.284 <term> SRHs </term> as either coherent or incoherent ( given a <term> baseline </term> of 54.55 % ) .
lr,13-1-N03-2006,bq In order to boost the <term> translation quality </term> of <term> EBMT </term> based on a small-sized <term> bilingual corpus </term> , we use an out-of-domain <term> bilingual corpus </term> and , in addition , the <term> language model </term> of an in-domain <term> monolingual corpus </term> .
lr,20-1-N03-2006,bq In order to boost the <term> translation quality </term> of <term> EBMT </term> based on a small-sized <term> bilingual corpus </term> , we use an out-of-domain <term> bilingual corpus </term> and , in addition , the <term> language model </term> of an in-domain <term> monolingual corpus </term> .
lr,33-1-N03-2006,bq In order to boost the <term> translation quality </term> of <term> EBMT </term> based on a small-sized <term> bilingual corpus </term> , we use an out-of-domain <term> bilingual corpus </term> and , in addition , the <term> language model </term> of an in-domain <term> monolingual corpus </term> .
lr,19-3-N03-2006,bq The two <term> evaluation measures </term> of the <term> BLEU score </term> and the <term> NIST score </term> demonstrated the effect of using an out-of-domain <term> bilingual corpus </term> and the possibility of using the <term> language model </term> .
lr,10-5-N03-2025,bq Then , a <term> Hidden Markov Model </term> is trained on a <term> corpus </term> automatically tagged by the first <term> learner </term> .
lr,19-2-N03-4010,bq The demonstration will focus on how <term> JAVELIN </term> processes <term> questions </term> and retrieves the most likely <term> answer candidates </term> from the given <term> text corpus </term> .
other,15-1-P03-1009,bq Previous research has demonstrated the utility of <term> clustering </term> in inducing <term> semantic verb classes </term> from undisambiguated <term> corpus data </term> .
lr,22-2-P03-1050,bq The <term> stemming model </term> is based on <term> statistical machine translation </term> and it uses an <term> English stemmer </term> and a small ( 10K sentences ) <term> parallel corpus </term> as its sole <term> training resources </term> .
lr,7-2-P03-1051,bq Our method is seeded by a small <term> manually segmented Arabic corpus </term> and uses it to bootstrap an <term> unsupervised algorithm </term> to build the <term> Arabic word segmenter </term> from a large <term> unsegmented Arabic corpus </term> .
lr,28-2-P03-1051,bq Our method is seeded by a small <term> manually segmented Arabic corpus </term> and uses it to bootstrap an <term> unsupervised algorithm </term> to build the <term> Arabic word segmenter </term> from a large <term> unsegmented Arabic corpus </term> .
lr,9-4-P03-1051,bq The <term> language model </term> is initially estimated from a small <term> manually segmented corpus </term> of about 110,000 <term> words </term> .
lr,21-5-P03-1051,bq To improve the <term> segmentation </term> <term> accuracy </term> , we use an <term> unsupervised algorithm </term> for automatically acquiring new <term> stems </term> from a 155 million <term> word </term> <term> unsegmented corpus </term> , and re-estimate the <term> model parameters </term> with the expanded <term> vocabulary </term> and <term> training corpus </term> .
lr,34-5-P03-1051,bq To improve the <term> segmentation </term> <term> accuracy </term> , we use an <term> unsupervised algorithm </term> for automatically acquiring new <term> stems </term> from a 155 million <term> word </term> <term> unsegmented corpus </term> , and re-estimate the <term> model parameters </term> with the expanded <term> vocabulary </term> and <term> training corpus </term> .
lr,15-6-P03-1051,bq The resulting <term> Arabic word segmentation system </term> achieves around 97 % <term> exact match accuracy </term> on a <term> test corpus </term> containing 28,449 <term> word tokens </term> .
lr,25-7-P03-1051,bq We believe this is a state-of-the-art performance and the <term> algorithm </term> can be used for many <term> highly inflected languages </term> provided that one can create a small <term> manually segmented corpus </term> of the <term> language </term> of interest .
lr,9-1-P03-1068,bq We describe the ongoing construction of a large , <term> semantically annotated corpus </term> resource as reliable basis for the large-scale <term> acquisition of word-semantic information </term> , e.g. the construction of <term> domain-independent lexica </term> .
lr,6-3-C04-1106,bq We report experiments conducted on a <term> multilingual corpus </term> to estimate the number of <term> analogies </term> among the <term> sentences </term> that it contains .
lr,23-2-C04-1116,bq This paper proposes a new methodology to improve the <term> accuracy </term> of a <term> term aggregation system </term> using each author 's text as a coherent <term> corpus </term> .