lr,1-2-H92-1074,bq of the art in <term> CSR </term> . This <term> corpus </term> essentially supersedes the now old
lr,10-5-N03-2025,bq Markov Model </term> is trained on a <term> corpus </term> automatically tagged by the first
lr,11-4-P05-1074,bq extracted from a <term> bilingual parallel corpus </term> to be ranked using <term> translation
lr,12-2-P01-1008,bq identification of paraphrases </term> from a <term> corpus of multiple English translations </term>
lr,12-4-C92-1055,bq possible variations between the <term> training corpus </term> and the real tasks are also taken
lr,13-1-N03-2006,bq </term> based on a small-sized <term> bilingual corpus </term> , we use an out-of-domain <term> bilingual
lr,15-2-C90-3063,bq co-occurrence patterns </term> in a large <term> corpus </term> . To a large extent , these <term>
lr,15-6-P03-1051,bq exact match accuracy </term> on a <term> test corpus </term> containing 28,449 <term> word tokens
lr,16-6-H90-1060,bq adaptation ( SA ) </term> using the new <term> SI corpus </term> and a small amount of <term> speech
lr,17-4-C04-1116,bq context features </term> in each author 's <term> corpus </term> tend not to be <term> synonymous expressions
lr,18-4-P06-2001,bq using a bigger and a more homogeneous <term> corpus </term> to train , that is , a bigger <term>
lr,19-2-N03-4010,bq candidates </term> from the given <term> text corpus </term> . The operation of the <term> system
lr,19-3-N03-2006,bq of using an out-of-domain <term> bilingual corpus </term> and the possibility of using the <term>
lr,19-4-N03-1012,bq successfully classifies 73.2 % in a <term> German corpus </term> of 2.284 <term> SRHs </term> as either
lr,19-5-C90-3063,bq that were randomly selected from the <term> corpus </term> . The results of the experiment show
lr,19-5-J05-4003,bq starting with a very small <term> parallel corpus </term> ( 100,000 <term> words </term> ) and
lr,2-3-I05-4010,bq in detail . The resultant <term> bilingual corpus </term> , 10.4 M <term> English words </term>
lr,20-1-N03-2006,bq , we use an out-of-domain <term> bilingual corpus </term> and , in addition , the <term> language
lr,21-5-P03-1051,bq million <term> word </term><term> unsegmented corpus </term> , and re-estimate the <term> model
lr,22-2-P03-1050,bq a small ( 10K sentences ) <term> parallel corpus </term> as its sole <term> training resources