Annotated term contexts from paper P03-1051, grouped by term class (lr, measure(ment), model, other); each entry gives the annotation ID and its context window, with "..." marking where the window is cut off.

lr, 15-6-P03-1051: ... <term> exact match accuracy </term> on a <term> test corpus </term> containing 28,449 word tokens ...
lr, 21-5-P03-1051: ... from a 155 million <term> word </term> <term> unsegmented corpus </term> , and re-estimate the model ...
lr, 25-7-P03-1051: ... provided that one can create a small <term> manually segmented corpus </term> of the <term> language </term> of interest ...
lr, 28-2-P03-1051: ... word segmenter from a large <term> unsegmented Arabic corpus </term> . The <term> algorithm </term> uses a ...
lr, 34-5-P03-1051: ... expanded <term> vocabulary </term> and <term> training corpus </term> . The resulting Arabic word ...
lr, 7-2-P03-1051: ... . Our method is seeded by a small <term> manually segmented Arabic corpus </term> and uses it to bootstrap an ...
lr, 9-4-P03-1051: ... is initially estimated from a small <term> manually segmented corpus </term> of about 110,000 <term> words </term> ...

measure(ment), 10-6-P03-1051: ... system achieves around 97 % <term> exact match accuracy </term> on a <term> test corpus </term> containing ...
measure(ment), 4-5-P03-1051: ... improve the <term> segmentation </term> <term> accuracy </term> , we use an unsupervised algorithm ...

model, 1-4-P03-1051: ... for a given <term> input </term> . The <term> language model </term> is initially estimated from a small ...
model, 4-3-P03-1051: ... . The <term> algorithm </term> uses a <term> trigram language model </term> to determine the most probable ...
model, 8-1-P03-1051: ... Arabic 's rich morphology by a <term> model </term> that a <term> word </term> consists of ...

other, 11-1-P03-1051: ... by a <term> model </term> that a <term> word </term> consists of a sequence of morphemes ...
other, 12-3-P03-1051: ... to determine the most probable <term> morpheme sequence </term> for a given <term> input </term> . The ...
other, 15-4-P03-1051: ... segmented corpus of about 110,000 <term> words </term> . To improve the segmentation ...
other, 15-5-P03-1051: ... for automatically acquiring new <term> stems </term> from a 155 million <term> word </term> ...
other, 15-7-P03-1051: ... algorithm can be used for many <term> highly inflected languages </term> provided that one can create a small ...
other, 17-1-P03-1051: ... word consists of a sequence of <term> morphemes </term> in the <term> pattern </term> prefix ...
other, 17-3-P03-1051: ... morpheme sequence for a given <term> input </term> . The <term> language model </term> is ...
other, 19-6-P03-1051: ... test corpus containing 28,449 <term> word tokens </term> . We believe this is a state-of-the-art ...
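Taken together, the model and other contexts describe the core step in P03-1051: a trigram language model over morphemes picks the most probable morpheme sequence for a given input word. The sketch below is a minimal illustration of that idea only, not the paper's implementation; the morpheme inventory, the romanized example token, and all log-probability values are hypothetical, and a real segmenter would estimate and smooth its trigram counts from the manually segmented corpus.

```python
# Minimal sketch: score candidate morpheme splits with a trigram LM (toy values).

MORPHEMES = {"wa", "ka", "tab", "katab", "naa"}   # hypothetical toy inventory
TRIGRAM_LOGPROB = {                                # hypothetical log-probabilities
    ("<s>", "<s>", "wa"): -0.9,
    ("<s>", "wa", "katab"): -1.1,
    ("wa", "katab", "naa"): -0.7,
    ("katab", "naa", "</s>"): -0.5,
}
FLOOR = -10.0  # crude back-off score for unseen trigrams


def trigram_score(morphs):
    """Log-probability of a morpheme sequence under the toy trigram model."""
    seq = ["<s>", "<s>"] + list(morphs) + ["</s>"]
    return sum(
        TRIGRAM_LOGPROB.get(tuple(seq[i - 2:i + 1]), FLOOR)
        for i in range(2, len(seq))
    )


def segmentations(word):
    """Enumerate every split of `word` into morphemes from the inventory."""
    if not word:
        yield []
        return
    for end in range(1, len(word) + 1):
        head = word[:end]
        if head in MORPHEMES:
            for rest in segmentations(word[end:]):
                yield [head] + rest


def segment(word):
    """Return the most probable morpheme sequence; keep the word whole if none."""
    candidates = list(segmentations(word)) or [[word]]
    return max(candidates, key=trigram_score)


if __name__ == "__main__":
    # Romanized stand-in token; the trigram scores prefer wa+katab+naa
    # over the competing split wa+ka+tab+naa.
    print(segment("wakatabnaa"))
```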