P03-1051 (abstract)

We approximate Arabic's rich morphology by a model in which a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). Our method is seeded by a small manually segmented Arabic corpus and uses it to bootstrap an unsupervised algorithm that builds the Arabic word segmenter from a large unsegmented Arabic corpus. The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. The language model is initially estimated from a small manually segmented corpus of about 110,000 words. To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus.
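The core idea above can be sketched in a few lines: enumerate every prefix*-stem-suffix* analysis of a word whose stem is in the vocabulary, then pick the analysis whose morpheme sequence gets the highest probability under a trigram language model. The morpheme inventories and probabilities below are toy placeholders in Buckwalter-style transliteration, not the paper's actual data, and the real model is estimated from a ~110,000-word manually segmented corpus.

```python
from math import log

# Toy inventories (hypothetical). "AlktAb" is added as a stem on
# purpose so the word below has two competing analyses.
PREFIXES = ["w", "Al"]       # e.g. conjunction "and", determiner "the"
SUFFIXES = ["hm", "p"]       # e.g. possessive "their", feminine marker
STEMS = {"ktAb", "AlktAb"}

def strip_prefixes(word):
    """Yield (prefixes, remainder) for zero or more leading prefixes."""
    yield [], word
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p):
            for more, rest in strip_prefixes(word[len(p):]):
                yield [p] + more, rest

def strip_suffixes(word):
    """Yield (remainder, suffixes) for zero or more trailing suffixes."""
    yield word, []
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s):
            for rest, more in strip_suffixes(word[:-len(s)]):
                yield rest, more + [s]

def candidates(word):
    """Enumerate prefix*-stem-suffix* analyses with a known stem."""
    for pres, rest in strip_prefixes(word):
        for stem, sufs in strip_suffixes(rest):
            if stem in STEMS:
                yield pres + [stem] + sufs

# Toy trigram probabilities; unseen trigrams get a small floor value.
LM = {("<s>", "<s>", "w"): 0.2, ("<s>", "w", "Al"): 0.3,
      ("w", "Al", "ktAb"): 0.1, ("Al", "ktAb", "hm"): 0.2,
      ("ktAb", "hm", "</s>"): 0.5}

def score(morphemes):
    """Log-probability of a morpheme sequence under the trigram LM."""
    seq = ["<s>", "<s>"] + morphemes + ["</s>"]
    return sum(log(LM.get((seq[i - 2], seq[i - 1], seq[i]), 1e-9))
               for i in range(2, len(seq)))

best = max(candidates("wAlktAbhm"), key=score)
print(best)  # the LM prefers ["w", "Al", "ktAb", "hm"]
```

In the bootstrapping step the abstract describes, a segmenter like this would be run over the large unsegmented corpus, high-confidence stems would be added to `STEMS`, and the trigram counts re-estimated; that loop is omitted here for brevity.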