We approximate Arabic's rich morphology by a model in which a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme).
Our method is seeded by a small manually segmented Arabic corpus, which it uses to bootstrap an unsupervised algorithm that builds the Arabic word segmenter from a large unsegmented Arabic corpus.
The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input.
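A minimal sketch of this scoring step, assuming add-one smoothing and a tiny invented training set of morpheme sequences (the actual model is estimated from the segmented corpus): each candidate morpheme sequence is scored by the trigram model, and the most probable one is selected.

```python
import math
from collections import defaultdict

def train_trigram(corpus):
    """corpus: list of morpheme sequences. Returns a scoring function giving
    add-one-smoothed trigram log-probabilities (an illustrative smoother,
    not necessarily the one used in the paper)."""
    tri, bi, vocab = defaultdict(int), defaultdict(int), set()
    for seq in corpus:
        toks = ["<s>", "<s>"] + list(seq) + ["</s>"]
        vocab.update(toks)
        for i in range(2, len(toks)):
            tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
            bi[(toks[i - 2], toks[i - 1])] += 1
    V = len(vocab)

    def logprob(seq):
        toks = ["<s>", "<s>"] + list(seq) + ["</s>"]
        lp = 0.0
        for i in range(2, len(toks)):
            num = tri[(toks[i - 2], toks[i - 1], toks[i])] + 1
            den = bi[(toks[i - 2], toks[i - 1])] + V
            lp += math.log(num / den)
        return lp

    return logprob

# Toy "manually segmented" data, invented for illustration.
corpus = [["w", "Al", "ktAb"]] * 5 + [["Al", "ktAb"], ["w", "ktAb"]]
logprob = train_trigram(corpus)

# Candidate segmentations of one input word; pick the most probable.
candidates = [["wAlktAb"], ["w", "AlktAb"], ["w", "Al", "ktAb"]]
best = max(candidates, key=logprob)
```

On this toy data the fully segmented candidate `w + Al + ktAb` wins, since its trigrams were all observed in training.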
The language model is initially estimated from a small manually segmented corpus of about 110,000 words.
To improve segmentation accuracy, we use an unsupervised algorithm that automatically acquires new stems from a 155-million-word unsegmented corpus, and we re-estimate the model parameters with the expanded vocabulary and training corpus.
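The stem-acquisition loop can be sketched as follows. The affix-stripping heuristic, frequency threshold, and word list below are all invented for illustration; the actual algorithm segments the large corpus with the current model and re-estimates the full parameter set, rather than using this simple greedy rule.

```python
from collections import Counter

# Toy affix inventories, invented for illustration (transliterated forms).
PREFIXES = {"Al", "w", "f", "b", "l"}
SUFFIXES = {"p", "At", "wn", "h", "hA"}

def strip_affixes(word):
    """Greedily strip known prefixes, then suffixes, longest-first,
    to hypothesize a stem. An illustrative heuristic only."""
    stripped = True
    while stripped:
        stripped = False
        for p in sorted(PREFIXES, key=len, reverse=True):
            if word.startswith(p) and len(word) > len(p):
                word, stripped = word[len(p):], True
                break
    stripped = True
    while stripped:
        stripped = False
        for s in sorted(SUFFIXES, key=len, reverse=True):
            if word.endswith(s) and len(word) > len(s):
                word, stripped = word[:-len(s)], True
                break
    return word

def acquire_stems(unsegmented_words, known_stems, min_count=2):
    """Return hypothesized stems seen at least `min_count` times
    in the unsegmented text and not already in the vocabulary."""
    counts = Counter(strip_affixes(w) for w in unsegmented_words)
    return {s for s, c in counts.items() if c >= min_count and s not in known_stems}

# Toy unsegmented word list; in the paper this is a 155M-word corpus.
words = ["AlktAb", "ktAbp", "wktAb", "Aldrs", "drshA", "drs"]
new_stems = acquire_stems(words, known_stems={"ktb"})
```

The newly acquired stems would then be added to the vocabulary before re-estimating the language model on the expanded training data.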