model,8-1-P03-1051,bq |
Arabic 's rich morphology
</term>
by a
<term>
|
model
|
</term>
that a
<term>
word
</term>
consists
|
#4608
We approximate Arabic's rich morphology by a model that a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). |
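The sentence annotated in these records models a word as prefix*-stem-suffix*. As a minimal illustration of that pattern (not the paper's implementation), the sketch below enumerates every such split of a word; the `PREFIXES`, `STEMS`, and `SUFFIXES` inventories are invented toy sets, not the paper's Arabic lexicon.

```python
# Toy illustration of the prefix*-stem-suffix* word model from the annotated
# sentence. The morpheme inventories are hypothetical examples, not the
# paper's Arabic lexicon.
PREFIXES = {"w", "Al"}   # hypothetical prefix morphemes
STEMS = {"ktAb", "drs"}  # hypothetical stems
SUFFIXES = {"At", "p"}   # hypothetical suffix morphemes

def segmentations(word):
    """Enumerate every split of `word` into zero or more prefixes,
    exactly one stem, and zero or more suffixes."""
    results = []

    def peel_suffixes(core, prefixes, suffixes):
        # `suffixes` is collected right-to-left, so reverse it on output.
        if core in STEMS:
            results.append(prefixes + [core] + suffixes[::-1])
        for s in SUFFIXES:
            if core.endswith(s) and len(core) > len(s):
                peel_suffixes(core[:-len(s)], prefixes, suffixes + [s])

    def peel_prefixes(rest, prefixes):
        # Branch: look for the stem here, or peel one more prefix.
        peel_suffixes(rest, prefixes, [])
        for p in PREFIXES:
            if rest.startswith(p) and len(rest) > len(p):
                peel_prefixes(rest[len(p):], prefixes + [p])

    peel_prefixes(word, [])
    return results
```

For example, `segmentations("wAlktAbAt")` returns the single analysis `[["w", "Al", "ktAb", "At"]]` under these toy inventories.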
other,11-1-P03-1051,bq |
</term>
by a
<term>
model
</term>
that a
<term>
|
word
|
</term>
consists of a sequence of
<term>
morphemes
|
#4611
We approximate Arabic's rich morphology by a model that a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). |
other,20-1-P03-1051,bq |
sequence of
<term>
morphemes
</term>
in the
<term>
|
pattern
|
</term>
<term>
prefix * - stem-suffix *
</term>
|
#4620
We approximate Arabic's rich morphology by a model that a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). |
other,35-1-P03-1051,bq |
denotes zero or more occurrences of a
<term>
|
morpheme
|
</term>
) . Our method is seeded by a small
|
#4635
We approximate Arabic's rich morphology by a model that a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). |
tech,1-3-P03-1051,bq |
unsegmented Arabic corpus
</term>
. The
<term>
|
algorithm
|
</term>
uses a
<term>
trigram language model
|
#4671
The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. |
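This record's sentence describes choosing the most probable morpheme sequence with a trigram language model. A hedged sketch of that decoding idea follows; the trigram table and the flat backoff score for unseen trigrams are toy assumptions, not the paper's estimated parameters.

```python
import math

def trigram_logprob(seq, model, unseen=math.log(1e-6)):
    """Sum trigram log-probabilities over a morpheme sequence padded with
    sentence-boundary markers; unseen trigrams get a flat backoff score."""
    padded = ["<s>", "<s>"] + list(seq) + ["</s>"]
    return sum(model.get(tuple(padded[i - 2:i + 1]), unseen)
               for i in range(2, len(padded)))

def most_probable(candidates, model):
    """Return the candidate morpheme sequence the trigram model scores
    highest, as in the segmentation step the record describes."""
    return max(candidates, key=lambda seq: trigram_logprob(seq, model))
```

With a toy model that assigns mass only to the trigrams of `["Al", "ktAb"]`, that analysis outscores the unsegmented candidate `["AlktAb"]`, whose trigrams all fall back to the unseen score.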
other,17-3-P03-1051,bq |
morpheme sequence
</term>
for a given
<term>
|
input
|
</term>
. The
<term>
language model
</term>
|
#4687
The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. |
tech,3-5-P03-1051,bq |
<term>
words
</term>
. To improve the
<term>
|
segmentation
|
</term>
<term>
accuracy
</term>
, we use an
|
#4709
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
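The sentence in this record mentions unsupervisedly acquiring new stems from a large unsegmented corpus before re-estimating the model. The function below is a crude hypothetical stand-in for such an acquisition heuristic (not the paper's actual procedure): it proposes candidate stems by stripping one known prefix and/or suffix from corpus words.

```python
def acquire_stems(words, prefixes, suffixes, known_stems, min_len=3):
    """Propose new candidate stems by stripping at most one known prefix
    and one known suffix from each word; a crude stand-in for the
    unsupervised acquisition step the record describes."""
    new = set()
    for w in set(words):
        for p in [""] + sorted(prefixes):
            if p and not w.startswith(p):
                continue
            core = w[len(p):]
            for s in [""] + sorted(suffixes):
                if s and not core.endswith(s):
                    continue
                stem = core[:-len(s)] if s else core
                # Require at least one stripped affix and a plausible length.
                if (p or s) and len(stem) >= min_len and stem not in known_stems:
                    new.add(stem)
    return new
```

In the full pipeline the records describe, candidates like these would expand the vocabulary before the model parameters are re-estimated on the enlarged training corpus.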
measure(ment),4-5-P03-1051,bq |
improve the
<term>
segmentation
</term>
<term>
|
accuracy
|
</term>
, we use an
<term>
unsupervised algorithm
|
#4710
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
other,20-5-P03-1051,bq |
<term>
stems
</term>
from a 155 million
<term>
|
word
|
</term>
<term>
unsegmented corpus
</term>
,
|
#4726
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
other,32-5-P03-1051,bq |
parameters
</term>
with the expanded
<term>
|
vocabulary
|
</term>
and
<term>
training corpus
</term>
.
|
#4738
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
tech,9-7-P03-1051,bq |
state-of-the-art performance and the
<term>
|
algorithm
|
</term>
can be used for many
<term>
highly
|
#4774
We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest. |
other,30-7-P03-1051,bq |
manually segmented corpus
</term>
of the
<term>
|
language
|
</term>
of interest . A central problem
|
#4795
We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest. |