This paper presents an <term>unsupervised learning approach</term> to building a <term>non-English (Arabic) stemmer</term>. The <term>stemming model</term> is based on <term>statistical machine translation</term> and it uses an <term>English stemmer</term> and a small (10K sentences) parallel corpus as its sole training resources. No <term>parallel text</term> is needed after the <term>training phase</term>. <term>Monolingual, unannotated text</term> can be used to further improve the <term>stemmer</term> by allowing it to adapt to a desired <term>domain</term> or <term>genre</term>. Examples and results will be given for Arabic, but the approach is applicable to any <term>language</term> that needs <term>affix removal</term>. Our <term>resource-frugal approach</term> results in 87.5% <term>agreement</term> with a state-of-the-art, proprietary <term>Arabic stemmer</term> built using <term>rules</term>, <term>affix lists</term>, and <term>human annotated text</term>, in addition to an <term>unsupervised component</term>. <term>Task-based evaluation</term> using <term>Arabic information retrieval</term> indicates an improvement of 22-38% in <term>average precision</term> over <term>unstemmed text</term>, and 96% of the performance of the proprietary <term>stemmer</term> above.
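The core idea, learning affix removal from a small parallel corpus plus an English stemmer, can be illustrated with a toy sketch. This is not the paper's actual statistical model; the function names, the longest-common-substring stem heuristic, and the transliterated data are all invented for illustration. The assumption shown is simply that Arabic words aligned to the same stemmed English word share a stem, so their leftover characters can be counted as candidate prefixes and suffixes.

```python
# Toy sketch (illustrative only, not the paper's model): Arabic words aligned
# to the same stemmed English word are assumed to share a stem; the leftover
# characters become candidate prefixes/suffixes, and stemming then strips the
# most plausible learned affixes. The transliterated data below is made up.
from collections import Counter
from itertools import combinations

def common_stem(w1, w2):
    """Longest common contiguous substring: a crude stand-in for the
    stem hypothesis that alignment statistics would really provide."""
    best = ""
    for i in range(len(w1)):
        for j in range(i + 1, len(w1) + 1):
            if w1[i:j] in w2 and j - i > len(best):
                best = w1[i:j]
    return best

def learn_affixes(aligned):
    """aligned: {english_stem: [arabic_forms]} from a word-aligned corpus.
    Counts the residue around each shared stem as prefix/suffix evidence."""
    prefixes, suffixes = Counter(), Counter()
    for forms in aligned.values():
        for a, b in combinations(forms, 2):
            stem_part = common_stem(a, b)
            if len(stem_part) < 2:  # too short to trust as a shared stem
                continue
            for w in (a, b):
                i = w.find(stem_part)
                if i > 0:
                    prefixes[w[:i]] += 1
                if i + len(stem_part) < len(w):
                    suffixes[w[i + len(stem_part):]] += 1
    return prefixes, suffixes

def stem(word, prefixes, suffixes):
    """Strip at most one learned prefix and one learned suffix,
    longest match first, keeping at least two characters of stem."""
    for p in sorted(prefixes, key=len, reverse=True):
        if word.startswith(p) and len(word) > len(p) + 1:
            word = word[len(p):]
            break
    for s in sorted(suffixes, key=len, reverse=True):
        if word.endswith(s) and len(word) > len(s) + 1:
            word = word[:-len(s)]
            break
    return word

# Tiny invented example: an "al-" prefix and "-h" suffix emerge from alignments.
aligned = {"book": ["ktab", "alktab", "ktabh"], "write": ["yktb", "ktbt"]}
prefixes, suffixes = learn_affixes(aligned)
print(stem("alktabh", prefixes, suffixes))  # prints "ktab"
```

After this learning phase no parallel text is needed: the affix tables alone drive stemming, which is what lets additional monolingual text refine the counts for a new domain or genre.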