This paper presents an unsupervised learning approach to building a non-English (Arabic) stemmer. The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. No parallel text is needed after the training phase. Monolingual, unannotated text can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre. Examples and results will be given for Arabic, but the approach is applicable to any language that needs affix removal. Our resource-frugal approach results in 87.5% agreement with a state-of-the-art, proprietary Arabic stemmer built using rules, affix lists, and human-annotated text, in addition to an unsupervised component. Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38% in average precision over unstemmed text, and 96% of the performance of the proprietary stemmer above.
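To make "affix removal" concrete, here is a minimal toy sketch of longest-match prefix and suffix stripping. The romanized affix lists below are a tiny hypothetical sample for illustration only; they are not the paper's learned statistical model or the proprietary stemmer's rule set.

```python
# Toy affix-removal stemmer: strip at most one longest-matching prefix
# and one longest-matching suffix, keeping a minimal stem length.
# Affix lists are hypothetical romanized examples, not from the paper.

PREFIXES = ["wal", "al", "wa", "bi", "li"]   # e.g. conjunction + definite article
SUFFIXES = ["at", "on", "in", "ha"]          # e.g. feminine/plural/possessive markers

def strip_affixes(word: str, min_stem: int = 2) -> str:
    """Remove one longest prefix and one longest suffix, if a stem remains."""
    for p in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            word = word[len(p):]
            break
    for s in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            word = word[:-len(s)]
            break
    return word

print(strip_affixes("walkitab"))  # "kitab" (romanized "and-the-book" -> "book")
```

The paper's approach replaces hand-built lists like these with affix statistics learned from a small parallel corpus, which is what makes it resource-frugal.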