other,18-1-P06-4014,bq |
The
<term>
LOGON MT demonstrator
</term>
assembles independently valuable
<term>
general-purpose NLP components
</term>
into a
<term>
machine translation pipeline
</term>
that capitalizes on
<term>
output
quality
</term>
.
|
#11807
The LOGON MT demonstrator assembles independently valuable general-purpose NLP components into a machine translation pipeline that capitalizes on output quality. |
tech,26-2-N03-1026,bq |
Our
<term>
system
</term>
incorporates a
<term>
linguistic parser/generator
</term>
for
<term>
LFG
</term>
, a
<term>
transfer component
</term>
for
<term>
parse reduction
</term>
operating on
<term>
packed parse forests
</term>
, and a
<term>
maximum-entropy model
</term>
for
<term>
stochastic
output
selection
</term>
.
|
#2835
Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection. |
|
Second , the
<term>
sentence-plan-ranker ( SPR )
</term>
ranks the list of
output
<term>
sentence plans
</term>
, and then selects the top-ranked
<term>
plan
</term>
.
|
#1411
Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
other,18-6-H01-1041,bq |
Having been trained on
<term>
Korean newspaper articles
</term>
on missiles and chemical biological warfare , the
<term>
system
</term>
produces the
<term>
translation
output
</term>
sufficient for content understanding of the
<term>
original document
</term>
.
|
#534
Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document. |
|
We consider the case of
<term>
multi-document summarization
</term>
, where the input
<term>
documents
</term>
are in
<term>
Arabic
</term>
, and the
output
<term>
summary
</term>
is in
<term>
English
</term>
.
|
#7170
We consider the case of multi-document summarization, where the input documents are in Arabic, and the output summary is in English. |
|
The subjects were given three minutes per extract to determine whether they believed the sample
output
to be an
<term>
expert human translation
</term>
or a
<term>
machine translation
</term>
.
|
#727
The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation. |
|
The use of
<term>
BLEU
</term>
at the
<term>
character
</term>
level eliminates the
<term>
word segmentation problem
</term>
: it makes it possible to directly compare commercial
<term>
systems
</term>
outputting
<term>
unsegmented texts
</term>
with , for instance ,
<term>
statistical MT systems
</term>
which usually segment their
<term>
outputs
</term>
.
|
#7769
The use of BLEU at the character level eliminates the word segmentation problem: it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs. |
other,12-4-P05-2016,bq |
We also refer to an
<term>
evaluation method
</term>
and plan to compare our
<term>
system 's
output
</term>
with a
<term>
benchmark system
</term>
.
|
#9825
We also refer to an evaluation method and plan to compare our system's output with a benchmark system. |