#386We also report results of a preliminary, qualitative user evaluation of the system, which while broadly positive indicates further work needs to be done on the interface to make users aware of the increased potential of IE-enhanced text browsers.
tech,10-1-H01-1041,ak
CCLINC ( Common Coalition Language System at Lincoln Laboratory ) </term> . The <term> CCLINC
#406At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory).
experiment in a series of experiments , looks at the <term> intelligibility </term> of <term>
#621This, the first experiment in a series of experiments, looks at the intelligibility of MT output.
were asked to mark the <term> word </term> at which they made this decision . The results
#748Additionally, they were asked to mark the word at which they made this decision.
, intelligent agent </term> for execution at the appropriate <term> database </term> . <term>
#859The request is passed to a mobile, intelligent agent for execution at the appropriate database.
them , they need to be more sophisticated at responding to the <term> user </term> . The
#962However, the improved speech recognition has brought to light a new problem: as dialog systems understand more of what the user tells them, they need to be more sophisticated at responding to the user.
results from the <term> answering agents </term> at the <term> question , passage , and/or answer
#2387We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels.
information system that has been developed at our laboratory . Experimental evaluation
#4401Dialogue strategies based on the user modeling are implemented in the Kyoto city bus information system that has been developed at our laboratory.
performance of a <term> summarizer </term> , at times giving it a significant lead over
#5419It is found that the Bayesian approach generally leverages performance of a summarizer, at times giving it a significant lead over non-Bayesian models.
<term> word n-grams </term> and its application at the <term> character level </term> . The use
#6284This study establishes the equivalence between the standard use of BLEU in word n-grams and its application at the character level.
level </term> . The use of <term> BLEU </term> at the <term> character level </term> eliminates
#6293The use of BLEU at the character level eliminates the word segmentation problem: it makes it possible to directly compare commercial systems outputting unsegmented texts with, for instance, statistical MT systems which usually segment their outputs.
<term> Senseval series of workshops </term> . At the same time , the recent improvements
#6399Much effort has been put in designing and evaluating dedicated word sense disambiguation (WSD) models, in particular with the Senseval series of workshops. At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggests that SMT models are good at predicting the right translation of the words in source language sentences.
suggests that <term> SMT models </term> are good at predicting the right <term> translation </term>
#6424At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggests that SMT models are good at predicting the right translation of the words in source language sentences.
dependency tree </term> due to their weakness at resolving the <term> right-side dependencies
#6669Previous works on shift-reduce dependency parsers may not guarantee the connectivity of a dependency tree due to their weakness at resolving the right-side dependencies.
( i ) their <term> grammaticality </term> : at least 99 % correct sentences ; ( ii ) their
#7610We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets.
equivalence </term> in <term> meaning </term> : at least 96 % correct <term> paraphrases </term>
#7625We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets.