H01-1041:
At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system, CCLINC (Common Coalition Language System at Lincoln Laboratory).
The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language-neutral meaning representation called a semantic frame.
(ii) High quality translation via word sense disambiguation and accurate word order generation of the target language.
Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces translation output sufficient for content understanding of the original document.
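The interlingua design described above, in which understanding and generation are mediated by a language-neutral semantic frame, can be sketched in miniature. The frame slots, the tiny analysis lexicon, and the realization table below are invented for illustration; CCLINC's actual frames and grammars are far richer.

```python
# Minimal interlingua pipeline sketch: analysis fills a language-neutral
# semantic frame; generation linearizes it in target-language word order.
# The lexicons and slot names here are toy assumptions, not CCLINC's.

def understand(tokens, analysis_lexicon):
    """Map source tokens to (role, concept) slots in a semantic frame."""
    frame = {}
    for tok in tokens:
        if tok in analysis_lexicon:
            role, concept = analysis_lexicon[tok]
            frame[role] = concept
    return frame

def generate(frame, realization, order=("agent", "predicate", "theme")):
    """Realize the frame in English, imposing target-language word order
    (SVO), independent of the source-language (SOV) order."""
    return " ".join(realization.get(frame[r], frame[r])
                    for r in order if r in frame)

analysis = {"북한이": ("agent", "NORTH_KOREA"),
            "미사일을": ("theme", "MISSILE"),
            "발사했다": ("predicate", "LAUNCH")}
english = {"NORTH_KOREA": "North Korea",
           "MISSILE": "a missile",
           "LAUNCH": "launched"}

frame = understand("북한이 미사일을 발사했다".split(), analysis)
out = generate(frame, english)  # "North Korea launched a missile"
```

Because the frame is language-neutral, the same `generate` step handles the word-order divergence between Korean (verb-final) and English without any source-specific reordering rules.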
H01-1042:
The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems.
We believe that these evaluation techniques will provide information about the human language learning process, the translation process, and the development of machine translation systems.
We tested this to see if similar criteria could be elicited from duplicating the experiment using machine translation output.
Some of the extracts were expert human translations, others were machine translation outputs.
The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation.
P01-1004:
In this paper, we compare the relative effects of segment order, segmentation and segment contiguity on the retrieval performance of a translation memory system.
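The retrieval factors compared above (segment order, segmentation, segment contiguity) can be made concrete with a minimal fuzzy-match sketch. The bag-of-bigrams scorer below is a hypothetical illustration, not the system evaluated in the paper: bigram matching rewards contiguous word sequences while ignoring global segment order.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_score(query, candidate, n=2):
    """Dice coefficient over word bigrams: sensitive to segment
    contiguity, insensitive to global segment order (an assumption
    of this sketch, chosen to mirror the trade-offs under study)."""
    q, c = ngrams(query.split(), n), ngrams(candidate.split(), n)
    if not q or not c:
        return 0.0
    shared = sum(min(q.count(g), c.count(g)) for g in set(q))
    return 2.0 * shared / (len(q) + len(c))

def retrieve(query, memory):
    """Return the (source, target) TM entry whose source side best
    matches the query segment."""
    return max(memory, key=lambda pair: overlap_score(query, pair[0]))

memory = [
    ("the contract is signed", "le contrat est signe"),
    ("the meeting is cancelled", "la reunion est annulee"),
]
src, tgt = retrieve("the contract is cancelled", memory)
```

Here the query shares two of three bigrams with the first entry but only one with the second, so the first entry's translation is proposed for reuse.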
P01-1007:
For example, after translation into an equivalent RCG, any tree adjoining grammar can be parsed in O(n^6) time.
P01-1008:
We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text.
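The core intuition behind learning paraphrases from multiple translations is that different translators render the same source sentence with different but equivalent wording. The function below is a deliberately crude stand-in for the paper's unsupervised algorithm: it only proposes a word pair as a paraphrase candidate when two equal-length translations agree on the immediate context around a mismatch.

```python
def paraphrase_candidates(sent_a, sent_b):
    """From two English translations of the same source sentence,
    collect word pairs that differ while sharing identical immediate
    context. Equal-length, position-aligned sentences are an
    assumption of this sketch, not of the original algorithm."""
    a, b = sent_a.split(), sent_b.split()
    pairs = set()
    if len(a) != len(b):
        return pairs  # sketch handles only trivially aligned pairs
    for i, (wa, wb) in enumerate(zip(a, b)):
        if wa == wb:
            continue
        left_ok = i == 0 or a[i - 1] == b[i - 1]
        right_ok = i == len(a) - 1 or a[i + 1] == b[i + 1]
        if left_ok and right_ok:
            pairs.add((wa, wb))
    return pairs

pairs = paraphrase_candidates(
    "the boy began to cry",
    "the boy started to cry",
)
```

With matching neighbors "boy" and "to" on either side of the mismatch, ("began", "started") is extracted as a paraphrase candidate; a full system would generalize this with learned context rules rather than exact neighbor identity.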
N03-1017:
We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models.
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations.
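Lexical weighting, mentioned above, scores a phrase pair by how well its words translate each other under a word-translation lexicon estimated from word-based alignments. The sketch below follows one common formulation (averaging word-translation probabilities over each target word's aligned source words); the lexicon values are invented toy numbers, not estimates from real data.

```python
def lexical_weight(src_phrase, tgt_phrase, alignment, w):
    """Score a phrase pair (src_phrase -> tgt_phrase) by word-level
    translation probabilities w[(e, f)]. `alignment` is a set of
    (src_index, tgt_index) links from a word-based alignment.
    Each target word contributes the average probability over its
    aligned source words; unaligned words fall back to a NULL link."""
    score = 1.0
    for i, e in enumerate(tgt_phrase):
        links = [j for (j, k) in alignment if k == i]
        if not links:
            score *= w.get((e, None), 1e-9)  # NULL-aligned target word
        else:
            score *= sum(w.get((e, src_phrase[j]), 0.0)
                         for j in links) / len(links)
    return score

# Toy lexicon: invented probabilities for illustration only.
w = {("house", "casa"): 0.8, ("the", "la"): 0.6}
score = lexical_weight(["la", "casa"], ["the", "house"],
                       [(0, 0), (1, 1)], w)  # 0.6 * 0.8 = 0.48
```

In a phrase-based decoder this quantity is typically used as one feature among several (phrase translation probabilities, language model, distortion), smoothing the phrase table's sparse relative-frequency estimates.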
N03-1018:
We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text.
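To make "translation lexicon extraction" concrete: given sentence-paired text, candidate translations for a source word can be ranked by how often the pair co-occurs. The co-occurrence counter below is only a toy stand-in to show what a lexicon entry looks like; it does not model the paper's finite-state pipeline or its OCR error handling.

```python
from collections import Counter

def cooccurrence_lexicon(parallel, top_k=1):
    """Toy translation-lexicon extraction: for each source word,
    rank target words by sentence-level co-occurrence counts.
    `parallel` is a list of (source_sentence, target_sentence)
    pairs; real systems would use association scores and
    alignment models rather than raw counts."""
    counts = Counter()
    for src, tgt in parallel:
        for f in set(src.split()):
            for e in set(tgt.split()):
                counts[(f, e)] += 1
    lexicon = {}
    for (f, e), c in counts.items():
        lexicon.setdefault(f, []).append((c, e))
    return {f: [e for c, e in sorted(cands, reverse=True)[:top_k]]
            for f, cands in lexicon.items()}

parallel = [
    ("perro grande", "big dog"),
    ("perro pequeno", "small dog"),
]
lex = cooccurrence_lexicon(parallel)  # "perro" pairs with "dog" twice
```

Even on two sentence pairs, "perro" co-occurs with "dog" in both, outranking "big" and "small", which each co-occur only once.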