tech,10-1-H01-1041,bq | developing a <term> Korean-to-English machine | translation | system </term><term> CCLINC ( Common Coalition | #398 At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory). |
tool,1-2-H01-1041,bq | </term> . The <term> CCLINC Korean-to-English | translation | system </term> consists of two <term> core | #414 The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame. |
tech,5-4-H01-1041,bq | arguments </term> ) . ( ii ) High quality <term> | translation | </term> via <term> word sense disambiguation | #482 (ii) High quality translation via word sense disambiguation and accurate word order generation of the target language. |
other,18-6-H01-1041,bq | the <term> system </term> produces the <term> | translation | output </term> sufficient for content understanding | #533 Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document. |
tech,30-1-H01-1042,bq | to the <term> output </term> of <term> machine | translation | ( MT ) systems </term> . We believe that | #575 The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems. |
other,18-2-H01-1042,bq | language learning process </term> , the <term> | translation | process </term> and the <term> development </term> | #599 We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems. |
tech,24-2-H01-1042,bq | the <term> development </term> of <term> machine | translation | systems </term> . This , the first experiment | #606 We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems. |
other,16-6-H01-1042,bq | duplicating the experiment using <term> machine | translation | output </term> . Subjects were given a set | #679 We tested this to see if similar criteria could be elicited from duplicating the experiment using machine translation output. |
other,11-8-H01-1042,bq | translations </term> , others were <term> machine | translation | outputs </term> . The subjects were given | #709 Some of the extracts were expert human translations, others were machine translation outputs. |
other,19-9-H01-1042,bq | sample output to be an <term> expert human | translation | </term> or a <term> machine translation </term> | #733 The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation. |
other,24-9-H01-1042,bq | human translation </term> or a <term> machine | translation | </term> . Additionally , they were asked | #737 The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation. |
tech,23-1-P01-1004,bq | <term> retrieval performance </term> of a <term> | translation | memory system </term> . We take a selection | #1484 In this paper, we compare the relative effects of segment order, segmentation and segment contiguity on the retrieval performance of a translation memory system. |
tech,4-3-P01-1007,bq | complexity </term> . For example , after <term> | translation | </term> into an equivalent <term> RCG </term> | #1659 For example, after translation into an equivalent RCG, any tree adjoining grammar can be parsed in O(n^6) time. |
model,4-1-N03-1017,bq | % ) . We propose a new <term> phrase-based | translation | model </term> and <term> decoding algorithm | #2544 We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. |
model,21-1-N03-1017,bq | , previously proposed <term> phrase-based | translation | models </term> . Within our framework , we | #2561 We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. |
lr,34-3-N03-1018,bq | <term> automatic extraction </term> of <term> | translation | lexicons </term> from <term> printed text </term> | #2778 We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text. |
other,5-1-N03-2006,bq | N-grams </term> . In order to boost the <term> | translation | quality </term> of <term> EBMT </term> based | #3084 In order to boost the translation quality of EBMT based on a small-sized bilingual corpus, we use an out-of-domain bilingual corpus and, in addition, the language model of an in-domain monolingual corpus. |
tech,11-1-N03-2036,bq | model </term> for <term> statistical machine | translation | </term> that uses a much simpler set of <term> | #3402 In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. |
other,1-2-N03-2036,bq | phrase-based models </term> . The <term> units of | translation | </term> are <term> blocks </term> - pairs of <term> | #3420 The units of translation are blocks - pairs of phrases. |
tech,6-2-P03-1050,bq | </term> is based on <term> statistical machine | translation | </term> and it uses an <term> English stemmer | #4454 The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. |