tech,10-1-H01-1041,bq | been developing a <term> Korean-to-English | machine | translation system </term> <term> CCLINC ( |
#397
At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory). |
tech,30-1-H01-1042,bq | </term> , to the <term> output </term> of <term> | machine | translation ( MT ) systems </term> . We believe |
#574
The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems. |
tech,24-2-H01-1042,bq | </term> and the <term> development </term> of <term> | machine | translation systems </term> . This , the |
#605
We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems. |
other,16-6-H01-1042,bq | from duplicating the experiment using <term> | machine | translation output </term> . Subjects were |
#678
We tested this to see if similar criteria could be elicited from duplicating the experiment using machine translation output. |
other,11-8-H01-1042,bq | human translations </term> , others were <term> | machine | translation outputs </term> . The subjects |
#708
Some of the extracts were expert human translations, others were machine translation outputs. |
other,24-9-H01-1042,bq | expert human translation </term> or a <term> | machine | translation </term> . Additionally , they |
#736
The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation. |
tech,28-4-H01-1055,bq | </term> can be overcome by employing <term> | machine | learning techniques </term> . In this paper |
#1023
We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques. |
tech,5-1-P01-1070,bq | </term> . We describe a set of <term> supervised | machine | learning </term> experiments centering on |
#2130
We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions. |
tech,8-1-N03-1004,bq | of <term> ensemble methods </term> in <term> | machine | learning </term> and other areas of <term> |
#2314
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora. |
tech,11-1-N03-2036,bq | unigram model </term> for <term> statistical | machine | translation </term> that uses a much simpler |
#3401
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. |
tech,6-2-P03-1050,bq | model </term> is based on <term> statistical | machine | translation </term> and it uses an <term> English |
#4453
The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. |
tech,4-1-C04-1035,bq | difference . This paper presents a <term> | machine | learning </term> approach to bare <term> sluice |
#5153
This paper presents a machine learning approach to bare sluice disambiguation in dialogue. |
tech,26-3-C04-1035,bq | dataset </term> , and run two different <term> | machine | learning algorithms </term> : <term> SLIPPER |
#5208
We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. |
tech,8-2-C04-1103,bq | this paper , a novel framework for <term> | machine | transliteration/back transliteration </term> |
#5751
In this paper, a novel framework for machine transliteration/back transliteration that allows us to carry out direct orthographical mapping (DOM) between two different languages is presented. |
tech,9-1-N04-1022,bq | MBR ) decoding </term> for <term> statistical | machine | translation </term> . This statistical approach |
#6553
We present Minimum Bayes-Risk (MBR) decoding for statistical machine translation. |
tech,1-4-N04-1024,bq | structure </term> . A <term> support vector | machine | </term> uses these <term> features </term> to |
#6708
A support vector machine uses these features to capture breakdowns in coherence due to relatedness to the essay question and relatedness between discourse elements. |
other,16-1-H05-1005,bq | multilingual input </term> to correct errors in <term> | machine | translation </term> and thus improve the |
#7141
In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries. |
tech,6-4-H05-1005,bq | </term> . Further , the use of multiple <term> | machine | translation systems </term> provides yet |
#7202
Further, the use of multiple machine translation systems provides yet more redundancy, yielding different ways to realize that information in English. |
other,6-5-H05-1005,bq | . We demonstrate how errors in the <term> | machine | translations </term> of the input <term> Arabic |
#7226
We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy, focusing on noun phrases. |
tech,13-2-H05-1012,bq |
training material
</term>
for problems in
<term>
|
machine
|
translation
</term>
and that a mixture of
|
#7279
We demonstrate that it is feasible to create training material for problems inmachine translation and that a mixture of supervised and unsupervised methods yields superior performance. |