|
At MIT Lincoln Laboratory , we
have
been developing a
<term>
Korean-to-English machine translation system
</term><term>
CCLINC ( Common Coalition Language System at Lincoln Laboratory )
</term>
.
|
#392
At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory). |
|
We
have
built and will demonstrate an application of this approach called
<term>
LCS-Marine
</term>
.
|
#815
We have built and will demonstrate an application of this approach called LCS-Marine. |
|
We
have
demonstrated this capability in several field exercises with the Marines and are currently developing applications of this
<term>
technology
</term>
in
<term>
new domains
</term>
.
|
#888
We have demonstrated this capability in several field exercises with the Marines and are currently developing applications of this technology in new domains. |
|
Recent advances in
<term>
Automatic Speech Recognition technology
</term>
have
put the goal of naturally sounding
<term>
dialog systems
</term>
within reach .
|
#918
Recent advances in Automatic Speech Recognition technology have put the goal of naturally sounding dialog systems within reach. |
|
<term>
Techniques for automatically training
</term>
modules of a
<term>
natural language generator
</term>
have
recently been proposed , but a fundamental concern is whether the
<term>
quality
</term>
of
<term>
utterances
</term>
produced with
<term>
trainable components
</term>
can compete with
<term>
hand-crafted template-based or rule-based approaches
</term>
.
|
#2022
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches. |
|
Surprisingly , learning
<term>
phrases
</term>
longer than three
<term>
words
</term>
and learning
<term>
phrases
</term>
from
<term>
high-accuracy word-level alignment models
</term>
does not
have
a strong impact on performance .
|
#2648
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. |
|
<term>
FSM
</term>
provides two strategies for
<term>
language understanding
</term>
and
have
a high accuracy but little robustness and flexibility .
|
#3524
FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. |
|
Experiment results
have
shown that a
<term>
system
</term>
that exploits the proposed
<term>
method
</term>
performs sufficiently and that holding multiple
<term>
candidates
</term>
for
<term>
understanding
</term>
results is effective .
|
#4256
Experiment results have shown that a system that exploits the proposed method performs sufficiently and that holding multiple candidates for understanding results is effective. |
|
On a subset of the most difficult
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
accuracy
</term>
difference between the two approaches is only 14.0 % , and the difference could narrow further to 6.5 % if we disregard the advantage that
<term>
manually sense-tagged data
</term>
have
in their
<term>
sense coverage
</term>
.
|
#4909
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
|
The results show that the
<term>
features
</term>
in terms of which we formulate our
<term>
heuristic principles
</term>
have
significant
<term>
predictive power
</term>
, and that
<term>
rules
</term>
that closely resemble our
<term>
Horn clauses
</term>
can be learnt automatically from these
<term>
features
</term>
.
|
#5255
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
|
Along the way , we present the first comprehensive comparison of
<term>
unsupervised methods for part-of-speech tagging
</term>
, noting that published results to date
have
not been comparable across
<term>
corpora
</term>
or
<term>
lexicons
</term>
.
|
#5549
Along the way, we present the first comprehensive comparison of unsupervised methods for part-of-speech tagging, noting that published results to date have not been comparable across corpora or lexicons. |
|
This paper investigates some
<term>
computational problems
</term>
associated with
<term>
probabilistic translation models
</term>
that
have
recently been adopted in the literature on
<term>
machine translation
</term>
.
|
#7446
This paper investigates some computational problems associated with probabilistic translation models that have recently been adopted in the literature on machine translation. |
|
This tends to support the view that despite recent speculative claims to the contrary , current
<term>
SMT models
</term>
do
have
limitations in comparison with dedicated
<term>
WSD models
</term>
, and that
<term>
SMT
</term>
should benefit from the better predictions made by the
<term>
WSD models
</term>
.
|
#7963
This tends to support the view that despite recent speculative claims to the contrary, current SMT models do have limitations in comparison with dedicated WSD models, and that SMT should benefit from the better predictions made by the WSD models. |
|
Over the last few years dramatic improvements
have
been made , and a number of comparative evaluations have shown , that
<term>
SMT
</term>
gives competitive results to
<term>
rule-based translation systems
</term>
, requiring significantly less development time .
|
#8012
Over the last few years dramatic improvements have been made, and a number of comparative evaluations have shown, that SMT gives competitive results to rule-based translation systems, requiring significantly less development time. |
|
Over the last few years dramatic improvements have been made , and a number of comparative evaluations
have
shown , that
<term>
SMT
</term>
gives competitive results to
<term>
rule-based translation systems
</term>
, requiring significantly less development time .
|
#8022
Over the last few years dramatic improvements have been made, and a number of comparative evaluations have shown, that SMT gives competitive results to rule-based translation systems, requiring significantly less development time. |
|
In this paper we study a set of problems that are of considerable importance to
<term>
Statistical Machine Translation ( SMT )
</term>
but which
have
not been addressed satisfactorily by the
<term>
SMT research community
</term>
.
|
#9945
In this paper we study a set of problems that are of considerable importance to Statistical Machine Translation (SMT) but which have not been addressed satisfactorily by the SMT research community. |
|
Over the last decade , a variety of
<term>
SMT algorithms
</term>
have
been built and empirically tested whereas little is known about the
<term>
computational complexity
</term>
of some of the fundamental problems of
<term>
SMT
</term>
.
|
#9966
Over the last decade, a variety of SMT algorithms have been built and empirically tested whereas little is known about the computational complexity of some of the fundamental problems of SMT. |
|
We first apply approaches that
have
been proposed for
<term>
predicting top-level topic shifts
</term>
to the problem of
<term>
identifying subtopic boundaries
</term>
.
|
#10493
We first apply approaches that have been proposed for predicting top-level topic shifts to the problem of identifying subtopic boundaries. |
|
We also find that the
<term>
transcription errors
</term>
inevitable in
<term>
ASR output
</term>
have
a negative impact on models that combine
<term>
lexical-cohesion and conversational features
</term>
, but do not change the general preference of approach for the two tasks .
|
#10619
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks. |
|
Finally , we
have
shown that these results can be improved using a bigger and a more homogeneous
<term>
corpus
</term>
to train , that is , a bigger
<term>
corpus
</term>
written by one unique
<term>
author
</term>
.
|
#11285
Finally, we have shown that these results can be improved using a bigger and a more homogeneous corpus to train, that is, a bigger corpus written by one unique author. |