|
browsers
</term>
. At MIT Lincoln Laboratory , we
|
have
|
been developing a
<term>
Korean-to-English
|
#392
At MIT Lincoln Laboratory, we have been developing a Korean-to-English machine translation system CCLINC (Common Coalition Language System at Lincoln Laboratory). |
|
</term>
and
<term>
information sources
</term>
. We
|
have
|
built and will demonstrate an application
|
#815
We have built and will demonstrate an application of this approach called LCS-Marine. |
|
when a
<term>
request
</term>
is complete. We
|
have
|
demonstrated this capability in several
|
#888
We have demonstrated this capability in several field exercises with the Marines and are currently developing applications of this technology in new domains. |
|
Automatic Speech Recognition technology
</term>
|
have
|
put the goal of naturally sounding
<term>
|
#918
Recent advances in Automatic Speech Recognition technology have put the goal of naturally sounding dialog systems within reach. |
|
a
<term>
natural language generator
</term>
|
have
|
recently been proposed , but a fundamental
|
#2022
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches. |
|
word-level alignment models
</term>
does not
|
have
|
a strong impact on performance . Learning
|
#2648
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. |
|
for
<term>
language understanding
</term>
and
|
have
|
a high accuracy but little robustness and
|
#3524
FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. |
|
understanding process
</term>
. Experiment results
|
have
|
shown that a
<term>
system
</term>
that exploits
|
#4256
Experiment results have shown that a system that exploits the proposed method performs sufficiently and that holding multiple candidates for understanding results is effective. |
|
that
<term>
manually sense-tagged data
</term>
|
have
|
in their
<term>
sense coverage
</term>
. Our
|
#4909
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
|
formulate our
<term>
heuristic principles
</term>
|
have
|
significant
<term>
predictive power
</term>
|
#5255
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
|
</term>
, noting that published results to date
|
have
|
not been comparable across
<term>
corpora
|
#5549
Along the way, we present the first comprehensive comparison of unsupervised methods for part-of-speech tagging, noting that published results to date have not been comparable across corpora or lexicons. |
|
probabilistic translation models
</term>
that
|
have
|
recently been adopted in the literature
|
#7446
This paper investigates some computational problems associated with probabilistic translation models that have recently been adopted in the literature on machine translation. |
|
contrary , current
<term>
SMT models
</term>
do
|
have
|
limitations in comparison with dedicated
|
#7963
This tends to support the view that despite recent speculative claims to the contrary, current SMT models do have limitations in comparison with dedicated WSD models, and that SMT should benefit from the better predictions made by the WSD models. |
|
the last few years dramatic improvements
|
have
|
been made , and a number of comparative
|
#8012
Over the last few years dramatic improvements have been made, and a number of comparative evaluations have shown, that SMT gives competitive results to rule-based translation systems, requiring significantly less development time. |
|
and a number of comparative evaluations
|
have
|
shown , that
<term>
SMT
</term>
gives competitive
|
#8022
Over the last few years dramatic improvements have been made, and a number of comparative evaluations have shown, that SMT gives competitive results to rule-based translation systems, requiring significantly less development time. |
|
Machine Translation ( SMT )
</term>
but which
|
have
|
not been addressed satisfactorily by the
|
#9945
In this paper we study a set of problems that are of considerable importance to Statistical Machine Translation (SMT) but which have not been addressed satisfactorily by the SMT research community. |
|
a variety of
<term>
SMT algorithms
</term>
|
have
|
been built and empirically tested whereas
|
#9966
Over the last decade, a variety of SMT algorithms have been built and empirically tested whereas little is known about the computational complexity of some of the fundamental problems of SMT. |
|
two ways . We first apply approaches that
|
have
|
been proposed for
<term>
predicting top-level
|
#10493
We first apply approaches that have been proposed for predicting top-level topic shifts to the problem of identifying subtopic boundaries. |
|
</term>
inevitable in
<term>
ASR output
</term>
|
have
|
a negative impact on models that combine
|
#10619
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks. |
|
placing
<term>
commas
</term>
. Finally , we
|
have
|
shown that these results can be improved
|
#11285
Finally, we have shown that these results can be improved using a bigger and a more homogeneous corpus to train, that is, a bigger corpus written by one unique author. |