consists of two
<term>
core modules
</term>
,
<term>
language
understanding and generation modules
</term>
#422The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame.
tech,19-2-H01-1041,ak
generation modules
</term>
mediated by a
<term>
language
neutral meaning representation
</term>
called
#430The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame.
other,18-3-H01-1041,ak
of
<term>
Korean
</term>
( a
<term>
verb final
language
</term>
with
<term>
overt case markers
</term>
#459The key features of the system include: (i) Robust efficient parsing of Korean (a verb final language with overt case markers, relatively free word order, and frequent omissions of arguments).
other,17-4-H01-1041,ak
order generation
</term>
of the
<term>
target
language
</term>
. ( iii ) Rapid
<term>
system development
#495(ii) High quality translation via word sense disambiguation and accurate word order generation of the target language.
other,22-1-H01-1042,ak
the
<term>
evaluation
</term>
of
<term>
human
language
learners
</term>
, to the
<term>
output
</term>
#567The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems.
other,12-2-H01-1042,ak
provide information about both the
<term>
human
language
learning process
</term>
, the
<term>
translation
#594We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems.
other,1-4-H01-1042,ak
</term>
of
<term>
MT output
</term>
. A
<term>
language
learning experiment
</term>
showed that
<term>
#629A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words.
other,9-4-H01-1042,ak
differentiate
<term>
native from non-native
language
essays
</term>
in less than 100
<term>
words
#640A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words.
tech,3-2-H01-1049,ak
sources
</term>
. We integrate a
<term>
spoken
language
understanding system
</term>
with
<term>
intelligent
#799We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources.
other,13-3-H01-1055,ak
extensively studied by the
<term>
natural
language
generation community
</term>
, though rarely
#982The issue of system response to users has been extensively studied by the natural language generation community, though rarely in the context of dialog systems.
model,11-1-H01-1058,ak
address the problem of combining several
<term>
language
models ( LMs )
</term>
. We find that simple
#1038In this paper, we address the problem of combining several language models (LMs).
tech,11-5-H01-1058,ak
clearly show the need for a
<term>
dynamic
language
model combination
</term>
to improve the
<term>
#1143We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further.
tech,17-1-H01-1070,ak
key prediction
</term>
and
<term>
Thai-English
language
identification
</term>
. The paper also proposes
#1259This paper proposes a practical approach employing n-gram models and error-correction rules for Thai key prediction and Thai-English language identification.
tech,10-3-H01-1070,ak
than 99 %
<term>
accuracy
</term>
in both
<term>
language
identification
</term>
and
<term>
key prediction
#1287Our algorithm reported more than 99% accuracy in both language identification and key prediction.
other,3-2-P01-1007,ak
In particular ,
<term>
range concatenation
languages
[ RCL ]
</term>
can be parsed in
<term>
polynomial
#1626In particular, range concatenation languages [RCL] can be parsed in polynomial time and many classical grammatical formalisms can be translated into equivalent RCGs without increasing their worst-case parsing time complexity.
other,10-5-P01-1007,ak
of the
<term>
main parser
</term>
for a
<term>
language
L
</term>
are directed by a guide which uses
#1710The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L .
tech,6-1-P01-1008,ak
interpretation and generation of natural
language
</term>
, current systems use
<term>
manual
#1765While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases.
tech,14-2-P01-1009,ak
serious attention , yet present
<term>
natural
language
search engines
</term>
perform poorly on
<term>
#1862These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them.
tech,12-4-P01-1009,ak
operational semantics
</term>
of
<term>
natural
language
applications
</term>
improve , even larger
#1917The value of this approach is that as the operational semantics of natural language applications improve, even larger improvements are possible.
tech,7-1-P01-1056,ak
automatically training modules of a
<term>
natural
language
generator
</term>
have recently been proposed
#2021Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches.