tech,11-2-H01-1041,bq |
consists of two
<term>
core modules
</term>
,
<term>
|
language
|
understanding and generation modules
</term>
|
#422
The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame. |
other,22-1-H01-1042,bq |
the
<term>
evaluation
</term>
of
<term>
human
|
language
|
learners
</term>
, to the
<term>
output
</term>
|
#567
The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems. |
tech,3-2-H01-1049,bq |
sources
</term>
. We integrate a
<term>
spoken
|
language
|
understanding system
</term>
with
<term>
intelligent
|
#799
We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources. |
other,13-3-H01-1055,bq |
extensively studied by the
<term>
natural
|
language
|
generation community
</term>
, though rarely
|
#982
The issue of system response to users has been extensively studied by the natural language generation community, though rarely in the context of dialog systems. |
model,11-1-H01-1058,bq |
address the problem of combining several
<term>
|
language
|
models ( LMs )
</term>
. We find that simple
|
#1038
In this paper, we address the problem of combining several language models (LMs). |
tech,17-1-H01-1070,bq |
key prediction
</term>
and
<term>
Thai-English
|
language
|
identification
</term>
. The paper also proposes
|
#1259
This paper proposes a practical approach employing n-gram models and error-correction rules for Thai key prediction and Thai-English language identification. |
other,10-5-P01-1007,bq |
of the
<term>
main parser
</term>
for a
<term>
|
language
|
L
</term>
are directed by a
<term>
guide
</term>
|
#1710
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. |
tech,6-1-P01-1008,bq |
interpretation and generation of natural
|
language
|
</term>
, current systems use manual or semi-automatic
|
#1764
While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. |
tech,14-2-P01-1009,bq |
attention
</term>
, yet present
<term>
natural
|
language
|
search engines
</term>
perform poorly on
<term>
|
#1861
These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them. |
tech,7-1-P01-1056,bq |
training
</term>
modules of a
<term>
natural
|
language
|
generator
</term>
have recently been proposed
|
#2020
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches. |
other,11-4-N03-1001,bq |
evaluated on three different
<term>
spoken
|
language
|
system domains
</term>
. Motivated by the
|
#2302
The classification accuracy of the method is evaluated on three different spoken language system domains. |
tech,14-1-N03-1004,bq |
learning
</term>
and other areas of
<term>
natural
|
language
|
processing
</term>
, we developed a
<term>
|
#2321
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora. |
other,9-3-N03-1017,bq |
results , which hold for all examined
<term>
|
language
|
pairs
</term>
, suggest that the highest
|
#2597
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. |
tech,6-1-N03-2003,bq |
<term>
training data
</term>
suitable for
<term>
|
language
|
modeling
</term>
of
<term>
conversational speech
|
#3020
Sources of training data suitable for language modeling of conversational speech are limited. |
model,28-1-N03-2006,bq |
corpus
</term>
and , in addition , the
<term>
|
language
|
model
</term>
of an in-domain
<term>
monolingual
|
#3107
In order to boost the translation quality of EBMT based on a small-sized bilingual corpus, we use an out-of-domain bilingual corpus and, in addition, the language model of an in-domain monolingual corpus. |
model,11-3-N03-2036,bq |
model
</term>
and a
<term>
word-based trigram
|
language
|
model
</term>
. During
<term>
training
</term>
|
#3441
During decoding, we use a block unigram model and a word-based trigram language model. |
tech,11-1-N03-3010,bq |
Cooperative Model
</term>
for
<term>
natural
|
language
|
understanding
</term>
in a
<term>
dialogue
|
#3489
In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. |
tech,27-2-N03-4004,bq |
languages
</term>
by leveraging
<term>
human
|
language
|
technology
</term>
. The
<term>
JAVELIN system
|
#3632
It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology. |
tech,13-1-N03-4010,bq |
architecture
</term>
with a variety of
<term>
|
language
|
processing modules
</term>
to provide an
<term>
|
#3648
The JAVELIN system integrates a flexible, planning-based architecture with a variety of language processing modules to provide an open-domain question answering capability on free text. |
other,13-1-P03-1005,bq |
Kernel
</term>
for
<term>
structured natural
|
language
|
data
</term>
. The
<term>
HDAG Kernel
</term>
|
#3804
This paper proposes the Hierarchical Directed Acyclic Graph (HDAG) Kernel for structured natural language data. |