tech,11-2-H01-1041,bq |
The
<term>
CCLINC Korean-to-English translation system
</term>
consists of two
<term>
core modules
</term>
,
<term>
language
understanding and generation modules
</term>
mediated by a
<term>
language neutral meaning representation
</term>
called a
<term>
semantic frame
</term>
.
|
#422
The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame. |
other,22-1-H01-1042,bq |
The purpose of this research is to test the efficacy of applying
<term>
automated evaluation techniques
</term>
, originally devised for the
<term>
evaluation
</term>
of
<term>
human
language
learners
</term>
, to the
<term>
output
</term>
of
<term>
machine translation ( MT ) systems
</term>
.
|
#567
The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems. |
tech,3-2-H01-1049,bq |
We integrate a
<term>
spoken
language
understanding system
</term>
with
<term>
intelligent mobile agents
</term>
that mediate between
<term>
users
</term>
and
<term>
information sources
</term>
.
|
#799
We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources. |
other,13-3-H01-1055,bq |
The issue of
<term>
system response
</term>
to
<term>
users
</term>
has been extensively studied by the
<term>
natural
language
generation community
</term>
, though rarely in the context of
<term>
dialog systems
</term>
.
|
#982
The issue of system response to users has been extensively studied by the natural language generation community, though rarely in the context of dialog systems. |
model,11-1-H01-1058,bq |
In this paper , we address the problem of combining several
<term>
language
models ( LMs )
</term>
.
|
#1038
In this paper, we address the problem of combining several language models (LMs). |
tech,17-1-H01-1070,bq |
This paper proposes a practical approach employing
<term>
n-gram models
</term>
and
<term>
error-correction rules
</term>
for
<term>
Thai key prediction
</term>
and
<term>
Thai-English
language
identification
</term>
.
|
#1259
This paper proposes a practical approach employing n-gram models and error-correction rules for Thai key prediction and Thai-English language identification. |
other,10-5-P01-1007,bq |
The
<term>
non-deterministic parsing choices
</term>
of the
<term>
main parser
</term>
for a
<term>
language
L
</term>
are directed by a
<term>
guide
</term>
which uses the
<term>
shared derivation forest
</term>
output by a prior
<term>
RCL parser
</term>
for a suitable
<term>
superset of L
</term>
.
|
#1710
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. |
tech,6-1-P01-1008,bq |
While
<term>
paraphrasing
</term>
is critical both for
<term>
interpretation and generation of natural
language
</term>
, current systems use manual or semi-automatic methods to collect
<term>
paraphrases
</term>
.
|
#1764
While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. |
tech,14-2-P01-1009,bq |
These
<term>
words
</term>
appear frequently enough in
<term>
dialog
</term>
to warrant serious
<term>
attention
</term>
, yet present
<term>
natural
language
search engines
</term>
perform poorly on
<term>
queries
</term>
containing them .
|
#1861
These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them. |
tech,7-1-P01-1056,bq |
<term>
Techniques for automatically training
</term>
modules of a
<term>
natural
language
generator
</term>
have recently been proposed , but a fundamental concern is whether the
<term>
quality
</term>
of
<term>
utterances
</term>
produced with
<term>
trainable components
</term>
can compete with
<term>
hand-crafted template-based or rule-based approaches
</term>
.
|
#2020
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches. |
other,11-4-N03-1001,bq |
The
<term>
classification accuracy
</term>
of the
<term>
method
</term>
is evaluated on three different
<term>
spoken
language
system domains
</term>
.
|
#2302
The classification accuracy of the method is evaluated on three different spoken language system domains. |
tech,14-1-N03-1004,bq |
Motivated by the success of
<term>
ensemble methods
</term>
in
<term>
machine learning
</term>
and other areas of
<term>
natural
language
processing
</term>
, we developed a
<term>
multi-strategy and multi-source approach to question answering
</term>
which is based on combining the results from different
<term>
answering agents
</term>
searching for
<term>
answers
</term>
in multiple
<term>
corpora
</term>
.
|
#2321
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora. |
other,9-3-N03-1017,bq |
Our empirical results , which hold for all examined
<term>
language
pairs
</term>
, suggest that the highest levels of performance can be obtained through relatively simple means :
<term>
heuristic learning
</term>
of
<term>
phrase translations
</term>
from
<term>
word-based alignments
</term>
and
<term>
lexical weighting
</term>
of
<term>
phrase translations
</term>
.
|
#2597
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. |
tech,6-1-N03-2003,bq |
Sources of
<term>
training data
</term>
suitable for
<term>
language
modeling
</term>
of
<term>
conversational speech
</term>
are limited .
|
#3020
Sources of training data suitable for language modeling of conversational speech are limited. |
model,28-1-N03-2006,bq |
In order to boost the
<term>
translation quality
</term>
of
<term>
EBMT
</term>
based on a small-sized
<term>
bilingual corpus
</term>
, we use an out-of-domain
<term>
bilingual corpus
</term>
and , in addition , the
<term>
language
model
</term>
of an in-domain
<term>
monolingual corpus
</term>
.
|
#3107
In order to boost the translation quality of EBMT based on a small-sized bilingual corpus, we use an out-of-domain bilingual corpus and, in addition, the language model of an in-domain monolingual corpus. |
model,11-3-N03-2036,bq |
During
<term>
decoding
</term>
, we use a
<term>
block unigram model
</term>
and a
<term>
word-based trigram
language
model
</term>
.
|
#3441
During decoding, we use a block unigram model and a word-based trigram language model. |
tech,11-1-N03-3010,bq |
In this paper , we propose a novel
<term>
Cooperative Model
</term>
for
<term>
natural
language
understanding
</term>
in a
<term>
dialogue system
</term>
.
|
#3489
In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. |
tech,27-2-N03-4004,bq |
It gives users the ability to spend their time finding more data relevant to their task , and gives them translingual reach into other
<term>
languages
</term>
by leveraging
<term>
human
language
technology
</term>
.
|
#3632
It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology. |
tech,13-1-N03-4010,bq |
The
<term>
JAVELIN system
</term>
integrates a flexible ,
<term>
planning-based architecture
</term>
with a variety of
<term>
language
processing modules
</term>
to provide an
<term>
open-domain question answering capability
</term>
on
<term>
free text
</term>
.
|
#3648
The JAVELIN system integrates a flexible, planning-based architecture with a variety of language processing modules to provide an open-domain question answering capability on free text. |
other,13-1-P03-1005,bq |
This paper proposes the
<term>
Hierarchical Directed Acyclic Graph ( HDAG ) Kernel
</term>
for
<term>
structured natural
language
data
</term>
.
|
#3804
This paper proposes the Hierarchical Directed Acyclic Graph (HDAG) Kernel for structured natural language data. |