#428
The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame.
#979
The issue of system response to users has been extensively studied by the natural language generation community, though rarely in the context of dialog systems.
#1021
We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques.
#1109
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
#1361
In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges.
#1714
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L.
#1724
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L.
#1884
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics.
#2065
In this paper, we experimentally evaluate a trainable sentence planner for a spoken dialogue system by eliciting subjective human judgments.
#2307
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora.
#3072
In this paper, we show not only how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.
#3162
We describe a simple unsupervised technique for learning morphology by identifying hubs in an automaton.
#3371
Then, a Hidden Markov Model is trained on a corpus automatically tagged by the first learner.
#3629
It gives users the ability to spend their time finding more data relevant to their task, and gives them translingual reach into other languages by leveraging human language technology.
#3702
The operation of the system will be explained in depth through browsing the repository of data objects created by the system during each question answering session.
#4093
Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging, new similarity measures and expanded stop lists.
#4358
Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system.
#4367
Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system.
#4498
Monolingual, unannotated text can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre.
#4606
We approximate Arabic's rich morphology by a model in which a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme).