|
record and store a
<term>
conversation
</term>
|
for
|
documentation . The question is , however
|
#33
Given the development of storage media and networks one could just record and store a conversation for documentation. |
|
distributed message-passing infrastructure
</term>
|
for
|
<term>
dialogue systems
</term>
which all
<term>
|
#243
To support engaging human users in robust, mixed-initiative speech dialogue interactions which reach beyond current capabilities in dialogue systems, the DARPA Communicator program [1] is funding the development of a distributed message-passing infrastructure for dialogue systems which all Communicator participants are using. |
|
features of and
<term>
requirements
</term>
|
for
|
a genuinely useful
<term>
software infrastructure
|
#264
In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose. |
|
useful
<term>
software infrastructure
</term>
|
for
|
this purpose . In this paper we show how
|
#270
In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose. |
|
<term>
translation output
</term>
sufficient
|
for
|
content understanding of the
<term>
original
|
#536
Having been trained on Korean newspaper articles on missiles and chemical biological warfare, the system produces the translation output sufficient for content understanding of the original document. |
|
evaluation techniques
</term>
, originally devised
|
for
|
the
<term>
evaluation
</term>
of
<term>
human
|
#562
The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems. |
|
Listen-Communicate-Show ( LCS )
</term>
is a new paradigm
|
for
|
<term>
human interaction with data sources
|
#788
Listen-Communicate-Show (LCS) is a new paradigm for human interaction with data sources. |
|
a
<term>
mobile , intelligent agent
</term>
|
for
|
execution at the appropriate
<term>
database
|
#857
The request is passed to a mobile, intelligent agent for execution at the appropriate database. |
|
experimental results that clearly show the need
|
for
|
a
<term>
dynamic language model combination
|
#1140
We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. |
|
</term>
. We describe a three-tiered approach
|
for
|
<term>
evaluation
</term>
of
<term>
spoken dialogue
|
#1200
We describe a three-tiered approach for evaluation of spoken dialogue systems. |
|
</term>
and
<term>
error-correction rules
</term>
|
for
|
<term>
Thai key prediction
</term>
and
<term>
|
#1253
This paper proposes a practical approach employing n-gram models and error-correction rules for Thai key prediction and Thai-English language identification. |
|
choice of
<term>
syntactic structure
</term>
|
for
|
elementary
<term>
speech acts
</term>
and the
|
#1317
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
|
sentence planner
</term>
, and a new methodology
|
for
|
automatically training
<term>
SPoT
</term>
|
#1351
In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
|
list of possible
<term>
sentence plans
</term>
|
for
|
a given
<term>
text-plan input
</term>
. Second
|
#1394
First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. |
|
choices
</term>
of the
<term>
main parser
</term>
|
for
|
a
<term>
language L
</term>
are directed by
|
#1708
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. |
|
output by a prior
<term>
RCL parser
</term>
|
for
|
a suitable
<term>
superset of L. The results
|
#1729
The non-deterministic parsing choices of the main parser for a language L are directed by a guide which uses the shared derivation forest output by a prior RCL parser for a suitable superset of L. |
|
<term>
paraphrasing
</term>
is critical both
|
for
|
<term>
interpretation and generation of natural
|
#1758
While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. |
|
<term>
unsupervised learning algorithm
</term>
|
for
|
<term>
identification of paraphrases
</term>
|
#1783
We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. |
|
paper presents a
<term>
formal analysis
</term>
|
for
|
a large class of
<term>
words
</term>
called
|
#1821
This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides. |
tech,0-1-P01-1056,bq |
<term>
logical form
</term>
.
<term>
Techniques
|
for
|
automatically training
</term>
modules of
|
#2013
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches. |