#50
The question is, however, how an interesting information piece would be found in a large database.
#119
This paper addresses the problem of the automatic detection of those activities in meeting situation and everyday rejoinders.
#211
To support engaging human users in robust, mixed-initiative speech dialogue interactions which reach beyond current capabilities in dialogue systems, the DARPA Communicator program [1] is funding the development of a distributed message-passing infrastructure for dialogue systems which all Communicator participants are using.
#223
To support engaging human users in robust, mixed-initiative speech dialogue interactions which reach beyond current capabilities in dialogue systems, the DARPA Communicator program [1] is funding the development of a distributed message-passing infrastructure for dialogue systems which all Communicator participants are using.
#320
We describe how this information is used in a prototype system designed to support information workers' access to a pharmaceutical news archive as part of their industry watch function.
#614
This, the first experiment in a series of experiments, looks at the intelligibility of MT output.
#642
A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words.
#770
The results of this experiment, along with a preliminary analysis of the factors involved in the decision making process, will be presented here.
#892
We have demonstrated this capability in several field exercises with the Marines and are currently developing applications of this technology in new domains.
#907
We have demonstrated this capability in several field exercises with the Marines and are currently developing applications of this technology in new domains.
#913
Recent advances in Automatic Speech Recognition technology have put the goal of naturally sounding dialog systems within reach.
#988
The issue of system response to users has been extensively studied by the natural language generation community, though rarely in the context of dialog systems.
#999
We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques.
#1230
We describe our use of this approach in numerous fielded user studies conducted with the U.S. military.
#1285
Our algorithm reported more than 99% accuracy in both language identification and key prediction.
#1513
We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams).
#1524
We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams).
#1561
Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster.
#1577
Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster.
#1618
The theoretical study of the range concatenation grammar [RCG] formalism has revealed many attractive properties which may be used in NLP.