#460 The key features of the system include: (i) robust, efficient parsing of Korean (a verb-final language with overt case markers, relatively free word order, and frequent omission of arguments).
#762 The results of this experiment, along with a preliminary analysis of the factors involved in the decision-making process, will be presented here.
tech,9-1-H01-1049,ak
#791 Listen-Communicate-Show (LCS) is a new paradigm for human interaction with data sources.
#802 We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources.
#835 Using LCS-Marine, tactical personnel can converse with their logistics system to place a supply or information request.
#896 We have demonstrated this capability in several field exercises with the Marines and are currently developing applications of this technology in new domains.
#1082 The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
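The oracle selection in #1082 can be sketched as follows. This is a minimal, hypothetical illustration: the function names, the candidate dictionary, and the plain edit-distance word error rate are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of the oracle: given candidate word strings (each
# decoded with a different LM) and the reference word string, pick the
# candidate with the lowest word error rate (WER).

def wer(reference, hypothesis):
    """Word error rate via Levenshtein distance over word tokens."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / max(len(r), 1)

def oracle_select(reference, candidates):
    """candidates: {lm_name: word_string}. Return the (lm_name, word_string)
    pair whose word string has the lowest WER against the reference."""
    return min(candidates.items(), key=lambda kv: wer(reference, kv[1]))
```

Because the oracle needs the reference transcription, it gives an upper bound on what any real LM-selection scheme could achieve, rather than a deployable method.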
#1124 Actually, the oracle acts like a dynamic combiner with hard decisions using the reference.
#1178 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
#1190 The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
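The selection rule in #1178 might be sketched as below, under assumed data shapes: one hypothesis and one confidence score per LM, packaged as triples. All names are illustrative, not from the paper.

```python
# Hedged sketch of "tagging LMs with confidence measures and picking the
# best hypothesis": each LM contributes one hypothesis plus a confidence
# score, and we keep the hypothesis whose LM is most confident.
def pick_by_confidence(tagged):
    """tagged: iterable of (lm_name, hypothesis, confidence) triples.

    Returns the hypothesis from the LM with the highest confidence."""
    best_lm, best_hyp, best_conf = max(tagged, key=lambda t: t[2])
    return best_hyp
```

Unlike the oracle of #1082, this rule needs no reference transcription: the confidence measure stands in for the true error rate.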
#1236 We describe our use of this approach in numerous fielded user studies conducted with the U.S. military.
#1515 We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams).
#1896 I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics.
#2039 Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches.
#2044 Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches.
#2231 The method combines domain-independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription.
#2280 In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier.
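The two-stage pipeline in #2280 (phone recognition followed by phone-string classification) might be sketched as below. The recognizer and classifier are stand-in callables; all names and the string interface between the stages are assumptions for illustration.

```python
# Hypothetical sketch of the pipeline: a phone recognizer (trained without
# word transcriptions) turns audio into a phone string, and a separate
# classifier maps that phone string to an utterance class label.
def classify_utterance(audio, recognize_phones, phone_string_classifier):
    """recognize_phones: audio -> phone string; classifier: str -> label."""
    phones = recognize_phones(audio)
    return phone_string_classifier(phones)
```

The appeal of the design is that neither stage requires manual word-level transcription: the recognizer is trained unsupervised, and the classifier operates on phone strings rather than words.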
#2723 The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks.
#2894 Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator.
<term>
training data
</term>
can be supplemented
with
<term>
text
</term>
from the
<term>
web
</term>
#3041In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.