#460
The key features of the system include: (i) Robust, efficient parsing of Korean (a verb-final language with overt case markers, relatively free word order, and frequent omissions of arguments).
#762
The results of this experiment, along with a preliminary analysis of the factors involved in the decision-making process, will be presented here.
#791
Listen-Communicate-Show (LCS) is a new paradigm for human interaction with data sources. |
#802
We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources.
#835
Using LCS-Marine, tactical personnel can converse with their logistics system to place a supply or information request. |
#896
We have demonstrated this capability in several field exercises with the Marines and are currently developing applications of this technology in new domains. |
#1082
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
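The oracle procedure in #1082 is easy to sketch: given one recognition hypothesis per language model (LM) and the reference word string, pick the hypothesis with the lowest word error rate (WER). A minimal Python sketch, with all names and data hypothetical:

```python
# Hypothetical sketch of the oracle described in #1082: given one hypothesis
# per LM, pick the one with the lowest WER against the reference.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def oracle_select(reference: str, lm_hypotheses: dict) -> str:
    """Return the LM whose hypothesis scores best against the reference."""
    return min(lm_hypotheses, key=lambda lm: wer(reference, lm_hypotheses[lm]))

# Toy usage: three LMs, one recognition hypothesis each.
hyps = {
    "lm_general": "place a supply request for ammo",
    "lm_domain":  "place a supply request for armor",
    "lm_topic":   "play the supply request for ammo",
}
print(oracle_select("place a supply request for ammo", hyps))  # -> lm_general
```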
#1124
Actually, the oracle acts like a dynamic combiner with hard decisions using the reference.
#1178
The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
#1190
The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
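The confidence-based method in #1178/#1190 replaces the oracle's reference with a per-LM confidence measure. A minimal sketch, assuming an illustrative confidence derived from each hypothesis' per-word log-probability (the actual measure used in the work may differ):

```python
# Hedged sketch of the confidence-based variant: no reference is available,
# so each LM's hypothesis is tagged with a confidence measure and the most
# confident one wins. The confidence function here is an illustrative
# stand-in, not the paper's actual measure.

import math

def confidence(logprob_per_word: float) -> float:
    """Map a hypothesis' per-word log-probability to a confidence score."""
    return math.exp(logprob_per_word)  # higher is more confident

def pick_best(tagged: dict) -> str:
    """tagged maps LM name -> (hypothesis, per-word logprob)."""
    best_lm = max(tagged, key=lambda lm: confidence(tagged[lm][1]))
    return tagged[best_lm][0]

hyps = {
    "lm_logistics": ("request five cases of MREs", -2.1),
    "lm_medevac":   ("request five cases of memories", -4.7),
}
print(pick_best(hyps))  # -> "request five cases of MREs"
```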
#1236
We describe our use of this approach in numerous fielded user studies conducted with the U.S. military. |
#1515
We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams). |
#1895
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics. |
#2038
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches. |
#2043
Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches. |
#2230
The method combines domain-independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription.
#2279
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier.
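The second stage of the pipeline in #2279 can be sketched as a phone-string classifier. The phone strings, classes, and the simple n-gram-overlap classifier below are hypothetical stand-ins for the off-the-shelf classifiers mentioned in #2230:

```python
# Illustrative sketch of the classification stage in #2279: map the phone
# sequence emitted by the recognizer to an utterance class. The data and
# the classifier are illustrative, not the paper's actual components.

from collections import Counter

def phone_ngrams(phones: str, n: int = 3) -> Counter:
    """Bag of phone n-grams from a space-separated phone string."""
    seq = phones.split()
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

def classify(phones: str, prototypes: dict) -> str:
    """Pick the class whose prototype shares the most phone n-grams."""
    grams = phone_ngrams(phones)
    return max(prototypes,
               key=lambda c: sum((grams & phone_ngrams(prototypes[c])).values()))

prototypes = {
    "supply_request": "p l ey s ah s ah p l ay r iy k w eh s t",
    "status_check":   "ch eh k dh ah s t ae t ah s",
}
print(classify("p l ey s dh ah s ah p l ay r iy k w eh s t", prototypes))
# -> "supply_request"
```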
#2722
The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks. |
#2893
Overall summarization quality of the proposed system is state-of-the-art, with guaranteed grammaticality of the system output due to the use of a constraint-based parser/generator. |
#3040
In this paper, we show not only how training data can be supplemented with text from the web, filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from that data by using class-dependent interpolation of N-grams.
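Class-dependent interpolation, as mentioned in #3040, replaces a single global mixing weight with one weight per word class when combining the in-domain and web N-gram models. A toy sketch with placeholder components:

```python
# Minimal sketch of class-dependent interpolation per #3040: each word
# class c gets its own weight lambda_c when mixing the in-domain and
# web-data N-gram models. The component models, class map, and weights
# below are hypothetical placeholders.

def interpolate(w, h, p_domain, p_web, word_class, lam):
    """P(w | h) with a weight chosen by the class of the predicted word."""
    l = lam[word_class(w)]
    return l * p_domain(w, h) + (1.0 - l) * p_web(w, h)

# Toy components: constant-probability "models" and a two-class map.
p_domain = lambda w, h: 0.02 if w == "logistics" else 0.001
p_web    = lambda w, h: 0.005
word_class = lambda w: "content" if len(w) > 4 else "function"
lam = {"content": 0.8, "function": 0.3}  # trust in-domain data more for content words

print(interpolate("logistics", ("the",), p_domain, p_web, word_class, lam))
# 0.8 * 0.02 + 0.2 * 0.005 = 0.017
```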