#251
To support engaging human users in robust, mixed-initiative speech dialogue interactions that reach beyond the capabilities of current dialogue systems, the DARPA Communicator program [1] is funding the development of a distributed message-passing infrastructure for dialogue systems that all Communicator participants are using.
#17114
This performance is comparable to our best condition for this test suite, using 109 training speakers.
#21246
The models were constructed using a 5K vocabulary and trained using a 76-million-word Wall Street Journal text corpus.
#11293
Finally, we have shown that these results can be improved by training on a bigger and more homogeneous corpus, that is, a larger corpus written by a single author.
#12099
Path-based inference rules may be written using a binary relational calculus notation.
#1110
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
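A minimal sketch of the oracle selection described in #1110, assuming a list of word strings (one per LM) and a known reference transcript; all function and variable names are illustrative, since the abstract gives no implementation:

```python
# Oracle LM selection: pick the word string with the lowest word error
# rate (WER) against the known reference. Illustrative sketch only.

def word_error_rate(hyp: str, ref: str) -> float:
    """Word-level Levenshtein distance, normalized by reference length."""
    h, r = hyp.split(), ref.split()
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + (r[i - 1] != h[j - 1]),
                           dp[i - 1][j] + 1,
                           dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / max(len(r), 1)

def oracle_select(word_strings: list[str], reference: str) -> str:
    """Return the candidate (one per LM) with the lowest WER."""
    return min(word_strings, key=lambda ws: word_error_rate(ws, reference))
```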
#18182
When a very noisy portion is detected, the parser skips that portion using a fake non-terminal symbol.
#7083
Furthermore, we present a standalone system that resolves pronouns in unannotated text by using a fully automatic sequence of preprocessing modules that mimics the manual annotation process.
#17026
First, we present a new paradigm for speaker-independent (SI) training of hidden Markov models (HMMs), which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers.
#14761
Our interpretation differs from that of Pereira and Shieber by using a logical model in place of a denotational semantics.
#1163
We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree.
#9706
Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot.
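The pivot idea in #9706 can be made concrete as the usual composition of translation probabilities, p(e2 | e1) = sum over f of p(f | e1) * p(e2 | f); the sketch below, with toy phrase tables and illustrative probabilities, is an assumption-laden illustration rather than the paper's code:

```python
from collections import defaultdict

# Toy phrase tables (illustrative values, not real alignment counts):
e2f = {"under control": {"unter kontrolle": 0.8}}   # p(f | e)
f2e = {"unter kontrolle": {"under control": 0.7,    # p(e | f)
                           "in check": 0.3}}

def paraphrase_scores(e1: str) -> dict[str, float]:
    """Score paraphrases of e1 by summing over foreign pivot phrases."""
    scores = defaultdict(float)
    for f, p_f in e2f.get(e1, {}).items():
        for e2, p_e2 in f2e.get(f, {}).items():
            if e2 != e1:                 # skip the phrase itself
                scores[e2] += p_f * p_e2
    return dict(scores)

print(paraphrase_scores("under control"))   # {'in check': 0.24}
```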
#9759
We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with that of paraphrases extracted from automatic alignments.
#19958
A novel method for adding linguistic annotation to corpora is presented which involves using a statistical POS tagger in conjunction with unsupervised structure-finding methods to derive notions of noun group, verb group, and so on; the method is inherently extensible to more sophisticated annotation and does not require a pre-tagged corpus to fit.
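As a toy stand-in for the structure finding in #19958 (the paper's method is unsupervised; the hand-written tag pattern below is purely illustrative), noun groups can be read off a statistical tagger's output:

```python
def noun_groups(tagged):
    """Collect maximal runs of determiner/adjective/noun tags."""
    groups, current = [], []
    for word, tag in tagged:
        if tag in {"DT", "JJ", "NN", "NNS", "NNP"}:
            current.append(word)
        elif current:
            groups.append(" ".join(current))
            current = []
    if current:
        groups.append(" ".join(current))
    return groups

print(noun_groups([("the", "DT"), ("quick", "JJ"), ("fox", "NN"),
                   ("jumps", "VBZ"), ("the", "DT"), ("lazy", "JJ"),
                   ("dog", "NN")]))
# ['the quick fox', 'the lazy dog']
```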
#21286
Using the BU recognition system, experiments show a 7% improvement in recognition accuracy with the mixture trigram models as compared to using a trigram model.
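A mixture trigram model of the kind #21286 compares against a single trigram is a convex combination p(w | h) = sum over k of lambda_k * p_k(w | h); the component models and weights below are assumptions for illustration, not the BU system's actual implementation:

```python
def mixture_prob(w, history, components, weights):
    """p(w | h) = sum_k lambda_k * p_k(w | h); weights sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(lam * p(w, history) for lam, p in zip(weights, components))

# Toy component models over a 5K vocabulary (hypothetical):
uniform = lambda w, h: 1 / 5000
finance = lambda w, h: 0.01 if w == "market" else 1 / 5000

print(mixture_prob("market", ("the", "stock"),
                   components=[finance, uniform], weights=[0.7, 0.3]))
# 0.7 * 0.01 + 0.3 * (1/5000) = 0.00706
```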
#12139
Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation.
#21150
Despite the large amount of theoretical work done on non-constituent coordination during the last two decades, many computational systems still treat coordination using adapted parsing strategies, in a similar fashion to the SYSCONJ system developed for ATNs.
#8701
A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence.
#16579
This paper proposes that sentence analysis should be treated as defeasible reasoning, and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige, which is a formalization of defeasible reasoning that includes arguments and defeat rules capturing defeasibility.