|
We address appropriate
<term>
user modeling
</term>
in order to generate
<term>
cooperative responses
</term>
to
each
<term>
user
</term>
in
<term>
spoken dialogue systems
</term>
.
|
#4291
We address appropriate user modeling in order to generate cooperative responses to each user in spoken dialogue systems. |
|
A method is described for combining the two styles in a single system in order to take advantage of the strengths of
each
.
|
#12183
A method is described for combining the two styles in a single system in order to take advantage of the strengths of each. |
|
Its selection of
<term>
fonts
</term>
and specification of
<term>
character
</term>
size are dynamically changeable , and the
<term>
typing location
</term>
can also be changed in lateral or longitudinal directions .
Each
<term>
character
</term>
has its own width , and a line length is computed as the sum of the widths of each
<term>
character
</term>
.
|
#12292
Its selection of fonts and specification of character size are dynamically changeable, and the typing location can also be changed in lateral or longitudinal directions. Each character has its own width, and a line length is computed as the sum of the widths of each character. |
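A minimal Python sketch of the line-length computation this record describes, assuming a hypothetical per-character width table (the record itself gives no units or values):

```python
# Minimal sketch of proportional line-length computation: each character
# has its own width, and a line's length is the sum of its characters'
# widths. The width table below is hypothetical; units are arbitrary.
char_widths = {"i": 3, "l": 3, "a": 6, "m": 10, "W": 11}

def line_length(line: str, default_width: int = 7) -> int:
    """Sum per-character widths; unknown characters fall back to a default."""
    return sum(char_widths.get(ch, default_width) for ch in line)

assert line_length("ill") == 9
```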
|
The system identifies a strength of
<term>
antecedence recovery
</term>
for each of the
<term>
lexical substitutions
</term>
, and matches them against the
<term>
strength of potential antecedence
</term>
of
each
element in the
<term>
text
</term>
to select the proper
<term>
substitutions
</term>
for these elements .
|
#13812
The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements. |
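A hedged sketch of the matching step record #13812 describes: each lexical substitution carries a recovery strength, each text element a potential-antecedence strength, and each element is paired with the closest-strength substitution. All names and scores below are illustrative, not the system's actual values:

```python
# Hypothetical strength-matching step: pair each text element with the
# substitution whose recovery strength is closest to the element's
# potential-antecedence strength. All entries are invented for illustration.
substitutions = {"the device": 0.9, "it": 0.4, "the machine": 0.7}
elements = {"printer": 0.85, "cable": 0.45}

for element, potential in elements.items():
    best = min(substitutions, key=lambda s: abs(substitutions[s] - potential))
    print(f"{element!r} -> {best!r}")
```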
|
As
each
new
<term>
edge
</term>
is added to the
<term>
chart
</term>
, the algorithm checks only the topmost of the
<term>
edges
</term>
adjacent to it , rather than all such
<term>
edges
</term>
as in conventional treatments .
|
#17601
As each new edge is added to the chart, the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments. |
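A sketch of the adjacency check in record #17601, under the assumption that edges ending at each chart position are kept on a stack, so only the topmost adjacent edge is inspected when a new edge is added:

```python
# Sketch of the check described above, assuming edges ending at each chart
# position are stacked: when a new edge starting at position i is added,
# only the topmost edge ending at i is checked, rather than every
# adjacent edge as in conventional treatments.
from collections import defaultdict

ends_at = defaultdict(list)  # chart position -> stack of edges ending there

def add_edge(start: int, end: int, label: str):
    edge = (start, end, label)
    stack = ends_at[start]
    if stack:                 # conventional treatments would scan all of stack
        topmost = stack[-1]   # here only the topmost adjacent edge is checked
        print("check", topmost, "against", edge)
    ends_at[end].append(edge)
    return edge

add_edge(0, 1, "Det")
add_edge(1, 2, "N")  # checks only the topmost edge ending at position 1
```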
|
The
<term>
intentional structure
</term>
captures the
<term>
discourse-relevant purposes
</term>
, expressed in
each
of the
<term>
linguistic segments
</term>
as well as relationships among them .
|
#14182
The intentional structure captures the discourse-relevant purposes, expressed in each of the linguistic segments as well as relationships among them. |
|
The
<term>
attentional state
</term>
, being dynamic , records the objects , properties , and relations that are salient at
each
point of the
<term>
discourse
</term>
.
|
#14232
The attentional state, being dynamic, records the objects, properties, and relations that are salient at each point of the discourse. |
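Attentional state in this tradition is commonly realized as a stack of focus spaces; the sketch below illustrates that realization with invented entities, not the cited system's implementation:

```python
# One common realization of an attentional state: a stack of focus spaces.
# Each discourse segment pushes a space holding its salient objects,
# properties, and relations; popping restores the prior state.
focus_stack: list[set[str]] = []

def push_segment(salient: set[str]):
    focus_stack.append(salient)

def pop_segment():
    focus_stack.pop()

def salient_now() -> set[str]:
    return focus_stack[-1] if focus_stack else set()

push_segment({"the engine", "the flywheel"})
push_segment({"the bolt"})   # a sub-segment narrows attention
print(salient_now())         # {'the bolt'}
pop_segment()
print(salient_now())         # back to the enclosing segment's entities
```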
|
We take a selection of both
<term>
bag-of-words and segment order-sensitive string comparison methods
</term>
, and run
each
over both
<term>
character - and word-segmented data
</term>
, in combination with a range of
<term>
local segment contiguity models
</term>
( in the form of
<term>
N-grams
</term>
) .
|
#1504
We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams). |
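A sketch of the contrast in record #1504, with the Dice coefficient standing in for a bag-of-words method and difflib's matching-block ratio for an order-sensitive one (the record does not name the specific methods), both run over character N-grams:

```python
# Contrast between an order-insensitive and an order-sensitive string
# comparison, both over character N-grams. Dice and difflib's ratio are
# stand-ins for the methods the record leaves unnamed.
from difflib import SequenceMatcher

def ngrams(text: str, n: int = 2) -> list[str]:
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def bag_of_words_dice(a: str, b: str, n: int = 2) -> float:
    """Order-insensitive: compares N-gram sets via Dice overlap."""
    ga, gb = set(ngrams(a, n)), set(ngrams(b, n))
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def order_sensitive(a: str, b: str, n: int = 2) -> float:
    """Order-sensitive: matching-block ratio over N-gram sequences."""
    return SequenceMatcher(None, ngrams(a, n), ngrams(b, n)).ratio()

print(bag_of_words_dice("string comparison", "comparison string"))  # high
print(order_sensitive("string comparison", "comparison string"))    # lower
```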
|
According to our assumption , most of the words with similar
<term>
context features
</term>
in
each
author 's
<term>
corpus
</term>
tend not to be
<term>
synonymous expressions
</term>
.
|
#6172
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions. |
|
We train a
<term>
maximum entropy classifier
</term>
that , given a pair of
<term>
sentences
</term>
, can reliably determine whether or not they are
<term>
translations
</term>
of
each
other .
|
#9022
We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other. |
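Since a maximum entropy classifier is equivalent to logistic regression over pair features, one plausible sketch of record #9022 uses scikit-learn with two illustrative features; the actual feature set is not given in the record:

```python
# Maximum entropy classification over sentence pairs, realized as logistic
# regression. The two features (length ratio, token overlap) are
# illustrative assumptions, not the paper's feature set.
from sklearn.linear_model import LogisticRegression

def pair_features(src: str, tgt: str) -> list[float]:
    s, t = src.split(), tgt.split()
    length_ratio = min(len(s), len(t)) / max(len(s), len(t))
    overlap = len(set(s) & set(t)) / max(len(set(s) | set(t)), 1)
    return [length_ratio, overlap]

X = [pair_features("the cat sleeps", "le chat dort"),
     pair_features("the cat sleeps", "il pleut beaucoup aujourd'hui")]
y = [1, 0]  # 1 = translations of each other, 0 = not

clf = LogisticRegression().fit(X, y)
print(clf.predict([pair_features("the dog runs", "le chien court")]))
```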
|
Since multiple
<term>
candidates
</term>
for the
<term>
understanding
</term>
result can be obtained for a
<term>
user utterance
</term>
due to the
<term>
ambiguity
</term>
of
<term>
speech understanding
</term>
, it is not appropriate to decide on a single
<term>
understanding result
</term>
after
each
<term>
user utterance
</term>
.
|
#4188
Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance. |
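A sketch of the alternative record #4188 argues for: keeping a scored n-best list of understanding candidates across turns instead of committing after each utterance. Candidates and scores are invented:

```python
# Defer the single-result decision: maintain a scored n-best list of
# understanding candidates across utterances. All scores and strings
# below are illustrative.
nbest: list[tuple[float, str]] = []

def update(candidates: list[tuple[float, str]], beam: int = 3):
    """Merge this utterance's scored candidates and keep the top `beam`."""
    nbest.extend(candidates)
    nbest.sort(reverse=True)
    del nbest[beam:]

update([(0.6, "from Kyoto to Tokyo"), (0.5, "from Kyoto to Kobe")])
update([(0.7, "from Kyoto to Tokyo, tomorrow")])
print(nbest)  # the decision can wait until enough evidence accumulates
```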
|
A
<term>
probabilistic spectral mapping
</term>
is estimated independently for each
<term>
training ( reference ) speaker
</term>
and the
<term>
target speaker
</term>
.
Each
<term>
reference model
</term>
is transformed to the
<term>
space
</term>
of the
<term>
target speaker
</term>
and combined by
<term>
averaging
</term>
.
|
#17170
A probabilistic spectral mapping is estimated independently for each training (reference) speaker and the target speaker. Each reference model is transformed to the space of the target speaker and combined by averaging. |
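A toy sketch of the combination step in record #17170, reducing each reference model to a mean spectral vector and each learned mapping to a matrix, purely for illustration:

```python
# Toy version of the combination step: transform each reference speaker's
# model into the target speaker's space, then combine by averaging.
# Models are reduced to mean vectors and mappings to matrices.
import numpy as np

rng = np.random.default_rng(0)
reference_models = [rng.normal(size=8) for _ in range(3)]  # hypothetical
mappings = [rng.normal(size=(8, 8)) * 0.1 + np.eye(8)      # one per reference
            for _ in range(3)]

transformed = [M @ m for M, m in zip(mappings, reference_models)]
target_model = np.mean(transformed, axis=0)  # combined by averaging
print(target_model.shape)
```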
|
Thus we have implemented a
<term>
blackboard-like architecture
</term>
in which individual
<term>
partial theories
</term>
can be encoded as separate modules that can interact to propose candidate
<term>
antecedents
</term>
and to evaluate
each
other 's proposals .
|
#14995
Thus we have implemented a blackboard-like architecture in which individual partial theories can be encoded as separate modules that can interact to propose candidate antecedents and to evaluate each other's proposals. |
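A minimal blackboard sketch in the spirit of record #14995: independent modules post candidate antecedents to a shared board, then score one another's proposals. The module logic is a stand-in, not the paper's partial theories:

```python
# Minimal blackboard: modules propose candidate antecedents to a shared
# board and vote on each other's proposals. Module internals are invented.
board: list[dict] = []

def syntax_module(pronoun: str):      # proposes on (stand-in) syntactic grounds
    board.append({"antecedent": "the report", "by": "syntax", "votes": 0})

def recency_module(pronoun: str):     # proposes on (stand-in) recency grounds
    board.append({"antecedent": "the meeting", "by": "recency", "votes": 0})

def evaluate():
    for proposal in board:            # modules score others' proposals
        if proposal["by"] != "recency" and proposal["antecedent"].endswith("report"):
            proposal["votes"] += 1

syntax_module("it"); recency_module("it"); evaluate()
print(max(board, key=lambda p: p["votes"]))
```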
|
The operation of the
<term>
system
</term>
will be explained in depth through browsing the
<term>
repository
</term>
of
<term>
data objects
</term>
created by the
<term>
system
</term>
during
each
<term>
question answering session
</term>
.
|
#3706
The operation of the system will be explained in depth through browsing the repository of data objects created by the system during each question answering session. |
|
The
<term>
oracle
</term>
knows the
<term>
reference word string
</term>
and selects the
<term>
word string
</term>
with the best
<term>
performance
</term>
( typically ,
<term>
word or semantic error rate
</term>
) from a list of
<term>
word strings
</term>
, where
each
<term>
word string
</term>
has been obtained by using a different
<term>
LM
</term>
.
|
#1103
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM. |
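A sketch of the oracle in record #1103: compute the word error rate of each LM-specific hypothesis against the reference and pick the minimum (Levenshtein distance over words, normalized by reference length):

```python
# Oracle selection: pick the hypothesis with the lowest word error rate
# against the known reference, where each hypothesis was produced with a
# different LM. WER is word-level Levenshtein distance / reference length.
def word_error_rate(ref: str, hyp: str) -> float:
    r, h = ref.split(), hyp.split()
    d = [[i + j if i * j == 0 else 0 for j in range(len(h) + 1)]
         for i in range(len(r) + 1)]
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (r[i - 1] != h[j - 1]))
    return d[-1][-1] / len(r)

reference = "show me flights to boston"
hypotheses = ["show me flights to austin",   # one hypothesis per LM
              "show me flights to boston"]
print(min(hypotheses, key=lambda h: word_error_rate(reference, h)))
```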
|
For
<term>
pragmatics processing
</term>
, we describe how the method of
<term>
abductive inference
</term>
is inherently robust , in that an interpretation is always possible , so that in the absence of the required
<term>
world knowledge
</term>
, performance degrades gracefully .
Each
of these techniques has been evaluated and the results of the evaluations are presented .
|
#17527
For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented. |
|
Although every
<term>
natural language system
</term>
needs a
<term>
computational lexicon
</term>
,
each
<term>
system
</term>
puts different amounts and types of information into its
<term>
lexicon
</term>
according to its individual needs .
|
#15925
Although every natural language system needs a computational lexicon, each system puts different amounts and types of information into its lexicon according to its individual needs. |
|
Because a
<term>
speaker
</term>
and
<term>
listener
</term>
cannot be assured to have the same
<term>
beliefs
</term>
,
<term>
contexts
</term>
,
<term>
perceptions
</term>
,
<term>
backgrounds
</term>
, or
<term>
goals
</term>
, at
each
point in a
<term>
conversation
</term>
, difficulties and mistakes arise when a
<term>
listener
</term>
interprets a
<term>
speaker 's utterance
</term>
.
|
#14435
Because a speaker and listener cannot be assured to have the same beliefs, contexts, perceptions, backgrounds, or goals, at each point in a conversation, difficulties and mistakes arise when a listener interprets a speaker's utterance. |
|
This paper discusses a
<term>
method of analyzing metaphors
</term>
based on the existence of a small number of
<term>
generalized metaphor mappings
</term>
.
Each
<term>
generalized metaphor
</term>
contains a
<term>
recognition network
</term>
, a
<term>
basic mapping
</term>
, additional
<term>
transfer mappings
</term>
, and an
<term>
implicit intention component
</term>
.
|
#12476
This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component. |
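A dataclass sketch mirroring the four components record #12476 names for a generalized metaphor; the field types and the example mapping are assumptions, since the record only lists the components:

```python
# Dataclass mirroring the four named components of a generalized metaphor.
# Field types and the example instance are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class GeneralizedMetaphor:
    recognition_network: list[str]        # patterns that trigger the metaphor
    basic_mapping: dict[str, str]         # core source -> target mapping
    transfer_mappings: dict[str, str] = field(default_factory=dict)
    implicit_intention: str = ""          # what the speaker conveys implicitly

anger_is_heat = GeneralizedMetaphor(
    recognition_network=["boil", "simmer", "explode"],
    basic_mapping={"heat": "anger"},
    transfer_mappings={"container": "person"},
    implicit_intention="speaker expresses intensity of emotion",
)
print(anger_is_heat.basic_mapping)
```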
|
The base
<term>
parser
</term>
produces a set of
<term>
candidate parses
</term>
for
each
input
<term>
sentence
</term>
, with associated
<term>
probabilities
</term>
that define an initial
<term>
ranking
</term>
of these
<term>
parses
</term>
.
|
#8673
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
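A sketch of the base parser's output as record #8673 describes it: candidate parses per input sentence, each with a probability that fixes an initial ranking (the usual setup for a reranker). Parses and probabilities are invented:

```python
# The base parser's output: a set of candidate parses per sentence, each
# with a probability defining an initial ranking. Values are illustrative.
candidate_parses = {
    "they saw her duck": [
        ("(S (NP they) (VP saw (NP her duck)))", 0.55),
        ("(S (NP they) (VP saw (NP her) (VP duck)))", 0.45),
    ],
}

for sentence, parses in candidate_parses.items():
    ranking = sorted(parses, key=lambda p: p[1], reverse=True)
    print(sentence, "->", ranking[0][0])  # top of the initial ranking
```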