|
Although every
<term>
natural language system
</term>
needs a
<term>
computational lexicon
</term>
,
each
<term>
system
</term>
puts different amounts and types of information into its
<term>
lexicon
</term>
according to its individual needs .
|
#15925
Although every natural language system needs a computational lexicon, each system puts different amounts and types of information into its lexicon according to its individual needs. |
|
Its selection of
<term>
fonts
</term>
and specification of
<term>
character
</term>
size are dynamically changeable , and the
<term>
typing location
</term>
can also be changed in lateral or longitudinal directions .
Each
<term>
character
</term>
has its own width , and the line length is computed as the sum of the width of each
<term>
character
</term>
.
|
#12292
Its selection of fonts and specification of character size are dynamically changeable, and the typing location can also be changed in lateral or longitudinal directions. Each character has its own width, and the line length is computed as the sum of the width of each character. |
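A minimal Python sketch of the line-length rule this entry describes; the width table and its values are hypothetical, not taken from the source system:

# Hypothetical per-character widths (illustrative values only).
CHAR_WIDTHS = {"a": 10, "b": 10, "i": 4, "W": 18, " ": 6}

def line_length(line: str, default_width: int = 10) -> int:
    """Line length as the sum of the width of each character."""
    return sum(CHAR_WIDTHS.get(ch, default_width) for ch in line)

print(line_length("a bi W"))  # 10 + 6 + 10 + 4 + 6 + 18 = 54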
|
For
<term>
pragmatics processing
</term>
, we describe how the method of
<term>
abductive inference
</term>
is inherently robust , in that an interpretation is always possible , so that in the absence of the required
<term>
world knowledge
</term>
, performance degrades gracefully .
Each
of these techniques has been evaluated and the results of the evaluations are presented .
|
#17527
For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented. |
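A sketch of why abductive interpretation degrades gracefully, under the usual weighted-abduction reading: goals that cannot be proven from world knowledge are assumed at a cost, so some interpretation always exists. The knowledge base and cost value here are illustrative assumptions:

# Toy world knowledge; anything not provable from it is assumed at a cost.
KNOWLEDGE = {"engine(e1)", "broken(e1)"}
ASSUMPTION_COST = 10

def interpret(goals):
    cost, assumed = 0, []
    for goal in goals:
        if goal not in KNOWLEDGE:      # missing world knowledge:
            assumed.append(goal)       # assume the goal instead of failing
            cost += ASSUMPTION_COST
    return {"goals": goals, "assumed": assumed, "cost": cost}

print(interpret(["engine(e1)", "needs-repair(e1)"]))  # interpretation still found, cost 10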
|
This paper discusses a
<term>
method of analyzing metaphors
</term>
based on the existence of a small number of
<term>
generalized metaphor mappings
</term>
.
Each
<term>
generalized metaphor
</term>
contains a
<term>
recognition network
</term>
, a
<term>
basic mapping
</term>
, additional
<term>
transfer mappings
</term>
, and an
<term>
implicit intention component
</term>
.
|
#12476
This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component. |
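The entry names the four components of a generalized metaphor; a sketch of that structure as a Python dataclass, where the field types are assumptions since the source names the parts but not their encoding:

from dataclasses import dataclass, field

@dataclass
class GeneralizedMetaphor:
    recognition_network: list        # patterns that signal the metaphor in input
    basic_mapping: dict              # core source-domain -> target-domain correspondence
    transfer_mappings: list = field(default_factory=list)  # additional correspondences
    implicit_intention: str = ""     # what choosing this metaphor implies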
|
A
<term>
probabilistic spectral mapping
</term>
is estimated independently for each
<term>
training ( reference ) speaker
</term>
and the
<term>
target speaker
</term>
.
Each
<term>
reference model
</term>
is transformed to the
<term>
space
</term>
of the
<term>
target speaker
</term>
and combined by
<term>
averaging
</term>
.
|
#17170
A probabilistic spectral mapping is estimated independently for each training (reference) speaker and the target speaker. Each reference model is transformed to the space of the target speaker and combined by averaging. |
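A sketch of the transform-then-average step described above, with per-speaker linear maps standing in for the probabilistic spectral mappings; the linear form and the dimensions are assumptions for illustration:

import numpy as np

def combine_reference_models(reference_models, mappings):
    """Transform each reference model into the target speaker's space, then average."""
    transformed = [m @ model for model, m in zip(reference_models, mappings)]
    return np.mean(transformed, axis=0)

rng = np.random.default_rng(0)
models = [rng.normal(size=(8,)) for _ in range(3)]   # one model per reference speaker
maps = [rng.normal(size=(8, 8)) for _ in range(3)]   # one estimated mapping per speaker
target_space_model = combine_reference_models(models, maps)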
|
Since multiple
<term>
candidates
</term>
for the
<term>
understanding
</term>
result can be obtained for a
<term>
user utterance
</term>
due to the
<term>
ambiguity
</term>
of
<term>
speech understanding
</term>
, it is not appropriate to decide on a single
<term>
understanding result
</term>
after
each
<term>
user utterance
</term>
.
|
#4188
Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance. |
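A sketch of deferring that decision: keep an n-best list of understanding candidates across utterances instead of committing after each one. The candidate strings and scores are illustrative:

def update_candidates(history, new_candidates, beam=5):
    """Extend each running interpretation with each new candidate; keep the best few."""
    if not history:
        expanded = [([c], s) for c, s in new_candidates]
    else:
        expanded = [(path + [c], score + s)
                    for path, score in history
                    for c, s in new_candidates]
    return sorted(expanded, key=lambda e: e[1], reverse=True)[:beam]

history = []
history = update_candidates(history, [("book flight", 0.7), ("cook fried", 0.3)])
history = update_candidates(history, [("to Boston", 0.8), ("two Boston", 0.2)])
print(history[0])  # best joint interpretation so far, still revisable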
|
As
each
new
<term>
edge
</term>
is added to the
<term>
chart
</term>
, the algorithm checks only the topmost of the
<term>
edges
</term>
adjacent to it , rather than all such
<term>
edges
</term>
as in conventional treatments .
|
#17601
As each new edge is added to the chart, the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments. |
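A minimal sketch of that adjacency check: edges ending at each vertex are kept in a stack, so a new edge inspects only the topmost adjacent edge rather than all of them. The edge representation (start, end, label) is a simplification:

from collections import defaultdict

edges_ending_at = defaultdict(list)  # vertex -> stack of edges ending there

def add_edge(start, end, label):
    adjacent = edges_ending_at[start]
    if adjacent:
        topmost = adjacent[-1]       # only the topmost adjacent edge is checked
        print(f"check {label} against {topmost[2]}")
    edges_ending_at[end].append((start, end, label))

add_edge(0, 1, "Det")
add_edge(1, 2, "N")    # checks only "Det"
add_edge(0, 2, "NP")
add_edge(2, 3, "V")    # checks only the topmost edge at 2 ("NP"), not "N" as well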
|
Because a
<term>
speaker
</term>
and
<term>
listener
</term>
can not be assured to have the same
<term>
beliefs
</term>
,
<term>
contexts
</term>
,
<term>
perceptions
</term>
,
<term>
backgrounds
</term>
, or
<term>
goals
</term>
, at
each
point in a
<term>
conversation
</term>
, difficulties and mistakes arise when a
<term>
listener
</term>
interprets a
<term>
speaker 's utterance
</term>
.
|
#14435
Because a speaker and listener cannot be assured to have the same beliefs, contexts, perceptions, backgrounds, or goals, at each point in a conversation, difficulties and mistakes arise when a listener interprets a speaker's utterance. |
|
The
<term>
attentional state
</term>
, being dynamic , records the objects , properties , and relations that are salient at
each
point of the
<term>
discourse
</term>
.
|
#14232
The attentional state, being dynamic, records the objects, properties, and relations that are salient at each point of the discourse. |
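A sketch of such a dynamic record: one snapshot of the salient objects, properties, and relations per discourse point. The snapshot-per-point representation is an assumption for illustration:

class AttentionalState:
    def __init__(self):
        self.points = []   # one snapshot of salient items per discourse point

    def advance(self, objects=(), properties=(), relations=()):
        """Record what is salient at the next point of the discourse."""
        self.points.append({"objects": set(objects),
                            "properties": set(properties),
                            "relations": set(relations)})

    def salient_now(self):
        return self.points[-1] if self.points else None

state = AttentionalState()
state.advance(objects={"screw", "screwdriver"}, relations={"attached-to"})
state.advance(objects={"wheel"}, properties={"loose"})
print(state.salient_now()["objects"])  # {'wheel'}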
|
The operation of the
<term>
system
</term>
will be explained in depth through browsing the
<term>
repository
</term>
of
<term>
data objects
</term>
created by the
<term>
system
</term>
during
each
<term>
question answering session
</term>
.
|
#3706
The operation of the system will be explained in depth through browsing the repository of data objects created by the system during each question answering session. |
|
Thus we have implemented a
<term>
blackboard-like architecture
</term>
in which individual
<term>
partial theories
</term>
can be encoded as separate modules that can interact to propose candidate
<term>
antecedents
</term>
and to evaluate
each
other 's proposals .
|
#14995
Thus we have implemented a blackboard-like architecture in which individual partial theories can be encoded as separate modules that can interact to propose candidate antecedents and to evaluate each other's proposals. |
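A sketch of that blackboard-style interaction: each partial theory is a module that proposes candidate antecedents and scores the other modules' proposals. The module names and scoring are illustrative, not the paper's actual theories:

class RecencyTheory:
    def propose(self, pronoun, entities):
        return [entities[-1]]                    # favor the most recent entity
    def evaluate(self, candidate):
        return 1.0 if candidate == "the report" else 0.5

class AgreementTheory:
    def propose(self, pronoun, entities):
        # filter on number agreement for singular "it"
        return [e for e in entities if e != "the managers"]
    def evaluate(self, candidate):
        return 0.0 if candidate == "the managers" else 1.0

def resolve(pronoun, entities, modules):
    proposals = {c for m in modules for c in m.propose(pronoun, entities)}
    return max(proposals, key=lambda c: sum(m.evaluate(c) for m in modules))

entities = ["the managers", "the report"]
print(resolve("it", entities, [RecencyTheory(), AgreementTheory()]))  # -> "the report"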
|
A
<term>
probabilistic spectral mapping
</term>
is estimated independently for
each
<term>
training ( reference ) speaker
</term>
and the
<term>
target speaker
</term>
.
|
#17159
A probabilistic spectral mapping is estimated independently for each training (reference) speaker and the target speaker. |
|
The base
<term>
parser
</term>
produces a set of
<term>
candidate parses
</term>
for
each
input
<term>
sentence
</term>
, with associated
<term>
probabilities
</term>
that define an initial
<term>
ranking
</term>
of these
<term>
parses
</term>
.
|
#8673
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
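A sketch of that output for one sentence: candidate parses paired with probabilities that define the initial ranking. The bracketings and probability values are illustrative, not from the paper:

candidates = [
    ("(S (NP they) (VP (V saw) (NP (Det the) (N duck))))", 0.55),
    ("(S (NP they) (VP (V saw) (NP her) (VP duck)))",      0.30),
    ("(S (NP they) (VP (V saw) (NP (Det her) (N duck))))", 0.15),
]

# Probabilities define the initial ranking of the candidate parses.
initial_ranking = sorted(candidates, key=lambda p: p[1], reverse=True)
best_parse, best_prob = initial_ranking[0]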
|
The system identifies a strength of
<term>
antecedence recovery
</term>
for
each
of the
<term>
lexical substitutions
</term>
, and matches them against the
<term>
strength of potential antecedence
</term>
of each element in the
<term>
text
</term>
to select the proper
<term>
substitutions
</term>
for these elements .
|
#13796
The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements. |
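A sketch of that matching step: each lexical substitution carries a strength of antecedence recovery, each text element a strength of potential antecedence, and the substitution whose strengths match best is selected for each element. All items and strengths are illustrative:

recovery_strength = {"one": 0.9, "do so": 0.6}
potential_antecedence = {"a red balloon": 0.85, "left early": 0.55}

def select_substitutions(recovery, potential):
    pairs = {}
    for element, p_strength in potential.items():
        # pick the substitution whose recovery strength is closest to this element's
        best = min(recovery, key=lambda s: abs(recovery[s] - p_strength))
        pairs[element] = best
    return pairs

print(select_substitutions(recovery_strength, potential_antecedence))
# {'a red balloon': 'one', 'left early': 'do so'}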
|
The
<term>
intentional structure
</term>
captures the
<term>
discourse-relevant purposes
</term>
, expressed in
each
of the
<term>
linguistic segments
</term>
as well as relationships among them .
|
#14182
The intentional structure captures the discourse-relevant purposes, expressed in each of the linguistic segments as well as relationships among them. |
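A sketch of such a structure: each linguistic segment carries a discourse-relevant purpose, and relationships (such as dominance) hold among those purposes. The concrete encoding and example purposes are assumptions for illustration:

from dataclasses import dataclass, field

@dataclass
class Segment:
    text: str
    purpose: str

@dataclass
class IntentionalStructure:
    segments: list
    relations: list = field(default_factory=list)  # e.g. ("dominates", i, j)

plan = IntentionalStructure(
    segments=[Segment("First, remove the flywheel.", "intend: remove flywheel"),
              Segment("Loosen the two bolts.", "intend: loosen bolts")],
    relations=[("dominates", 0, 1)],
)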
|
According to our assumption , most of the words with similar
<term>
context features
</term>
in
each
author 's
<term>
corpus
</term>
tend not to be
<term>
synonymous expressions
</term>
.
|
#6172
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions. |
|
The system identifies a strength of
<term>
antecedence recovery
</term>
for each of the
<term>
lexical substitutions
</term>
, and matches them against the
<term>
strength of potential antecedence
</term>
of
each
element in the
<term>
text
</term>
to select the proper
<term>
substitutions
</term>
for these elements .
|
#13812
The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements. |
|
For many reasons , it is highly desirable to accurately estimate the
<term>
confidence
</term>
the system has in the correctness of
each
<term>
extracted field
</term>
.
|
#6807
For many reasons, it is highly desirable to accurately estimate the confidence the system has in the correctness of each extracted field. |
|
A method is described for combining the two styles in a single system in order to take advantage of the strengths of
each
.
|
#12183
A method is described for combining the two styles in a single system in order to take advantage of the strengths of each. |
|
Each
<term>
character
</term>
has its own width , and the line length is computed as the sum of the width of
each
<term>
character
</term>
.
|
#12308
Each character has its own width, and the line length is computed as the sum of the width of each character. |