order to take advantage of the strengths of | each | . Applications of <term>path-based inference
#12183
A method is described of combining the two styles in a single system in order to take advantage of the strengths of each.

with similar <term>context features</term> in | each | author's <term>corpus</term> tend not to
#6172
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions.

<term>term aggregation system</term> using | each | author's text as a coherent <term>corpus</term>
#6130
This paper proposes a new methodology to improve the accuracy of a term aggregation system using each author's text as a coherent corpus.

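Entries #6172 and #6130 hinge on comparing words by their context features within a single author's corpus. A minimal sketch of one way to do this, assuming cosine similarity over raw co-occurrence counts; the toy corpus, window size, and helper names are illustrative, not the papers' actual method:

    from collections import Counter
    import math

    def context_vector(word, tokens, window=2):
        """Co-occurrence counts of words near each occurrence of `word`."""
        vec = Counter()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                vec.update(w for w in tokens[lo:hi] if w != word)
        return vec

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a)  # missing keys count as 0
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    tokens = "the model parses text and the parser parses sentences".split()
    print(cosine(context_vector("model", tokens), context_vector("parser", tokens)))
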
and a line length is counted by the sum of | each | <term>character</term>. By using commands
#12308
Each character has its own width and a line length is counted by the sum of each character.

in lateral or longitudinal directions. | Each | <term>character</term> has its own width
#12292
Its selection of fonts and specification of character size are dynamically changeable, and the typing location can also be changed in lateral or longitudinal directions. Each character has its own width and a line length is counted by the sum of each character.

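Entries #12308 and #12292 describe a line-length rule: every character has its own width, and a line's length is the sum of its characters' widths. A minimal sketch under that rule, with a hypothetical width table whose values are invented for illustration:

    # Hypothetical per-character display widths (illustrative values only).
    CHAR_WIDTHS = {"i": 1, "m": 3, " ": 1}

    def line_length(line: str, default_width: int = 2) -> int:
        """Length of a line = sum of the widths of its characters."""
        return sum(CHAR_WIDTHS.get(ch, default_width) for ch in line)

    print(line_length("im mi"))  # 1 + 3 + 1 + 3 + 1 = 9
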
strength of potential antecedence</term> of | each | element in the <term>text</term> to select
#13812
The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements.

</term> the system has in the correctness of | each | <term>extracted field</term>. The <term>information
#6807
For many reasons, it is highly desirable to accurately estimate the confidence the system has in the correctness of each extracted field.

<term>generalized metaphor mappings</term>. | Each | <term>generalized metaphor</term> contains
#12476
This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component.

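Entry #12476 enumerates the four components of a generalized metaphor. A minimal sketch of that record structure; the Python field types are assumptions of this sketch, not from the paper:

    from dataclasses import dataclass, field

    @dataclass
    class GeneralizedMetaphor:
        recognition_network: dict   # patterns that recognize the metaphor
        basic_mapping: dict         # core source-to-target correspondence
        transfer_mappings: list = field(default_factory=list)  # additional mappings
        implicit_intention: str = ""  # the implicit intention component

    anger_is_heat = GeneralizedMetaphor(
        recognition_network={"trigger": "boiling"},
        basic_mapping={"heat": "anger"},
    )
    print(anger_is_heat)
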
a set of <term>candidate parses</term> for | each | input <term>sentence</term>, with associated
#8673
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses.

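Entry #8673 describes a base parser emitting candidate parses whose probabilities define an initial ranking. A minimal sketch of that n-best structure; the parse strings and probability values are invented placeholders:

    # (parse, probability) pairs for one input sentence (illustrative values).
    candidates = [("(S (NP ...) (VP ...))", 0.42),
                  ("(S (NP ...) (VP ... PP))", 0.35),
                  ("(S (VP ...))", 0.23)]

    # Sorting by probability yields the initial ranking of the candidates.
    initial_ranking = sorted(candidates, key=lambda pair: pair[1], reverse=True)
    for rank, (parse, p) in enumerate(initial_ranking, start=1):
        print(rank, p, parse)
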
</term> of its <term>search space</term>. As | each | new <term>edge</term> is added to the <term>chart
#17601
As each new edge is added to the chart, the algorithm checks only the topmost of the edges adjacent to it, rather than all such edges as in conventional treatments.

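Entry #17601 describes a chart-parsing optimization: a new edge consults only the topmost adjacent edge rather than scanning all of them. A heavily simplified sketch of that bookkeeping, assuming edges are stacked per chart position; the data model here is an assumption, not the paper's algorithm:

    from collections import defaultdict

    # position -> stack of edges ending there (topmost last)
    edges_ending_at = defaultdict(list)

    def add_edge(start: int, end: int, label: str):
        new_edge = (start, end, label)
        adjacent = edges_ending_at[start]
        if adjacent:                  # check only the topmost adjacent edge
            topmost = adjacent[-1]
            print("combine?", topmost, new_edge)
        edges_ending_at[end].append(new_edge)

    add_edge(0, 1, "NP")
    add_edge(1, 2, "VP")  # consults only the topmost edge ending at position 1
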
strength of <term>antecedence recovery</term> for | each | of the <term>lexical substitutions</term>
#13796
The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements.

discourse-relevant purposes</term>, expressed in | each | of the <term>linguistic segments</term> as
#14182
The intentional structure captures the discourse-relevant purposes, expressed in each of the linguistic segments as well as relationships among them.

</term>, performance degrades gracefully. | Each | of these techniques has been evaluated
#17527
For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented.

not they are <term>translations</term> of | each | other. Using this <term>approach</term>
#9022
We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other.

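Entry #9022 describes a maximum entropy classifier over sentence pairs. A minimal sketch using logistic regression (a standard maximum-entropy formulation) with two toy features; the features and training pairs are illustrative assumptions, not the paper's feature set:

    from sklearn.linear_model import LogisticRegression

    def pair_features(src: str, tgt: str):
        """Two toy features: length ratio and shared-word count."""
        len_ratio = len(tgt.split()) / max(len(src.split()), 1)
        shared = len(set(src.lower().split()) & set(tgt.lower().split()))
        return [len_ratio, shared]

    X = [pair_features("the cat sleeps", "le chat dort"),
         pair_features("the cat sleeps", "il pleut beaucoup aujourd'hui")]
    y = [1, 0]  # 1 = translation pair, 0 = not a translation pair

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([pair_features("the dog runs", "le chien court")]))
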
candidate <term>antecedents</term> and to evaluate | each | other's proposals. This paper discusses
#14995
Thus we have implemented a blackboard-like architecture in which individual partial theories can be encoded as separate modules that can interact to propose candidate antecedents and to evaluate each other's proposals.

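Entry #14995 describes a blackboard-like architecture in which separate modules propose candidate antecedents and evaluate each other's proposals. A minimal sketch of that interaction pattern, with stub module logic standing in for the actual partial theories:

    class Blackboard:
        def __init__(self):
            self.proposals = []  # (module_name, candidate) pairs

    class Module:
        def __init__(self, name):
            self.name = name

        def propose(self, board):
            board.proposals.append((self.name, f"candidate-from-{self.name}"))

        def evaluate(self, board):
            # Score every proposal made by *other* modules (stub scoring).
            return {c: 1.0 for m, c in board.proposals if m != self.name}

    board = Blackboard()
    modules = [Module("syntax"), Module("semantics")]
    for m in modules:
        m.propose(board)
    for m in modules:
        print(m.name, m.evaluate(board))
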
string comparison methods</term>, and run | each | over both <term>character- and word-segmented
#1504
We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams).

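Entry #1504 runs string comparison over character- or word-segmented data combined with N-gram contiguity models. A minimal sketch of that segmentation-plus-N-gram step; the example string is arbitrary:

    def ngrams(segments, n):
        """Contiguous N-grams over a list of segments."""
        return [tuple(segments[i:i + n]) for i in range(len(segments) - n + 1)]

    text = "machine translation"
    char_segments = list(text.replace(" ", ""))  # character-segmented
    word_segments = text.split()                 # word-segmented

    print(ngrams(char_segments, 2)[:3])  # first few character bigrams
    print(ngrams(word_segments, 2))      # word bigrams
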
backgrounds</term>, or <term>goals</term>, at | each | point in a <term>conversation</term>, difficulties
#14435
Because a speaker and listener cannot be assured to have the same beliefs, contexts, perceptions, backgrounds, or goals, at each point in a conversation, difficulties and mistakes arise when a listener interprets a speaker's utterance.

properties, and relations that are salient at | each | point of the <term>discourse</term>. The
#14232
The attentional state, being dynamic, records the objects, properties, and relations that are salient at each point of the discourse.

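Entry #14232 describes the attentional state as a dynamic record of what is salient at each point of the discourse. A minimal sketch, assuming a simple recency-ordered list as the salience record; the actual mechanism is not specified in the entry:

    class AttentionalState:
        def __init__(self):
            self.salient = []  # most recently mentioned (most salient) last

        def mention(self, entity):
            if entity in self.salient:
                self.salient.remove(entity)
            self.salient.append(entity)  # re-mentioning boosts salience

        def most_salient(self):
            return self.salient[-1] if self.salient else None

    state = AttentionalState()
    for e in ["objects", "properties", "relations", "objects"]:
        state.mention(e)
    print(state.most_salient())  # "objects"
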
</term> created by the <term>system</term> during | each | <term>question answering session</term>.
#3706
The operation of the system will be explained in depth through browsing the repository of data objects created by the system during each question answering session.

</term> and the <term>target speaker</term>. | Each | <term>reference model</term> is transformed
#17170
A probabilistic spectral mapping is estimated independently for each training (reference) speaker and the target speaker. Each reference model is transformed to the space of the target speaker and combined by averaging.

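Entry #17170 transforms each reference speaker's model into the target speaker's space and combines the results by averaging. A minimal sketch of that combination step, with linear maps standing in for the probabilistic spectral mapping (an assumption of this sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    reference_models = [rng.normal(size=4) for _ in range(3)]  # one per speaker
    mappings = [rng.normal(size=(4, 4)) for _ in range(3)]     # per-speaker maps

    # Map each reference model into the target space, then average.
    transformed = [A @ m for A, m in zip(mappings, reference_models)]
    target_model = np.mean(transformed, axis=0)  # combine by averaging
    print(target_model)
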