|
by averaging the
<term>
statistics
</term>
|
of
|
<term>
independently trained models
</term>
|
#17049
In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. |
|
relationship ( e.g. social ranks and age )
|
of
|
the
<term>
referents
</term>
. This
<term>
referential
|
#8597
Honorifics are used extensively in Japanese, reflecting the social relationship (e.g. social ranks and age) of the referents. |
|
underspecified semantic representation ( USR )
</term>
|
of
|
a
<term>
scope ambiguity
</term>
, compute
|
#11137
We present an efficient algorithm for the redundancy elimination problem: Given an underspecified semantic representation (USR) of a scope ambiguity, compute an USR with fewer mutually equivalent readings. |
|
over
<term>
unstemmed text
</term>
, and 96 %
|
of
|
the performance of the proprietary
<term>
|
#4591
Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38% in average precision over unstemmed text, and 96% of the performance of the proprietary stemmer above. |
|
primarily based on
<term>
abduction
</term>
|
of
|
<term>
temporal relations
</term>
between
<term>
|
#17768
In this paper discourse segments are defined and a method for discourse segmentation primarily based on abduction of temporal relations between segments is proposed. |
|
always possible , so that in the absence
|
of
|
the required
<term>
world knowledge
</term>
|
#17517
For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. |
|
attentional state
</term>
is an abstraction
|
of
|
the
<term>
focus of attention
</term>
of the
|
#14200
The attentional state is an abstraction of the focus of attention of the participants as the discourse unfolds. |
|
structures
</term>
in
<term>
abstracts
</term>
|
of
|
<term>
research articles
</term>
. In our approach
|
#11709
This paper introduces a method for computational analysis of move structures in abstracts of research articles. |
|
paradigm
</term>
, and the accomplishments
|
of
|
<term>
MADCOW
</term>
in monitoring the
<term>
|
#18585
We summarize the motivation for this effort, the goals, the implementation of a multi-site data collection paradigm, and the accomplishments of MADCOW in monitoring the collection and distribution of 12,000 utterances of spontaneous speech from five sites for use in a multi-site common evaluation of speech, natural language and spoken language |
|
</term>
. This paper gives an overall account
|
of
|
a prototype
<term>
natural language question
|
#12842
This paper gives an overall account of a prototype natural language question answering system, called Chat-80. |
|
The
<term>
classification accuracy
</term>
|
of
|
the
<term>
method
</term>
is evaluated on three
|
#2293
The classification accuracy of the method is evaluated on three different spoken language system domains. |
|
methodology to improve the
<term>
accuracy
</term>
|
of
|
a
<term>
term aggregation system
</term>
using
|
#6124
This paper proposes a new methodology to improve the accuracy of a term aggregation system using each author's text as a coherent corpus. |
|
method improves the
<term>
accuracy
</term>
|
of
|
our
<term>
term aggregation system
</term>
|
#6189
Our proposed method improves the accuracy of our term aggregation system, showing that our approach is successful. |
|
the
<term>
WSD
</term><term>
accuracy
</term>
|
of
|
<term>
SMT models
</term>
has never been evaluated
|
#7900
Surprisingly, however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models. |
|
the
<term>
WSD
</term><term>
accuracy
</term>
|
of
|
current typical
<term>
SMT models
</term>
to
|
#7925
We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered. |
tech,12-5-H01-1041,bq |
<term>
knowledge-based automated acquisition
|
of
|
grammars
</term>
. Having been trained on
|
#512
(iii) Rapid system development and porting to new domains via knowledge-based automated acquisition of grammars. |
tech,19-1-P03-1068,bq |
basis for the large-scale
<term>
acquisition
|
of
|
word-semantic information
</term>
, e.g.
|
#4952
We describe the ongoing construction of a large, semantically annotated corpus resource as reliable basis for the large-scale acquisition of word-semantic information, e.g. the construction of domain-independent lexica. |
|
defeasibility
</term>
. Manual acquisition
|
of
|
<term>
semantic constraints
</term>
in broad
|
#16606
Manual acquisition of semantic constraints in broad domains is very expensive. |
|
investigate how well the
<term>
addressee
</term>
|
of
|
a
<term>
dialogue act
</term>
can be predicted
|
#10261
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
|
derivations
</term>
. The principal advantage
|
of
|
this approach is that knowledge concerning
|
#15066
The principal advantage of this approach is that knowledge concerning translation equivalence of expressions may be directly exploited, obviating the need for answers to semantic questions that we do not yet have.