| </term> , the <term> theory </term> specifies | how | different information in <term> memory </term> |
#11951
Unlike logic, the theory specifies how different information in memory affects the certainty of the conclusions drawn. |
| standard <term> text browser </term> . We describe | how | this information is used in a <term> prototype |
#315
We describe how this information is used in a prototype system designed to support information workers' access to a pharmaceutical news archive as part of their industry watch function. |
| inference types </term> . The paper also discusses | how | <term> memory </term> is structured in multiple |
#12022
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered. |
| context of <term> dialog systems </term> . We show | how | research in <term> generation </term> can be |
#997
We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques. |
| results are presented , that demonstrate | how | the proposed <term> method </term> allows to |
#7421
Experimental results are presented, that demonstrate how the proposed method allows to better generalize from the training data. |
| array-based data structure </term> . We show | how | <term> sampling </term> can be used to reduce |
#9180
We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality. |
| set representation </term> . We investigate | how | sets of individually high-precision <term> |
#20071
We investigate how sets of individually high-precision rules can result in a low precision when used together, and develop some theory about these probably-correct rules. |
| probabilities </term> is unstable . Finally , we show | how | this new <term> tagger </term> achieves state-of-the-art |
#5599
Finally, we show how this new tagger achieves state-of-the-art results in a supervised, non-training intensive framework. |
| Discourse processing </term> requires recognizing | how | the <term> utterances </term> of the <term> discourse |
#14330
Discourse processing requires recognizing how the utterances of the discourse aggregate into segments, recognizing the intentions expressed in the discourse and the relationships among intentions, and tracking the discourse through the operation of the mechanisms associated with attentional state. |
| </term> are limited . In this paper , we show | how | <term> training data </term> can be supplemented |
#3034
In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams. |
| classifiers </term> . First , we investigate | how | well the <term> addressee </term> of a <term> |
#10257
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
| different <term> inference types </term> , and | how | the information found in <term> memory </term> |
#12037
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered. |
| broad range of <term> texts </term> to show | how | the distribution of <term> demonstrative |
#15202
We examine a broad range of texts to show how the distribution of demonstrative forms and functions is genre dependent. |
| documentation . The question is , however , | how | an interesting information piece would |
#42
The question is, however, how an interesting information piece would be found in a large database. |
| Zernik87 ] . Second , we show in this paper | how | a <term> lexical hierarchy </term> is used |
#15875
Second, we show in this paper how a lexical hierarchy is used in predicting new linguistic concepts. |
| pragmatics processing </term> , we describe | how | the method of <term> abductive inference </term> |
#17494
For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. |
| <term> monolingual UCG </term> , we will show | how | the two can be integrated , and present |
#15140
After introducing this approach to MT system design, and the basics of monolingual UCG, we will show how the two can be integrated, and present an example from an implemented bi-directional English-Spanish fragment. |
| time </term> . Furthermore , we will show | how | some <term> evaluation measures </term> can |
#10390
Furthermore, we will show how some evaluation measures can be improved by the introduction of word-dependent substitution costs. |
| for this purpose . In this paper we show | how | two standard outputs from <term> information |
#279
In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser. |
| characterization of what a <term> user model </term> is and | how | it can be used . The types of information |
#16061
It begins with a characterization of what a user model is and how it can be used. |