| types</term>. The paper also discusses | how | <term>memory</term> is structured in multiple |
#12022
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered. |
| different <term>inference types</term>, and | how | the information found in <term>memory</term> |
#12037
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered. |
| Discourse processing</term> requires recognizing | how | the <term>utterances</term> of the <term>discourse |
#14330
Discourse processing requires recognizing how the utterances of the discourse aggregate into segments, recognizing the intentions expressed in the discourse and the relationships among intentions, and tracking the discourse through the operation of the mechanisms associated with attentional state. |
| this <term>complexity</term>, we describe | how | <term>disjunctive</term> values can be specified |
#14841
To deal with this complexity, we describe how disjunctive values can be specified in a way which delays expansion to disjunctive normal form. |
| <term>monolingual UCG</term>, we will show | how | the two can be integrated, and present |
#15140
After introducing this approach to MT system design, and the basics of monolingual UCG, we will show how the two can be integrated, and present an example from an implemented bi-directional English-Spanish fragment. |
| broad range of <term>texts</term> to show | how | the distribution of <term>demonstrative |
#15202
We examine a broad range of texts to show how the distribution of demonstrative forms and functions is genre dependent. |
| restrictive statements</term>. The paper shows | how | conventional algorithms for the analysis |
#15307
The paper shows how conventional algorithms for the analysis of context free languages can be adapted to the CCR formalism. |
| context. We identified two tasks: First, | how | <term>linguistic concepts</term> are acquired |
#15843
First, how linguistic concepts are acquired from training examples and organized in a hierarchy; this task was discussed in previous papers [Zernik87]. |
| Zernik87]. Second, we show in this paper | how | a <term>lexical hierarchy</term> is used |
#15875
Second, we show in this paper how a lexical hierarchy is used in predicting new linguistic concepts. |
| of what a <term>user model</term> is and | how | it can be used. The types of information |
#16061
It begins with a characterization of what a user model is and how it can be used. |
| pragmatics processing</term>, we describe | how | the method of <term>abductive inference</term> |
#17494
For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. |
| set representation</term>. We investigate | how | sets of individually high-precision <term> |
#20071
We investigate how sets of individually high-precision rules can result in a low precision when used together, and develop some theory about these probably-correct rules. |
| particular, we here elaborate on principles of | how | the <term>global behavior</term> of a <term> |
#21069
In particular, we here elaborate on principles of how the global behavior of a lexically distributed grammar and its corresponding parser can be specified in terms of event type networks and event networks, resp. |
| based on processing. Finally, it shows | how | processing accounts can be described formally |
#21196
Finally, it shows how processing accounts can be described formally and declaratively in terms of Dynamic Grammars. |