Zernik87 ] . Second , we show in this paper | how | a <term> lexical hierarchy </term> is used | #15875
Second, we show in this paper how a lexical hierarchy is used in predicting new linguistic concepts.

that <term> users </term> need by analyzing | how | a <term> user </term> interacts with a system | #11680
FERRET utilizes a novel approach to Q/A known as predictive questioning which attempts to identify the questions (and answers) that users need by analyzing how a user interacts with a system while gathering information related to a particular scenario.

documentation . The question is , however , | how | an interesting information piece would | #42
The question is, however, how an interesting information piece would be found in a large database.

restrictive statements </term> . The paper shows | how | conventional algorithms for the analysis | #15307
The paper shows how conventional algorithms for the analysis of context free languages can be adapted to the CCR formalism.

</term> , the <term> theory </term> specifies | how | different information in <term> memory </term> | #11951
Unlike logic, the theory specifies how different information in memory affects the certainty of the conclusions drawn.

this <term> complexity </term> , we describe | how | <term> disjunctive </term> values can be specified | #14841
To deal with this complexity, we describe how disjunctive values can be specified in a way which delays expansion to disjunctive normal form.

</term> in <term> English </term> . We demonstrate | how | errors in the <term> machine translations | #7222
We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy, focusing on noun phrases.

translation probabilities </term> , and show | how | it can be refined to take <term> contextual | #9739
We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account.

characterization of what a <term> user model </term> is and | how | it can be used . The types of information | #16061
It begins with a characterization of what a user model is and how it can be used.

</term> . The demonstration will focus on | how | <term> JAVELIN </term> processes <term> questions | #3667
The demonstration will focus on how JAVELIN processes questions and retrieves the most likely answer candidates from the given text corpus.

context . We identified two tasks : First , | how | <term> linguistic concepts </term> are acquired | #15843
First, how linguistic concepts are acquired from training examples and organized in a hierarchy; this task was discussed in previous papers [Zernik87].

inference types </term> . The paper also discusses | how | <term> memory </term> is structured in multiple | #12022
The paper also discusses how memory is structured in multiple ways to support the different inference types, and how the information found in memory determines which inference types are triggered.

translation systems </term> , and demonstrate | how | our application can be used by <term> developers | #7662
We incorporate this analysis into a diagnostic tool intended for developers of machine translation systems, and demonstrate how our application can be used by developers to explore patterns in machine translation output.

statistical machine translation </term> , we show | how | <term> paraphrases </term> in one <term> language | #9698
Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot.

based on processing . Finally , it shows | how | processing accounts can be described formally | #21196
Finally, it shows how processing accounts can be described formally and declaratively in terms of Dynamic Grammars.

context of <term> dialog systems </term> . We show | how | research in <term> generation </term> can be | #997
We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques.

array-based data structure </term> . We show | how | <term> sampling </term> can be used to reduce | #9180
We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality.

set representation </term> . We investigate | how | sets of individually high-precision <term> | #20071
We investigate how sets of individually high-precision rules can result in a low precision when used together, and develop some theory about these probably-correct rules.

time </term> . Furthermore , we will show | how | some <term> evaluation measures </term> can | #10390
Furthermore, we will show how some evaluation measures can be improved by the introduction of word-dependent substitution costs.

broad range of <term> texts </term> to show | how | the distribution of <term> demonstrative | #15202
We examine a broad range of texts to show how the distribution of demonstrative forms and functions is genre dependent.