presuppositions </term> that is presented | in | / Gazdar 1979 / . / Soames 1982 / provides
#15363
/Soames 1979/ provides some counterexamples to the theory of natural language presuppositions that is presented in /Gazdar 1979/.

language presuppositions </term> described | in | / Mercer 1987 , 1988 / gives a simple and
#15413
By reappraising these insightful counterexamples, the inferential theory for natural language presuppositions described in /Mercer 1987, 1988/ gives a simple and straightforward explanation for the presuppositional nature of these sentences.

Mercer 1987 / rejects the solution found | in | / Soames 1982 / leaving these counterexamples
#15389
/Mercer 1987/ rejects the solution found in /Soames 1982/ leaving these counterexamples unexplained.

<term> resource-frugal approach </term> results | in | 87.5 % <term> agreement </term> with a state
#4536
Our resource-frugal approach results in 87.5% agreement with a state of the art, proprietary Arabic stemmer built using rules, affix lists, and human annotated text, in addition to an unsupervised component.

probabilistic context-free grammars </term> working | in | a ' synchronous ' way . Two <term> hardness
#7469
These models can be viewed as pairs of probabilistic context-free grammars working in a 'synchronous' way.

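As a toy illustration of what "synchronous" means in #7469 (not the paper's models), the sketch below pairs two right-hand sides in every rule so that both grammars expand with the same rule choice at each step. The grammar, probabilities, and the generate() helper are invented, and the sketch assumes equal-length, monotonically aligned right-hand sides, which real synchronous grammars do not require.

# Toy sketch of a pair of PCFGs expanding in lockstep; everything here is illustrative.
import random

RULES = {   # lhs -> list of (source_rhs, target_rhs, probability)
    "S":  [(("NP", "VP"), ("NP", "VP"), 1.0)],
    "NP": [(("the", "dog"), ("le", "chien"), 0.6),
           (("a", "cat"), ("un", "chat"), 0.4)],
    "VP": [(("sleeps",), ("dort",), 1.0)],
}

def generate(symbol="S"):
    """Expand both grammars with one joint rule choice; returns (source, target) tokens."""
    src_rhs, tgt_rhs, _ = random.choices(RULES[symbol],
                                         weights=[r[2] for r in RULES[symbol]])[0]
    src, tgt = [], []
    for s, t in zip(src_rhs, tgt_rhs):
        if s in RULES:                 # shared nonterminal: expanded jointly
            s_out, t_out = generate(s)
        else:                          # paired terminal symbols
            s_out, t_out = [s], [t]
        src += s_out
        tgt += t_out
    return src, tgt

print(generate())   # e.g. (['the', 'dog', 'sleeps'], ['le', 'chien', 'dort'])
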
roughly corresponding to <term> arcs </term> | in | a <term> chart </term> . <term> Chart-like parsing
#20912
It is also a drastic generalization of chart parsing, partial instantiation of clauses in a program roughly corresponding to arcs in a chart.

determines that a <term> homophone </term> is misused | in | a <term> compound noun </term> if one or both
#20493
The method accurately determines that a homophone is misused in a compound noun if one or both of its neighbors is not a member of the semantic set defined by the homophone.

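The rule quoted in #20493 can be sketched directly; the snippet below flags a homophone when one or both neighbors fall outside its semantic set. The SEMANTIC_SETS table, the function name, and the example words are invented placeholders, not the paper's resources.

# Rough sketch of the neighbor-membership rule; data below is illustrative only.
SEMANTIC_SETS = {
    "sight": {"vision", "eye", "view"},
    "site": {"web", "construction", "building"},
}

def homophone_misused(homophone, neighbors):
    """True if one or both neighbors fall outside the homophone's semantic set."""
    semantic_set = SEMANTIC_SETS.get(homophone, set())
    return any(n not in semantic_set for n in neighbors)

print(homophone_misused("sight", ["web"]))   # True: "web sight" looks misused
print(homophone_misused("site", ["web"]))    # False: "web site" is fine
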
</term> , the <term> theory </term> is expressed | in | a <term> content-independent formalism </term>
#11940
Like logic, the theory is expressed in a content-independent formalism.

</term> , or <term> goals </term> , at each point | in | a <term> conversation </term> , difficulties
#14437
Because a speaker and listener cannot be assured to have the same beliefs, contexts, perceptions, backgrounds, or goals, at each point in a conversation, difficulties and mistakes arise when a listener interprets a speaker's utterance.

control the <term> top-down derivation </term> | in | a declarative way . This <term> generation
#16241
The system utilizes typed feature structures to control the top-down derivation in a declarative way.

<term> natural language understanding </term> | in | a <term> dialogue system </term> . We build
#3491
In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system.

the processing of <term> utterances </term> | in | a <term> discourse </term> . <term> Discourse
#14322
This theory provides a framework for describing the processing of utterances in a discourse.

that , it successfully classifies 73.2 % | in | a <term> German corpus </term> of 2.284 <term>
#2518
An evaluation of our system against the annotated data shows that it successfully classifies 73.2% in a German corpus of 2.284 SRHs as either coherent or incoherent (given a baseline of 54.55%).

In our approach , <term> sentences </term> | in | a given <term> abstract </term> are analyzed
#11718
In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions.

<term> comma checker </term> to be integrated | in | a <term> grammar checker </term> for <term> Basque
#11220
In this paper, we describe the research using machine learning techniques to build a comma checker to be integrated in a grammar checker for Basque.

, a <term> hub </term> is a <term> node </term> | in | a <term> graph </term> with <term> in-degree </term>
#3178
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one.

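The hub definition in #3178 is directly checkable; a minimal sketch of the in-degree/out-degree test over a toy directed edge list (the edges and variable names are invented for illustration):

# Find nodes with in-degree > 1 and out-degree > 1 in a small directed graph.
from collections import Counter

edges = [("a", "b"), ("c", "b"), ("b", "d"), ("b", "e")]   # (source, target) pairs

in_degree = Counter(v for _, v in edges)
out_degree = Counter(u for u, _ in edges)
hubs = {n for n in set(in_degree) | set(out_degree)
        if in_degree[n] > 1 and out_degree[n] > 1}
print(hubs)   # {'b'}
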
<term> training examples </term> and organized | in | a <term> hierarchy </term> ; this task was
#15853
First, how linguistic concepts are acquired from training examples and organized in a hierarchy; this task was discussed in previous papers [Zernik87].

wrongly substituted , deleted or inserted | in | a <term> Japanese bunsetsu </term> and an <term>
#20660
In order to judge three types of the errors, which are characters wrongly substituted, deleted or inserted in a Japanese bunsetsu and an English word, and to correct these errors, this paper proposes new methods using m-th order Markov chain model for Japanese kanji-kana characters and English alphabets, assuming that Markov probability of a correct chain of syllables or kanji-kana characters is greater than that of erroneous chains.

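The assumption in #20660, that a correct character chain receives a higher Markov probability than an erroneous one, can be illustrated with a much simplified model: a first-order (bigram) chain with add-one smoothing stands in for the paper's m-th order model, and the training words and function names are invented.

# Simplified illustration: a correct chain scores higher than a garbled one.
import math
from collections import defaultdict

def train_bigrams(words):
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        for a, b in zip(w, w[1:]):
            counts[a][b] += 1
    return counts

def chain_log_prob(word, counts, alphabet_size=26):
    """Add-one smoothed log probability of a character chain under the bigram model."""
    log_p = 0.0
    for a, b in zip(word, word[1:]):
        total = sum(counts[a].values())
        log_p += math.log((counts[a][b] + 1) / (total + alphabet_size))
    return log_p

counts = train_bigrams(["model", "method", "markov", "madam"])   # toy training data
print(chain_log_prob("model", counts) > chain_log_prob("mqdel", counts))   # True
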
</term> on <term> co-occurrence patterns </term> | in | a large <term> corpus </term> . To a large
#16628
This paper presents an automatic scheme for collecting statistics on co-occurrence patterns in a large corpus.

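A window-based counter is one common way to collect co-occurrence statistics of the kind mentioned in #16628; the sketch below is a generic illustration with an invented toy corpus, not the paper's scheme.

# Count how often word pairs co-occur within a fixed-size window.
from collections import Counter

def cooccurrence_counts(tokens, window=3):
    counts = Counter()
    for i, w in enumerate(tokens):
        for other in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((w, other)))] += 1
    return counts

corpus = "the cat sat on the mat while the cat slept".split()
print(cooccurrence_counts(corpus).most_common(3))
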
interesting information piece would be found | in | a <term> large database </term> . Traditional
#50
The question is, however, how an interesting information piece would be found in a large database.