|
( as )
</term>
, and
<term>
besides
</term>
.
|
These
|
<term>
words
</term>
appear frequently enough
|
#1846
This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides. These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them. |
|
models
</term>
of
<term>
WH-questions
</term>
.
|
These
|
<term>
models
</term>
, which are built from
|
#2143
We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions. These models, which are built from shallow linguistic features of questions, are employed to predict target variables which represent a user's informational goals. |
|
literature on
<term>
machine translation
</term>
.
|
These
|
<term>
models
</term>
can be viewed as pairs
|
#7457
This paper investigates some computational problems associated with probabilistic translation models that have recently been adopted in the literature on machine translation. These models can be viewed as pairs of probabilistic context-free grammars working in a 'synchronous' way. |
|
interconnected sets of
<term>
subpredicates
</term>
.
|
These
|
<term>
subpredicates
</term>
may be thought
|
#11873
In this format, developed by the LNR research group at the University of California at San Diego, verbs are represented as interconnected sets of subpredicates. These subpredicates may be thought of as the almost inevitable inferences that a listener makes when a verb is used in a sentence.
|
</term>
of the situation being described .
|
These
|
<term>
syntactic and semantic expectations
|
#13055
Our solution to these problems is to make use of expectations, based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and semantic expectations can be used to figure out unknown words from context, constrain the possible word-senses of words with multiple meanings (ambiguity), fill in missing words (ellipsis), and resolve referents (anaphora).
|
interprets a
<term>
speaker 's utterance
</term>
.
|
These
|
mistakes can lead to various kinds of misunderstandings
|
#14454
Because a speaker and listener cannot be assured to have the same beliefs, contexts, perceptions, backgrounds, or goals, at each point in a conversation, difficulties and mistakes arise when a listener interprets a speaker's utterance. These mistakes can lead to various kinds of misunderstandings between speaker and listener, including reference failures or failure to understand the speaker's intention. |
|
directed graphs
</term>
which satisfy them .
|
These
|
<term>
graphs
</term>
are , in fact ,
<term>
|
#14699
We have developed a model in which descriptions of feature structures can be regarded as logical formulas, and interpreted by sets of directed graphs which satisfy them. These graphs are, in fact, transition graphs for a special type of deterministic finite automaton. |
|
made to the
<term>
SUMMIT recognizer
</term>
.
|
These
|
include
<term>
context-dependent phonetic
|
#18709
This paper describes the status of the MIT ATIS system as of February 1992, focusing especially on the changes made to the SUMMIT recognizer. These include context-dependent phonetic modelling, the use of a bigram language model in conjunction with a probabilistic LR parser, and refinements made to the lexicon. |