other,10-7-N01-1003,bq |
<term>
SPR
</term>
learns to select a
<term>
|
sentence
|
plan
</term>
whose
<term>
rating
</term>
on average
|
#1443
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
other,21-2-C88-2132,bq |
</term>
and makes sense of as much of the
<term>
|
sentence
|
</term>
as it is actually possible , and
|
#15549
We shall introduce the concept of a chart that works outward from islands and makes sense of as much of the sentence as it is actually possible, and after that will lead to predictions of missing fragments. |
tech,18-3-N03-1026,bq |
<term>
summarization
</term>
quality of
<term>
|
sentence
|
condensation systems
</term>
. An
<term>
experimental
|
#2856
Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems. |
tech,38-1-C04-1128,bq |
relation to one made previously ,
<term>
|
sentence
|
extraction
</term>
may not capture the necessary
|
#6240
While sentence extraction as an approach to summarization has been shown to work in documents of certain genres, because of the conversational nature of email communication where utterances are made in relation to one made previously, sentence extraction may not capture the necessary segments of dialogue that would make a summary coherent. |
other,10-4-P84-1034,bq |
sentence structure
</term>
and
<term>
English
|
sentence
|
structure
</term>
, which is vital to
<term>
|
#13304
Some examples of the difference between Japanese sentence structure and English sentence structure, which is vital to machine translation are also discussed together with various interesting ambiguities. |
measure(ment),16-3-H92-1016,bq |
reduce the
<term>
speech recognition word and
|
sentence
|
error rates
</term>
by a factor of 2.5 and
|
#18757
Together with the use of a larger training set, these modifications combined to reduce the speech recognition word and sentence error rates by a factor of 2.5 and 1.6, respectively, on the October '91 test set. |
other,12-2-J05-1003,bq |
candidate parses
</term>
for each input
<term>
|
sentence
|
</term>
, with associated
<term>
probabilities
|
#8675
The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. |
other,36-3-H92-1026,bq |
the correct
<term>
parse
</term>
of a
<term>
|
sentence
|
</term>
. This stands in contrast to the
|
#18979
We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. |
other,20-2-J86-1002,bq |
evidence from the input shows the current
<term>
|
sentence
|
</term>
is not expected . A
<term>
dialogue
|
#14034
Error correction is done by strongly biasing parsing toward expected meanings unless clear evidence from the input shows the current sentence is not expected. |
tech,4-1-C90-3046,bq |
interpretation . This paper proposes that
<term>
|
sentence
|
analysis
</term>
should be treated as
<term>
|
#16561
This paper proposes that sentence analysis should be treated as defeasible reasoning, and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige, which is a formalization of defeasible reasoning, that includes arguments and defeat rules that capture defeasibility. |
other,12-2-H90-1016,bq |
computation needed to compute the
<term>
N-Best
|
sentence
|
hypotheses
</term>
. To avoid
<term>
grammar
|
#16900
We describe algorithms that greatly reduce the computation needed to compute the N-Best sentence hypotheses. |
other,47-3-H92-1060,bq |
the full
<term>
meaning
</term>
of the
<term>
|
sentence
|
</term>
. We have assessed the degree of
|
#19433
Robust parsing is applied only after a full analysis has failed, and it involves the two stages of 1) parsing a set of phrases and clauses, and 2) gluing them together to obtain a single semantic frame encoding the full meaning of the sentence. |
tech,13-1-P05-3025,bq |
the process
</term>
of
<term>
translating a
|
sentence
|
</term>
. The
<term>
method
</term>
allows a
<term>
|
#9846
This paper describes a method of interactively visualizing and directing the process of translating a sentence. |
other,18-4-N01-1003,bq |
potentially large list of possible
<term>
|
sentence
|
plans
</term>
for a given
<term>
text-plan
|
#1392
First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. |
tech,15-1-N01-1003,bq |
but distinct tasks , one of which is
<term>
|
sentence
|
scoping
</term>
, i.e. the choice of
<term>
|
#1308
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
other,17-1-H94-1014,bq |
long distance constraints
</term>
in a
<term>
|
sentence
|
</term>
or
<term>
paragraph
</term>
. The
<term>
|
#21228
This paper introduces a simple mixture language model that attempts to capture long distance constraints in a sentence or paragraph. |
other,36-1-C86-1081,bq |
the
<term>
global meaning
</term>
of a
<term>
|
sentence
|
</term>
, even if not in a precise way .
|
#13862
Determiners play an important role in conveying the meaning of an utterance, but they have often been disregarded, perhaps because it seemed more important to devise methods to grasp the global meaning of a sentence, even if not in a precise way. |
other,14-3-P05-1034,bq |
dependency parse
</term>
onto the target
<term>
|
sentence
|
</term>
, extract
<term>
dependency treelet
|
#9258
We align a parallel corpus, project the source dependency parse onto the target sentence, extract dependency treelet translation pairs, and train a tree-based ordering model. |
tech,19-1-C90-3046,bq |
presents such a treatment for
<term>
Japanese
|
sentence
|
analyses
</term>
using an
<term>
argumentation
|
#16577
This paper proposes that sentence analysis should be treated as defeasible reasoning, and presents such a treatment for Japanese sentence analyses using an argumentation system by Konolige, which is a formalization of defeasible reasoning, that includes arguments and defeat rules that capture defeasibility. |
other,17-3-A94-1017,bq |
Translation )
</term>
, that translates a
<term>
|
sentence
|
</term>
utilizing examples effectively and
|
#20245
We have already proposed a model, TDMT (Transfer-Driven Machine Translation), that translates a sentence utilizing examples effectively and performs accurate structural disambiguation and target word selection. |