other,40-1-N01-1003,bq |
<term>
Sentence planning
</term>
is a set of inter-related but distinct tasks , one of which is
<term>
sentence scoping
</term>
, i.e. the choice of
<term>
syntactic structure
</term>
for elementary
<term>
speech acts
</term>
and the decision of how to combine them into one or more
<term>
sentences
</term>
.
|
#1333
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
|
The
<term>
stemming model
</term>
is based on
<term>
statistical machine translation
</term>
and it uses an
<term>
English stemmer
</term>
and a small ( 10K
sentences
)
<term>
parallel corpus
</term>
as its sole
<term>
training resources
</term>
.
|
#4466
The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. |
other,9-2-C04-1106,bq |
But
<term>
computational linguists
</term>
seem to be quite dubious about
<term>
analogies between
sentences
</term>
: they would not be enough numerous to be of any use .
|
#5895
But computational linguists seem to be quite dubious about analogies between sentences: they would not be enough numerous to be of any use. |
other,16-3-C04-1106,bq |
We report experiments conducted on a
<term>
multilingual corpus
</term>
to estimate the number of
<term>
analogies
</term>
among the
<term>
sentences
</term>
that it contains .
|
#5925
We report experiments conducted on a multilingual corpus to estimate the number of analogies among the sentences that it contains. |
other,31-3-N04-1022,bq |
We describe a hierarchy of
<term>
loss functions
</term>
that incorporate different levels of
<term>
linguistic information
</term>
from
<term>
word strings
</term>
,
<term>
word-to-word alignments
</term>
from an
<term>
MT system
</term>
, and
<term>
syntactic structure
</term>
from
<term>
parse-trees
</term>
of
<term>
source and target language
sentences
</term>
.
|
#6610
We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. |
other,10-1-N04-1024,bq |
<term>
CriterionSM Online Essay Evaluation Service
</term>
includes a capability that labels
<term>
sentences
</term>
in student
<term>
writing
</term>
with
<term>
essay-based discourse elements
</term>
( e.g. ,
<term>
thesis statements
</term>
) .
|
#6655
CriterionSM Online Essay Evaluation Service includes a capability that labels sentences in student writing with essay-based discourse elements (e.g., thesis statements). |
other,5-3-N04-1024,bq |
This system identifies
<term>
features
</term>
of
<term>
sentences
</term>
based on
<term>
semantic similarity measures
</term>
and
<term>
discourse structure
</term>
.
|
#6695
This system identifies features of sentences based on semantic similarity measures and discourse structure. |
other,34-3-I05-2021,bq |
At the same time , the recent improvements in the
<term>
BLEU scores
</term>
of
<term>
statistical machine translation ( SMT )
</term>
suggests that
<term>
SMT models
</term>
are good at predicting the right
<term>
translation
</term>
of the
<term>
words
</term>
in
<term>
source language
sentences
</term>
.
|
#7892
At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggests that SMT models are good at predicting the right translation of the words in source language sentences. |
other,10-1-I05-5008,bq |
We propose a
<term>
method
</term>
that automatically generates
<term>
paraphrase
</term>
sets from
<term>
seed
sentences
</term>
to be used as
<term>
reference sets
</term>
in objective
<term>
machine translation evaluation measures
</term>
like
<term>
BLEU
</term>
and
<term>
NIST
</term>
.
|
#8452
We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective machine translation evaluation measures like BLEU and NIST. |
other,25-2-I05-5008,bq |
We measured the quality of the
<term>
paraphrases
</term>
produced in an experiment , i.e. , ( i ) their
<term>
grammaticality
</term>
: at least 99 % correct
<term>
sentences
</term>
; ( ii ) their
<term>
equivalence in meaning
</term>
: at least 96 % correct
<term>
paraphrases
</term>
either by
<term>
meaning equivalence
</term>
or
<term>
entailment
</term>
; and , ( iii ) the amount of internal
<term>
lexical and syntactical variation
</term>
in a set of
<term>
paraphrases
</term>
: slightly superior to that of
<term>
hand-produced sets
</term>
.
|
#8495
We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets. |
tech,6-1-J05-4003,bq |
We present a novel
<term>
method
</term>
for
<term>
discovering parallel
sentences
</term>
in
<term>
comparable , non-parallel corpora
</term>
.
|
#8991
We present a novel method for discovering parallel sentences in comparable, non-parallel corpora. |
other,12-2-J05-4003,bq |
We train a
<term>
maximum entropy classifier
</term>
that , given a pair of
<term>
sentences
</term>
, can reliably determine whether or not they are
<term>
translations
</term>
of each other .
|
#9010
We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other. |
other,13-2-E06-1031,bq |
In many cases though such movements still result in correct or almost correct
<term>
sentences
</term>
.
|
#10352
In many cases though such movements still result in correct or almost correct sentences. |
other,9-4-P06-2059,bq |
By using them , we can automatically extract such
<term>
sentences
</term>
that express opinion .
|
#11450
By using them, we can automatically extract such sentences that express opinion. |
other,13-5-P06-2059,bq |
In our experiment , the method could construct a
<term>
corpus
</term>
consisting of 126,610
<term>
sentences
</term>
.
|
#11468
In our experiment, the method could construct a corpus consisting of 126,610 sentences. |
other,4-2-P06-4011,bq |
In our approach ,
<term>
sentences
</term>
in a given
<term>
abstract
</term>
are analyzed and labeled with a specific
<term>
move
</term>
in light of various
<term>
rhetorical functions
</term>
.
|
#11717
In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. |
other,8-2-A88-1001,bq |
<term>
Multimedia answers
</term>
include
<term>
videodisc images
</term>
and heuristically-produced complete
<term>
sentences
</term>
in
<term>
text
</term>
or
<term>
text-to-speech form
</term>
.
|
#14890
Multimedia answers include videodisc images and heuristically-produced complete sentences in text or text-to-speech form. |
other,33-4-C88-2086,bq |
By reappraising these insightful counterexamples , the
<term>
inferential theory for natural language presuppositions
</term>
described in / Mercer 1987 , 1988 / gives a simple and straightforward explanation for the
<term>
presuppositional nature
</term>
of these
<term>
sentences
</term>
.
|
#15432
By reappraising these insightful counterexamples, the inferential theory for natural language presuppositions described in /Mercer 1987, 1988/ gives a simple and straightforward explanation for the presuppositional nature of these sentences. |
other,5-5-C88-2160,bq |
Some examples of
<term>
paraphrasing
</term>
ambiguous
<term>
sentences
</term>
are presented .
|
#15733
Some examples of paraphrasing ambiguous sentences are presented. |
other,18-1-C90-2032,bq |
This paper proposes
<term>
document oriented preference sets ( DoPS )
</term>
for the disambiguation of the
<term>
dependency structure
</term>
of
<term>
sentences
</term>
.
|
#16302
This paper proposes document oriented preference sets (DoPS) for the disambiguation of the dependency structure of sentences. |