other,40-1-N01-1003,bq |
how to combine them into one or more
<term>
|
sentences
|
</term>
. In this paper , we present
<term>
|
#1333
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
|
<term>
English stemmer
</term>
and a small ( 10K
|
sentences
|
)
<term>
parallel corpus
</term>
as its sole
|
#4466
The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. |
other,9-2-C04-1106,bq |
quite dubious about
<term>
analogies between
|
sentences
|
</term>
: they would not be enough numerous
|
#5895
But computational linguists seem to be quite dubious about analogies between sentences: they would not be enough numerous to be of any use. |
other,31-3-N04-1022,bq |
</term>
of
<term>
source and target language
|
sentences
|
</term>
. We report the performance of the
|
#6610
We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. |
other,10-1-N04-1024,bq |
</term>
includes a capability that labels
<term>
|
sentences
|
</term>
in student
<term>
writing
</term>
with
|
#6655
CriterionSM Online Essay Evaluation Service includes a capability that labels sentences in student writing with essay-based discourse elements (e.g., thesis statements). |
other,34-3-I05-2021,bq |
<term>
words
</term>
in
<term>
source language
|
sentences
|
</term>
. Surprisingly however , the
<term>
|
#7892
At the same time, the recent improvements in the BLEU scores of statistical machine translation (SMT) suggests that SMT models are good at predicting the right translation of the words in source language sentences. |
other,10-1-I05-5008,bq |
<term>
paraphrase
</term>
sets from
<term>
seed
|
sentences
|
</term>
to be used as
<term>
reference sets
|
#8452
We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective machine translation evaluation measures like BLEU and NIST. |
tech,6-1-J05-4003,bq |
method
</term>
for
<term>
discovering parallel
|
sentences
|
</term>
in
<term>
comparable , non-parallel
|
#8991
We present a novel method for discovering parallel sentences in comparable, non-parallel corpora. |
other,13-2-E06-1031,bq |
result in correct or almost correct
<term>
|
sentences
|
</term>
. In this paper , we will present
|
#10352
In many cases though such movements still result in correct or almost correct sentences. |
other,9-4-P06-2059,bq |
we can automatically extract such
<term>
|
sentences
|
</term>
that express opinion . In our experiment
|
#11450
By using them, we can automatically extract such sentences that express opinion. |
other,4-2-P06-4011,bq |
articles
</term>
. In our approach ,
<term>
|
sentences
|
</term>
in a given
<term>
abstract
</term>
are
|
#11717
In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. |
other,8-2-A88-1001,bq |
and heuristically-produced complete
<term>
|
sentences
|
</term>
in
<term>
text
</term>
or
<term>
text-to-speech
|
#14890
Multimedia answers include videodisc images and heuristically-produced complete sentences in text or text-to-speech form. |
other,33-4-C88-2086,bq |
presuppositional nature
</term>
of these
<term>
|
sentences
|
</term>
. We have developed a
<term>
computational
|
#15432
By reappraising these insightful counterexamples, the inferential theory for natural language presuppositions described in /Mercer 1987, 1988/ gives a simple and straightforward explanation for the presuppositional nature of these sentences. |
other,5-5-C88-2160,bq |
<term>
paraphrasing
</term>
ambiguous
<term>
|
sentences
|
</term>
are presented . Computer programs
|
#15733
Some examples of paraphrasing ambiguous sentences are presented. |
other,18-1-C90-2032,bq |
<term>
dependency structure
</term>
of
<term>
|
sentences
|
</term>
. The
<term>
DoPS system
</term>
extracts
|
#16302
This paper proposes document oriented preference sets (DoPS) for the disambiguation of the dependency structure of sentences. |
other,12-5-C90-3063,bq |
<term>
pronoun
</term>
<term>
it
</term>
in
<term>
|
sentences
|
</term>
that were randomly selected from
|
#16682
An experiment was performed to resolve references of the pronoun it in sentences that were randomly selected from the corpus. |
lr,3-3-H92-1026,bq |
way . We use a
<term>
corpus of bracketed
|
sentences
|
</term>
, called a
<term>
Treebank
</term>
,
|
#18949
We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. |
other,24-4-H92-1060,bq |
of robustly parsed vs. fully parsed
<term>
|
sentences
|
</term>
on the
<term>
October '91 dry-run test
|
#19459
We have assessed the degree of success of the robust parsing mechanism through a breakdown of the performance of robustly parsed vs. fully parsed sentences on the October '91 dry-run test set. |
other,7-1-A94-1007,bq |
propose a model for analyzing
<term>
English
|
sentences
|
</term>
including
<term>
coordinate conjunctions
|
#19683
The authors propose a model for analyzing English sentences including coordinate conjunctions such as and, or, but and the equivalent words. |