#686 Subjects were given a set of up to six extracts of translated newswire text.
#1297 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences.
#2128 We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions.
#2447 In this paper we present ONTOSCORE, a system for scoring sets of concepts on the basis of an ontology.
#3409 In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models.
#3761 It is based on: (1) an extended set of features; and (2) inductive decision tree learning.
#4002 We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features.
#4327 Specifically, we set up three dimensions of user models: skill level to the system, knowledge level on the target domain and the degree of hastiness.
#6837 Participants should be able, after attending this workshop, to set out building an SMT system themselves and achieving good baseline results in a short time.
#7121 In this paper, we propose and implement a model for bootstrapping parallel wordnets based on one monolingual wordnet and a set of cross-lingual lexical semantic relations.
#7134 In particular, we propose a set of inference rules to predict Chinese wordnet structure based on English wordnet and English-Chinese translation relations.
other,7-1-I05-5008,ak #7569 We propose a method that automatically generates paraphrase sets from seed sentences to be used as reference sets in objective machine translation evaluation measures like BLEU and NIST.
#7653 We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets.
other,1-3-I05-5008,ak #7667 The paraphrase sets produced by this method thus seem adequate as reference sets to be used for MT evaluation.
#8033 The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses.
#8092 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
#10250 We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.