translation output </term> . Subjects were given a set of up to six extracts of <term> translated
</term> . <term> Sentence planning </term> is a set of inter-related but distinct tasks , one
hand-crafted system </term> . We describe a set of <term> supervised machine learning </term>
translation </term> that uses a much simpler set of <term> model parameters </term> than similar
</term> . It is based on : ( 1 ) an extended set of <term> features </term> ; and ( 2 ) <term>
non-NP-antecedents </term> . We present a set of <term> features </term> designed for <term>
is more comprehensive . Specifically , we set up three dimensions of <term> user models
</term> in <term> dialogue </term> . We extract a set of <term> heuristic principles </term> from
</term> of such <term> clauses </term> to create a set of <term> domain independent features </term>
<term> distributional hypothesis </term> in a set of coherent <term> corpora </term> . This paper
</term> , a <term> topic signature </term> is a set of <term> words </term> that tend to co-occur
able , after attending this workshop , to set about building an <term> SMT system </term> themselves
lexical and syntactical variation </term> in a set of <term> paraphrases </term> : slightly superior
</term> . The base <term> parser </term> produces a set of <term> candidate parses </term> for each
</term> to be represented as an arbitrary set of <term> features </term> , without concerns
extraction and ranking methods </term> using a set of <term> manual word alignments </term> ,
for themselves . In this paper we study a set of problems that are of considerable importance
indifference . In this paper , we outline a set of <term> parsing flexibilities </term> that
standard <term> grammar </term> and <term> test set </term> from the <term> DARPA Resource Management
building the <term> editor </term> was to define a set of <term> coherence rules </term> that could