Concordance hits for the keyword "set"; each entry gives the record number followed by the full sentence.

#686 Subjects were given a set of up to six extracts of translated newswire text.
#1297 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences.
#2127 We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions.
#3408 In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models.
#3760 It is based on: (1) an extended set of features; and (2) inductive decision tree learning.
#4001 We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features.
#4325 Specifically, we set up three dimensions of user models: skill level to the system, knowledge level on the target domain and the degree of hastiness.
#5166 We extract a set of heuristic principles from a corpus-based sample and formulate them as probabilistic Horn clauses.
#5193 We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system.
#6109 We present a text mining method for finding synonymous expressions based on the distributional hypothesis in a set of coherent corpora.
#6919 Given a particular concept, or word sense, a topic signature is a set of words that tend to co-occur with it.
#8087 Participants should be able, after attending this workshop, to set out building an SMT system themselves and achieving good baseline results in a short time.
#8533 We measured the quality of the paraphrases produced in an experiment, i.e., (i) their grammaticality: at least 99% correct sentences; (ii) their equivalence in meaning: at least 96% correct paraphrases either by meaning equivalence or entailment; and, (iii) the amount of internal lexical and syntactical variation in a set of paraphrases: slightly superior to that of hand-produced sets.
#8668 The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses.
#8727 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
#9761 We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.
#9928 In this paper we study a set of problems that are of considerable importance to Statistical Machine Translation (SMT) but which have not been addressed satisfactorily by the SMT research community.
#12745 In this paper, we outline a set of parsing flexibilities that such a system should provide.
lr,22-4-H90-1060,bq
#17093 With only 12 training speakers for SI recognition, we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus.
#17305 Our most important task in building the editor was to define a set of coherence rules that could be computationally applied to ensure the validity of lexical entries.
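The entries above are keyword-in-context (KWIC) style hits for "set". A minimal sketch of how such lines can be generated from (record number, sentence) pairs — the function name, context width, and sample records here are illustrative assumptions, not the tool that produced this listing:

```python
import re

def kwic(sentences, keyword, width=30):
    """Return KWIC lines of the form: #id | left context | keyword | right context.

    `sentences` is an iterable of (record_id, sentence) pairs; `width` caps
    how many characters of context are kept on each side (an assumption).
    """
    pattern = re.compile(r"\b" + re.escape(keyword) + r"\b")
    hits = []
    for sid, sentence in sentences:
        for m in pattern.finditer(sentence):
            left = sentence[:m.start()].strip()[-width:]
            right = sentence[m.end():].strip()[:width]
            hits.append(f"#{sid} | {left} | {m.group(0)} | {right}")
    return hits

# Sample records taken from the listing above.
corpus = [
    (686, "Subjects were given a set of up to six extracts of "
          "translated newswire text."),
    (6919, "Given a particular concept, or word sense, a topic signature "
           "is a set of words that tend to co-occur with it."),
]
for line in kwic(corpus, "set"):
    print(line)
```

Each sentence can contribute several hits if the keyword occurs more than once; the word-boundary pattern keeps "set" from matching inside words such as "settings".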