#258
In this presentation, we describe the features of and requirements for a genuinely useful software infrastructure for this purpose.

#314
We describe how this information is used in a prototype system designed to support information workers' access to a pharmaceutical news archive as part of their industry watch function.

#1196
We describe a three-tiered approach for evaluation of spoken dialogue systems.

#1224
We describe our use of this approach in numerous fielded user studies conducted with the U.S. military.

#2125
We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions.

#3154
We describe a simple unsupervised technique for learning morphology by identifying hubs in an automaton.

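Hit #3154 mentions hub-finding in an automaton. A minimal sketch of the general idea only, not the cited authors' implementation: merge word prefixes that share the same set of completions into states of a minimal acyclic automaton, then flag "hub" states with several incoming and several outgoing edges, which tend to sit at morpheme boundaries. The word list and function name below are invented for illustration.

```python
from collections import defaultdict

def find_hubs(words):
    # Merge prefixes that share the same set of completions (suffixes)
    # into one state of a minimal acyclic automaton.
    suffixes = defaultdict(set)
    for w in words:
        for i in range(len(w) + 1):
            suffixes[w[:i]].add(w[i:])
    state = {p: frozenset(s) for p, s in suffixes.items()}
    # Count distinct labelled edges into and out of each state.
    in_edges, out_edges = defaultdict(set), defaultdict(set)
    for p in suffixes:
        if p:
            in_edges[state[p]].add((p[-1], state[p[:-1]]))
            out_edges[state[p[:-1]]].add((p[-1], state[p]))
    # A "hub" has several incoming and several outgoing edges --
    # a candidate morpheme boundary (e.g. a stem/suffix seam).
    hubs = [s for s in set(state.values())
            if len(in_edges[s]) >= 2 and len(out_edges[s]) >= 2]
    return hubs, state

words = ["walk", "walks", "walked", "jump", "jumps", "jumped"]
hubs, state = find_hubs(words)
assert state["walk"] == state["jump"]  # the two stems converge on one state
assert state["walk"] in hubs           # the seam before "", "s", "ed"
```

Here both stems reach the state whose completions are {"", "s", "ed"}, so that state has two incoming edges (from "walk" and "jump") and two outgoing ones, making it the single hub.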
#3394
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models.

#3903
We describe a new approach which involves clustering subcategorization frame (SCF) distributions using the Information Bottleneck and nearest neighbour methods.

#4933
We describe the ongoing construction of a large, semantically annotated corpus resource as a reliable basis for the large-scale acquisition of word-semantic information, e.g. the construction of domain-independent lexica.

#6576
We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences.

#6671
We describe a new system that enhances Criterion's capability by evaluating multiple aspects of coherence in essays.

#7629
We describe a method for identifying systematic patterns in translation data using part-of-speech tag sequences.

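Hit #7629 turns on mining part-of-speech tag sequences. A hedged sketch of the general idea only, not necessarily the cited method: tag the data, count tag n-grams across sentences, and keep frequent sequences as candidate systematic patterns. The toy tagged sentences and function name are invented for illustration.

```python
from collections import Counter

def frequent_pos_patterns(tagged_sents, n=3, min_count=2):
    # Count POS-tag n-grams across sentences; frequent sequences are
    # candidate systematic patterns worth inspecting in the data.
    counts = Counter()
    for sent in tagged_sents:
        tags = [tag for _, tag in sent]
        for i in range(len(tags) - n + 1):
            counts[tuple(tags[i:i + n])] += 1
    return [(seq, c) for seq, c in counts.most_common() if c >= min_count]

sents = [
    [("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
    [("a", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
]
patterns = frequent_pos_patterns(sents)
assert patterns == [(("DET", "NOUN", "VERB"), 2)]  # shared by both sentences
```

In practice the interesting step is comparing such pattern counts between datasets (e.g. source vs. target side), but the counting core is this simple.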
#9125
In this paper we describe a novel data structure for phrase-based statistical machine translation which allows for the retrieval of arbitrarily long phrases while simultaneously using less memory than is required by current decoder implementations.

#9202
We describe a novel approach to statistical machine translation that combines syntactic information in the source language with recent advances in phrasal translation.

#9274
We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser.

#9477
Second, we describe the graphical model for the machine translation task, which can also be viewed as a stochastic tree-to-tree transducer.

#10704
We describe a clustering algorithm which is sufficiently general to be applied to these diverse problems, discuss its application, and evaluate its performance.

#10823
We describe the tagging strategies that can be found in the literature and evaluate their relative performances.

#11205
In this paper, we describe the research using machine learning techniques to build a comma checker to be integrated in a grammar checker for Basque.

#12761
We go on to describe FlexP, a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.