|
In this paper , we present <term> SPoT </term> , a <term> sentence planner </term> , and a new methodology for automatically training <term> SPoT </term> on the basis of <term> feedback </term> provided by <term> human judges </term> .
|
#1340
In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
|
We present an <term> unsupervised learning algorithm </term> for <term> identification of paraphrases </term> from a <term> corpus of multiple English translations </term> of the same <term> source text </term> .
|
#1778
We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. |
|
These <term> words </term> appear frequently enough in <term> dialog </term> to warrant serious <term> attention </term> , yet present <term> natural language search engines </term> perform poorly on <term> queries </term> containing them .
|
#1859
These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them. |
|
We present our <term> multi-level answer resolution algorithm </term> that combines results from the <term> answering agents </term> at the <term> question , passage , and/or answer levels </term> .
|
#2373
We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels. |
|
In this paper we present <term> ONTOSCORE </term> , a system for scoring sets of <term> concepts </term> on the basis of an <term> ontology </term> .
|
#2439
In this paper we present ONTOSCORE, a system for scoring sets of concepts on the basis of an ontology. |
|
We present an implementation of the <term> model </term> based on <term> finite-state models </term> , demonstrate the <term> model </term> 's ability to significantly reduce <term> character and word error rate </term> , and provide evaluation results involving <term> automatic extraction </term> of <term> translation lexicons </term> from <term> printed text </term> .
|
#2745
We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text. |
|
We present an application of <term> ambiguity packing and stochastic disambiguation techniques </term> for <term> Lexical-Functional Grammars ( LFG ) </term> to the domain of <term> sentence condensation </term> .
|
#2785
We present an application of ambiguity packing and stochastic disambiguation techniques for Lexical-Functional Grammars (LFG) to the domain of sentence condensation. |
|
We present a new <term> part-of-speech tagger </term> that demonstrates the following ideas : ( i ) explicit use of both preceding and following <term> tag contexts </term> via a <term> dependency network representation </term> , ( ii ) broad use of <term> lexical features </term> , including <term> jointly conditioning on multiple consecutive words </term> , ( iii ) effective use of <term> priors </term> in <term> conditional loglinear models </term> , and ( iv ) fine-grained modeling of <term> unknown word features </term> .
|
#2910
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
|
We present a <term> syntax-based constraint </term> for <term> word alignment </term> , known as the <term> cohesion constraint </term> .
|
#3229
We present a syntax-based constraint for word alignment, known as the cohesion constraint. |
|
In this paper we present a novel , customizable <term> IE paradigm </term> that takes advantage of <term> predicate-argument structures </term> .
|
#3715
In this paper we present a novel, customizable IE paradigm that takes advantage of predicate-argument structures. |
|
We present a set of <term> features </term> designed for <term> pronoun resolution </term> in <term> spoken dialogue </term> and determine the most promising <term> features </term> .
|
#3999
We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. |
|
Based on these results , we present an <term> ECA </term> that uses <term> verbal and nonverbal grounding acts </term> to update <term> dialogue state </term> .
|
#5093
Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state. |
|
We present a new <term> HMM tagger </term> that exploits <term> context </term> on both sides of a <term> word </term> to be tagged , and evaluate it in both the <term> unsupervised and supervised case </term> .
|
#5497
We present a new HMM tagger that exploits context on both sides of a word to be tagged, and evaluate it in both the unsupervised and supervised case. |
|
Along the way , we present the first comprehensive comparison of <term> unsupervised methods for part-of-speech tagging </term> , noting that published results to date have not been comparable across <term> corpora </term> or <term> lexicons </term> .
|
#5531
Along the way, we present the first comprehensive comparison of unsupervised methods for part-of-speech tagging, noting that published results to date have not been comparable across corpora or lexicons. |
|
Observing that the quality of the <term> lexicon </term> greatly impacts the <term> accuracy </term> that can be achieved by the <term> algorithms </term> , we present a method of <term> HMM training </term> that improves <term> accuracy </term> when training of <term> lexical probabilities </term> is unstable .
|
#5578
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
|
In this paper , we present a <term> corpus-based supervised word sense disambiguation ( WSD ) system </term> for <term> Dutch </term> which combines <term> statistical classification ( maximum entropy ) </term> with <term> linguistic information </term> .
|
#5984
In this paper, we present a corpus-based supervised word sense disambiguation (WSD) system for Dutch which combines statistical classification (maximum entropy) with linguistic information. |
|
We present a <term> text mining method </term> for finding <term> synonymous expressions </term> based on the <term> distributional hypothesis </term> in a set of coherent <term> corpora </term> .
|
#6093
We present a text mining method for finding synonymous expressions based on the distributional hypothesis in a set of coherent corpora. |
|
In this paper , we present our work on the detection of <term> question-answer pairs </term> in an <term> email conversation </term> for the task of <term> email summarization </term> .
|
#6262
In this paper, we present our work on the detection of question-answer pairs in an email conversation for the task of email summarization. |
|
We present a framework for the fast computation of <term> lexical affinity models </term> .
|
#6309
We present a framework for the fast computation of lexical affinity models. |
|
We present <term> Minimum Bayes-Risk ( MBR ) decoding </term> for <term> statistical machine translation </term> .
|
#6544
We present Minimum Bayes-Risk (MBR) decoding for statistical machine translation. |