#2060  a <term>trainable sentence planner</term> | for | a <term>spoken dialogue system</term> by
In this paper we experimentally evaluate a trainable sentence planner for a spoken dialogue system by eliciting subjective human judgments.

#2210  variables. This paper describes a method | for | <term>utterance classification</term> that
This paper describes a method for utterance classification that does not require manual transcription of training data.

#2270  to train a <term>phone n-gram model</term> | for | a particular <term>domain</term>; the
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier.

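As an illustrative aside (not the code or classifier from #2270), a minimal sketch of the second stage: classifying recognized phone strings with phone n-gram features. The toy phone strings, labels, and the naive-Bayes scorer are assumptions made for illustration; the unsupervised phone-recognition front end is taken as given.

```python
from collections import Counter, defaultdict
import math

def phone_ngrams(phones, n=3):
    """Extract overlapping phone n-grams from a phone sequence."""
    seq = ["<s>"] + phones + ["</s>"]
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

class PhoneStringClassifier:
    """Naive Bayes over phone n-gram counts with add-one smoothing."""

    def train(self, examples):  # examples: list of (phone_list, label)
        self.counts = defaultdict(Counter)
        self.labels = Counter()
        for phones, label in examples:
            self.labels[label] += 1
            self.counts[label].update(phone_ngrams(phones))
        self.vocab = {g for c in self.counts.values() for g in c}

    def classify(self, phones):
        def score(label):
            total = sum(self.counts[label].values()) + len(self.vocab)
            s = math.log(self.labels[label] / sum(self.labels.values()))
            for g in phone_ngrams(phones):
                s += math.log((self.counts[label][g] + 1) / total)
            return s
        return max(self.labels, key=score)

# Toy usage with made-up phone strings for two call-routing classes.
train = [("k ao l ih ng".split(), "billing"),
         ("b ih l".split(), "billing"),
         ("r ih p eh r".split(), "repair"),
         ("f ih k s".split(), "repair")]
clf = PhoneStringClassifier()
clf.train(train)
print(clf.classify("b ih l ih ng".split()))  # -> "billing"
```
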
#2346  different <term>answering agents</term> searching | for | <term>answers</term> in multiple corpora
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora.

#2444  present <term>ONTOSCORE</term>, a system | for | scoring sets of <term>concepts</term> on
In this paper we present ONTOSCORE, a system for scoring sets of concepts on the basis of an ontology.

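One plausible, purely hypothetical reading of "scoring sets of concepts on the basis of an ontology" is graph-distance-based coherence. The sketch below uses an invented toy concept graph and average inverse path length; it illustrates that reading only and is not the actual ONTOSCORE metric.

```python
from collections import deque
from itertools import combinations

# Toy ontology as an undirected adjacency map (hypothetical concepts).
ONTOLOGY = {
    "entity": {"object", "event"},
    "object": {"entity", "vehicle", "person"},
    "vehicle": {"object", "car", "train"},
    "car": {"vehicle"}, "train": {"vehicle"},
    "person": {"object"}, "event": {"entity"},
}

def path_length(a, b):
    """Shortest path between two concepts (BFS), or None if unconnected."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in ONTOLOGY.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

def score_concept_set(concepts):
    """Average inverse path length over all concept pairs: higher = more coherent."""
    pairs = list(combinations(concepts, 2))
    if not pairs:
        return 0.0
    sims = []
    for a, b in pairs:
        d = path_length(a, b)
        if d is not None:
            sims.append(1.0 / (1 + d))
    return sum(sims) / len(pairs)

print(score_concept_set({"car", "train", "vehicle"}))   # coherent set scores higher
print(score_concept_set({"car", "event", "person"}))    # less coherent set scores lower
```
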
#2594  Our empirical results, which hold | for | all examined <term>language pairs</term>
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations.

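The "heuristic learning of phrase translations from word-based alignments" in #2594 is commonly realized as consistent phrase-pair extraction. Below is a minimal sketch of that heuristic on a toy sentence pair and an invented alignment; the lexical-weighting step, which would additionally average word-translation probabilities over the links inside each extracted pair, is omitted.

```python
def extract_phrases(src, tgt, alignment, max_len=4):
    """Extract phrase pairs consistent with a word alignment.

    alignment: set of (src_index, tgt_index) links. A span pair is consistent
    if no alignment link connects a word inside the span to a word outside it.
    """
    phrases = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(len(src), i1 + max_len)):
            # Target positions linked to the source span.
            tps = [j for (i, j) in alignment if i1 <= i <= i2]
            if not tps:
                continue
            j1, j2 = min(tps), max(tps)
            if j2 - j1 >= max_len:
                continue
            # Reject if a link reaches into the target span from outside the source span.
            if any(j1 <= j <= j2 and not (i1 <= i <= i2) for (i, j) in alignment):
                continue
            phrases.append((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return phrases

# Toy example with a hypothetical word alignment.
src = "das Haus ist klein".split()
tgt = "the house is small".split()
align = {(0, 0), (1, 1), (2, 2), (3, 3)}
for pair in extract_phrases(src, tgt, align):
    print(pair)
```
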
#2716  The <term>model</term> is designed | for | use in <term>error correction</term>, with
The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks.

#2740  systems in order to make it more useful | for | <term>NLP tasks</term>. We present an implementation
The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks.

#2795  stochastic disambiguation techniques | for | Lexical-Functional Grammars (LFG
We present an application of ambiguity packing and stochastic disambiguation techniques for Lexical-Functional Grammars (LFG) to the domain of sentence condensation.

#2814  <term>linguistic parser/generator</term> | for | <term>LFG</term>, a transfer component
Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection.

#2820  a <term>transfer component</term> | for | <term>parse reduction</term> operating on
Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection.

#2833  and a <term>maximum-entropy model</term> | for | <term>stochastic output selection</term>
Our system incorporates a linguistic parser/generator for LFG, a transfer component for parse reduction operating on packed parse forests, and a maximum-entropy model for stochastic output selection.

#2849  standard <term>parser evaluation methods</term> | for | automatically evaluating the summarization
Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems.

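A hedged sketch of what "standard parser evaluation methods" could look like when repurposed for condensation quality: compare dependency triples of the system output against a gold condensation. This is a generic illustration of dependency F1, not the specific evaluation used in #2849; the example triples are invented.

```python
def dependency_f1(gold_deps, sys_deps):
    """F1 over dependency triples (head, dependent, label), as in labeled
    dependency-based parser evaluation."""
    gold_deps, sys_deps = set(gold_deps), set(sys_deps)
    tp = len(gold_deps & sys_deps)
    precision = tp / len(sys_deps) if sys_deps else 0.0
    recall = tp / len(gold_deps) if gold_deps else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy gold vs. system dependencies for a condensed sentence (hypothetical labels).
gold = [("sold", "company", "SUBJ"), ("sold", "shares", "OBJ")]
system = [("sold", "company", "SUBJ"), ("sold", "stock", "OBJ")]
print(dependency_f1(gold, system))  # 0.5
```
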
#3019  Sources of <term>training data</term> suitable | for | <term>language modeling</term> of conversational
Sources of training data suitable for language modeling of conversational speech are limited.

#3159  simple <term>unsupervised technique</term> | for | learning <term>morphology</term> by identifying
We describe a simple unsupervised technique for learning morphology by identifying hubs in an automaton.

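The "hubs" in #3159 are states of a minimized word automaton with both high in-degree and high out-degree, marking stem/suffix boundaries. The sketch below only approximates that idea with prefix/suffix fan-out counts over a toy word list, so it is a rough illustration rather than the paper's automaton construction; the threshold and word list are assumptions.

```python
from collections import Counter

def hub_splits(words, min_fanout=2):
    """Approximate hub detection: split word w = p + s at positions where the
    prefix p continues into many words and the suffix s is shared by many words,
    mimicking automaton states with high out- and in-degree."""
    prefixes, suffixes = Counter(), Counter()
    for w in words:
        for i in range(1, len(w)):
            prefixes[w[:i]] += 1
            suffixes[w[i:]] += 1
    analyses = {}
    for w in words:
        analyses[w] = [(w[:i], w[i:]) for i in range(1, len(w))
                       if prefixes[w[:i]] >= min_fanout and suffixes[w[i:]] >= min_fanout]
    return analyses

words = ["walk", "walks", "walked", "walking", "talk", "talks", "talked", "talking"]
for w, splits in hub_splits(words).items():
    print(w, splits)  # proposes splits such as ("walk", "ed"); some noise is expected
```
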
#3233  present a <term>syntax-based constraint</term> | for | <term>word alignment</term>, known as the
We present a syntax-based constraint for word alignment, known as the cohesion constraint.

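A hedged sketch of a cohesion-style check, assuming the constraint is read as: target spans projected from disjoint source dependency subtrees must not overlap. The head-index trees and alignments below are toy inventions, and the details of the actual constraint in #3233 may differ.

```python
def subtree_members(heads):
    """heads[i] = index of i's head, or -1 for the root. Return, for each node,
    the set of nodes in its subtree (including itself)."""
    members = [set([i]) for i in range(len(heads))]
    for i in range(len(heads)):
        j = heads[i]
        while j != -1:
            members[j].add(i)
            j = heads[j]
    return members

def span(nodes, alignment):
    """Target span covered by the given source nodes, or None if unaligned."""
    pts = [j for (i, j) in alignment if i in nodes]
    return (min(pts), max(pts)) if pts else None

def is_cohesive(heads, alignment):
    """Projections of disjoint source subtrees must not overlap on the target side."""
    members = subtree_members(heads)
    for a in range(len(heads)):
        for b in range(len(heads)):
            if a == b or members[a] & members[b]:
                continue  # skip identical or nested subtrees
            sa, sb = span(members[a], alignment), span(members[b], alignment)
            if sa and sb and not (sa[1] < sb[0] or sb[1] < sa[0]):
                return False
    return True

heads = [1, 2, -1, 2]  # toy dependency tree for "das Haus ist klein", root = "ist"
print(is_cohesive(heads, {(0, 0), (1, 1), (2, 2), (3, 3)}))  # True
print(is_cohesive(heads, {(0, 1), (1, 3), (2, 0), (3, 2)}))  # False (crossing projection)
```
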
#3321  that correspond to the <term>concept</term> | for | the targeted <term>NE</term>, e.g. he/she/man
This approach only requires a few common noun or pronoun seeds that correspond to the concept for the targeted NE, e.g. he/she/man/woman for PERSON NE.

#3330  <term>NE</term>, e.g. he/she/man/woman | for | <term>PERSON NE</term>. The bootstrapping
This approach only requires a few common noun or pronoun seeds that correspond to the concept for the targeted NE, e.g. he/she/man/woman for PERSON NE.

#3384  approaches <term>supervised NE</term> performance | for | some <term>NE types</term>. In this paper
The resulting NE system approaches supervised NE performance for some NE types.

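A generic seed-bootstrapping skeleton in the spirit of #3321-#3384, growing a lexicon from a few pronoun/common-noun seeds via shared contexts. The corpus, window size, and scoring here are toy assumptions for illustration, not the published procedure.

```python
from collections import Counter

def contexts(tokens, targets, window=1):
    """Collect (left, right) context windows around occurrences of target words."""
    ctxs = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() in targets:
            ctxs[(tuple(tokens[i - window:i]), tuple(tokens[i + 1:i + 1 + window]))] += 1
    return ctxs

def bootstrap(corpus, seeds, iterations=3, per_round=2):
    """Score candidate words by how often they occur in contexts already seen
    around the seed set, then grow the set and repeat."""
    tokens = corpus.split()
    lexicon = set(s.lower() for s in seeds)
    for _ in range(iterations):
        seed_ctxs = contexts(tokens, lexicon)
        scores = Counter()
        for i, tok in enumerate(tokens):
            if tok.lower() in lexicon:
                continue
            ctx = (tuple(tokens[i - 1:i]), tuple(tokens[i + 1:i + 2]))
            if ctx in seed_ctxs:
                scores[tok.lower()] += seed_ctxs[ctx]
        lexicon |= {w for w, _ in scores.most_common(per_round)}
    return lexicon

corpus = "the man said hello and the woman said goodbye while the doctor said nothing"
print(bootstrap(corpus, seeds={"he", "she", "man", "woman"}))  # picks up "doctor"
```
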
#3399  a <term>phrase-based unigram model</term> | for | <term>statistical machine translation</term>
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models.
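A minimal sketch of how a phrase-based unigram model can score a sentence: a monotone dynamic program segments the source into phrases and multiplies per-phrase unigram probabilities. The phrase table and probabilities below are invented for illustration; the model in #3399 differs in particulars such as reordering and parameter estimation.

```python
import math

# Hypothetical unigram phrase table: probabilities of (source phrase, target phrase) pairs.
PHRASE_TABLE = {
    ("das", "the"): 0.2,
    ("haus", "house"): 0.1,
    ("das haus", "the house"): 0.05,
    ("ist", "is"): 0.2,
    ("klein", "small"): 0.1,
}

def best_segmentation(src_words, max_phrase_len=3):
    """Monotone DP: best[i] = max log-probability of covering src_words[:i]
    as a product of unigram phrase probabilities."""
    n = len(src_words)
    best = [float("-inf")] * (n + 1)
    best[0] = 0.0
    back = [None] * (n + 1)
    for i in range(1, n + 1):
        for k in range(1, max_phrase_len + 1):
            if i - k < 0:
                break
            src_phrase = " ".join(src_words[i - k:i])
            for (s, t), p in PHRASE_TABLE.items():
                if s == src_phrase and best[i - k] + math.log(p) > best[i]:
                    best[i] = best[i - k] + math.log(p)
                    back[i] = (i - k, t)
    # Recover the target phrases of the best segmentation.
    out, i = [], n
    while i > 0 and back[i]:
        j, t = back[i]
        out.append(t)
        i = j
    return best[n], list(reversed(out))

print(best_segmentation("das haus ist klein".split()))
# -> (log-probability, ['the house', 'is', 'small'])
```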