|
The results of a practical
</term><term>
evaluation
</term>
of
this
<term>
method
</term>
on a
<term>
wide coverage English grammar
</term>
are given .
|
#1742
The results of a practical evaluation of this method on a wide coverage English grammar are given. |
|
Our approach yields
<term>
phrasal and single word lexical paraphrases
</term>
as well as
<term>
syntactic paraphrases
</term>
.
This
paper presents a
<term>
formal analysis
</term>
for a large class of
<term>
words
</term>
called
<term>
alternative markers
</term>
, which includes
<term>
other ( than )
</term>
,
<term>
such ( as )
</term>
, and
<term>
besides
</term>
.
|
#1815
Our approach yields phrasal and single word lexical paraphrases as well as syntactic paraphrases. This paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides. |
|
The value of
this
approach is that as the
<term>
operational semantics
</term>
of
<term>
natural language applications
</term>
improve , even larger improvements are possible .
|
#1906
The value of this approach is that as the operational semantics of natural language applications improve, even larger improvements are possible. |
|
In
this
paper we experimentally evaluate a
<term>
trainable sentence planner
</term>
for a
<term>
spoken dialogue system
</term>
by eliciting
<term>
subjective human judgments
</term>
.
|
#2051
In this paper we experimentally evaluate a trainable sentence planner for a spoken dialogue system by eliciting subjective human judgments. |
|
We report on different aspects of the
<term>
predictive performance
</term>
of our
<term>
models
</term>
, including the influence of various
<term>
training and testing factors
</term>
on
<term>
predictive performance
</term>
, and examine the relationships among the target variables .
This
paper describes a method for
<term>
utterance classification
</term>
that does not require
<term>
manual transcription
</term>
of
<term>
training data
</term>
.
|
#2205
We report on different aspects of the predictive performance of our models, including the influence of various training and testing factors on predictive performance, and examine the relationships among the target variables. This paper describes a method for utterance classification that does not require manual transcription of training data. |
|
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with
this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2280
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
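The two-stage method in #2280 (unsupervised training of a phone n-gram model, then classification of the recognized phone strings) can be sketched roughly as follows. The bigram order, add-one smoothing, and the inventory size of 50 are illustrative assumptions, not details from the abstract:

```python
import math
from collections import Counter, defaultdict

def train_phone_bigrams(phone_sequences):
    """Count phone bigrams from recognizer output (no manual transcripts needed)."""
    counts = defaultdict(Counter)
    for seq in phone_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def classify_phone_string(seq, class_models, inventory=50):
    """Assign the class whose bigram model gives the phone string
    the highest add-one-smoothed log-likelihood."""
    def loglik(counts):
        total = 0.0
        for a, b in zip(seq, seq[1:]):
            seen = sum(counts[a].values())
            total += math.log((counts[a][b] + 1) / (seen + inventory))
        return total
    return max(class_models, key=lambda c: loglik(class_models[c]))
```

A real system would use recognizer lattices and longer n-grams, but the pipeline shape (domain LM first, string classifier second) is the same.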
|
In
this
paper we present
<term>
ONTOSCORE
</term>
, a system for scoring sets of
<term>
concepts
</term>
on the basis of an
<term>
ontology
</term>
.
|
#2436
In this paper we present ONTOSCORE, a system for scoring sets of concepts on the basis of an ontology. |
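A minimal sketch of scoring a set of concepts against an ontology, in the spirit of #2436: coherence is taken here as the average inverse graph distance between concept pairs. The adjacency-dict graph and the 1/(1+d) scoring are assumptions for illustration; the abstract does not specify ONTOSCORE's actual measure:

```python
from collections import deque
from itertools import combinations

def distance(graph, a, b):
    """BFS shortest-path length in an undirected concept graph."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # unreachable

def onto_score(concepts, graph):
    """Average inverse distance over concept pairs: coherent sets score high."""
    pairs = list(combinations(concepts, 2))
    if not pairs:
        return 1.0
    vals = []
    for a, b in pairs:
        d = distance(graph, a, b)
        vals.append(1.0 / (1 + d) if d is not None else 0.0)
    return sum(vals) / len(vals)
```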
|
In
this
paper , we introduce a
<term>
generative probabilistic optical character recognition ( OCR ) model
</term>
that describes an end-to-end process in the
<term>
noisy channel framework
</term>
, progressing from generation of
<term>
true text
</term>
through its transformation into the
<term>
noisy output
</term>
of an
<term>
OCR system
</term>
.
|
#2668
In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. |
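The noisy-channel formulation in #2668 amounts to choosing the true text t that maximizes P(noisy | t) · P(t). A toy decoder over an explicit candidate list (real OCR decoders search a far larger hypothesis space) might look like:

```python
def noisy_channel_decode(noisy, candidates, channel, prior):
    """Pick the candidate true text t maximizing P(noisy | t) * P(t).

    channel maps (noisy, true) pairs to transformation probabilities;
    prior maps true texts to language-model probabilities."""
    return max(candidates,
               key=lambda t: channel.get((noisy, t), 0.0) * prior.get(t, 0.0))
```

A strong language-model prior can override a channel model that prefers copying the corrupted string verbatim, which is exactly the point of the decomposition.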
|
In
this
paper , we show not only how
<term>
training data
</term>
can be supplemented with
<term>
text
</term>
from the
<term>
web
</term>
filtered to match the
<term>
style
</term>
and/or
<term>
topic
</term>
of the target
<term>
recognition task
</term>
, but also that it is possible to get bigger performance gains from the
<term>
data
</term>
by using
<term>
class-dependent interpolation
</term>
of
<term>
N-grams
</term>
.
|
#3029
In this paper, we show not only how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams. |
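Class-dependent interpolation as in #3029 lets the mixing weight between the in-domain LM and the web LM vary with the word's class. A minimal sketch, where the class function and the specific weights are hypothetical:

```python
def interp_prob(word, context, p_in, p_web, word_class, lambdas):
    """Linear interpolation with a class-specific weight:
    lam * P_in + (1 - lam) * P_web, lam chosen by the word's class."""
    lam = lambdas[word_class(word)]
    return lam * p_in(word, context) + (1.0 - lam) * p_web(word, context)
```

The intuition: in-domain counts are more trustworthy for some word classes (e.g. content words of the task) than for others, so a single global interpolation weight leaves gains on the table.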
|
We evaluate the utility of
this
<term>
constraint
</term>
in two different
<term>
algorithms
</term>
.
|
#3264
We evaluate the utility of this constraint in two different algorithms. |
|
A novel
<term>
bootstrapping approach
</term>
to
<term>
Named Entity ( NE ) tagging
</term>
using
<term>
concept-based seeds
</term>
and
<term>
successive learners
</term>
is presented .
This
approach only requires a few
<term>
common noun
</term>
or
<term>
pronoun
</term><term>
seeds
</term>
that correspond to the
<term>
concept
</term>
for the targeted
<term>
NE
</term>
, e.g. he/she/man/woman for
<term>
PERSON NE
</term>
.
|
#3305
A novel bootstrapping approach to Named Entity (NE) tagging using concept-based seeds and successive learners is presented. This approach only requires a few common noun or pronoun seeds that correspond to the concept for the targeted NE, e.g. he/she/man/woman for PERSON NE. |
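The seed-based bootstrapping in #3305 can be caricatured as a loop that alternates between collecting contexts of known mentions and admitting new words seen in those contexts. Using only the left neighbor as "context" is a deliberate oversimplification of the successive learners in the abstract:

```python
def bootstrap_mentions(sentences, seeds, rounds=2):
    """Expand a seed set (e.g. he/she/man/woman for PERSON) by learning
    contexts from current mentions and tagging words that share them."""
    known = set(seeds)
    for _ in range(rounds):
        # contexts = left neighbors of currently known mentions
        contexts = {sent[i - 1]
                    for sent in sentences
                    for i, w in enumerate(sent) if w in known and i > 0}
        # admit any word appearing in a learned context
        for sent in sentences:
            for i, w in enumerate(sent):
                if i > 0 and sent[i - 1] in contexts:
                    known.add(w)
    return known
```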
|
In
this
paper , we describe a
<term>
phrase-based unigram model
</term>
for
<term>
statistical machine translation
</term>
that uses a much simpler set of
<term>
model parameters
</term>
than similar
<term>
phrase-based models
</term>
.
|
#3390
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. |
|
In
this
paper , we propose a novel
<term>
Cooperative Model
</term>
for
<term>
natural language understanding
</term>
in a
<term>
dialogue system
</term>
.
|
#3478
In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. |
|
We build
this
based on both a
<term>
Finite State Model ( FSM )
</term>
and a
<term>
Statistical Learning Model ( SLM )
</term>
.
|
#3498
We build this based on both a Finite State Model (FSM) and a Statistical Learning Model (SLM). |
|
In
this
paper we present a novel , customizable
<term>
IE paradigm
</term>
that takes advantage of
<term>
predicate-argument structures
</term>
.
|
#3712
In this paper we present a novel, customizable IE paradigm that takes advantage of predicate-argument structures. |
|
The experimental results prove our claim that accurate
<term>
predicate-argument structures
</term>
enable high quality
<term>
IE
</term>
results .
This
paper proposes the
<term>
Hierarchical Directed Acyclic Graph ( HDAG ) Kernel
</term>
for
<term>
structured natural language data
</term>
.
|
#3789
The experimental results prove our claim that accurate predicate-argument structures enable high quality IE results. This paper proposes the Hierarchical Directed Acyclic Graph (HDAG) Kernel for structured natural language data. |
|
In
this
paper we formulate
<term>
story link detection
</term>
and
<term>
new event detection
</term>
as
<term>
information retrieval tasks
</term>
and hypothesize on the impact of
<term>
precision
</term>
and
<term>
recall
</term>
on both
<term>
systems
</term>
.
|
#4064
In this paper we formulate story link detection and new event detection as information retrieval tasks and hypothesize on the impact of precision and recall on both systems. |
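The precision/recall tradeoff hypothesized in #4064 rests on the standard set-based definitions, sketched here with predicted and gold link sets as plain Python sets:

```python
def precision_recall(predicted, gold):
    """Standard IR metrics: precision = |P & G| / |P|, recall = |P & G| / |G|."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```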
|
Experimental results validate our hypothesis .
This
paper concerns the
<term>
discourse understanding process
</term>
in
<term>
spoken dialogue systems
</term>
.
|
#4126
Experimental results validate our hypothesis. This paper concerns the discourse understanding process in spoken dialogue systems. |
|
This paper concerns the
<term>
discourse understanding process
</term>
in
<term>
spoken dialogue systems
</term>
.
This
process enables the
<term>
system
</term>
to understand
<term>
user utterances
</term>
based on the
<term>
context
</term>
of a
<term>
dialogue
</term>
.
|
#4138
This paper concerns the discourse understanding process in spoken dialogue systems. This process enables the system to understand user utterances based on the context of a dialogue. |
|
By holding multiple
<term>
candidates
</term>
for
<term>
understanding
</term>
results and resolving the
<term>
ambiguity
</term>
as the
<term>
dialogue
</term>
progresses , the
<term>
discourse understanding accuracy
</term>
can be improved .
This
paper proposes a method for resolving this
<term>
ambiguity
</term>
based on
<term>
statistical information
</term>
obtained from
<term>
dialogue corpora
</term>
.
|
#4216
By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved. This paper proposes a method for resolving this ambiguity based on statistical information obtained from dialogue corpora. |