|
We believe
that
these
<term>
evaluation techniques
</term>
will provide information about both the
<term>
human language learning process
</term>
, the
<term>
translation process
</term>
and the
<term>
development
</term>
of
<term>
machine translation systems
</term>
.
|
#583
We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems. |
|
A
<term>
language learning experiment
</term>
showed
that
<term>
assessors
</term>
can differentiate
<term>
native from non-native language essays
</term>
in less than 100
<term>
words
</term>
.
|
#633
A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words. |
|
We integrate a
<term>
spoken language understanding system
</term>
with
<term>
intelligent mobile agents
</term>
that
mediate between
<term>
users
</term>
and
<term>
information sources
</term>
.
|
#806
We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources. |
|
We find
that
simple
<term>
interpolation methods
</term>
, like
<term>
log-linear and linear interpolation
</term>
, improve the
<term>
performance
</term>
but fall short of the
<term>
performance
</term>
of an
<term>
oracle
</term>
.
|
#1046
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
|
We provide experimental results
that
clearly show the need for a
<term>
dynamic language model combination
</term>
to improve the
<term>
performance
</term>
further .
|
#1135
We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. |
|
We suggest a method
that
mimics the behavior of the
<term>
oracle
</term>
using a
<term>
neural network
</term>
or a
<term>
decision tree
</term>
.
|
#1156
We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. |
|
We show
that
the trained
<term>
SPR
</term>
learns to select a
<term>
sentence plan
</term>
whose
<term>
rating
</term>
on average is only 5 % worse than the
<term>
top human-ranked sentence plan
</term>
.
|
#1435
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
|
Over two distinct
<term>
datasets
</term>
, we find
that
<term>
indexing
</term>
according to simple
<term>
character bigrams
</term>
produces a
<term>
retrieval accuracy
</term>
superior to any of the tested
<term>
word N-gram models
</term>
.
|
#1538
Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. |
|
We also provide evidence
that
our findings are scalable .
|
#1591
We also provide evidence that our findings are scalable. |
|
I show
that
the
<term>
performance
</term>
of a
<term>
search engine
</term>
can be improved dramatically by incorporating an approximation of the
<term>
formal analysis
</term>
that is compatible with the
<term>
search engine
</term>
's
<term>
operational semantics
</term>
.
|
#1873
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics. |
|
I show that the
<term>
performance
</term>
of a
<term>
search engine
</term>
can be improved dramatically by incorporating an approximation of the
<term>
formal analysis
</term>
that
is compatible with the
<term>
search engine
</term>
's
<term>
operational semantics
</term>
.
|
#1892
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics. |
|
The value of this approach is
that
as the
<term>
operational semantics
</term>
of
<term>
natural language applications
</term>
improve , even larger improvements are possible .
|
#1909
The value of this approach is that as the operational semantics of natural language applications improve, even larger improvements are possible. |
|
We provide a
<term>
logical definition
</term>
of
<term>
Minimalist grammars
</term>
,
that
are
<term>
Stabler 's formalization
</term>
of
<term>
Chomsky 's minimalist program
</term>
.
|
#1935
We provide a logical definition of Minimalist grammars, that are Stabler's formalization of Chomsky's minimalist program. |
|
We show
that
the
<term>
trainable sentence planner
</term>
performs better than the
<term>
rule-based systems
</term>
and the
<term>
baselines
</term>
, and as well as the
<term>
hand-crafted system
</term>
.
|
#2101
We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system. |
|
This paper describes a method for
<term>
utterance classification
</term>
that
does not require
<term>
manual transcription
</term>
of
<term>
training data
</term>
.
|
#2213
This paper describes a method for utterance classification that does not require manual transcription of training data. |
|
The method combines
<term>
domain independent acoustic models
</term>
with off-the-shelf
<term>
classifiers
</term>
to give
<term>
utterance classification performance
</term>
that
is surprisingly close to what can be achieved using conventional
<term>
word-trigram recognition
</term>
requiring
<term>
manual transcription
</term>
.
|
#2238
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
|
We present our
<term>
multi-level answer resolution algorithm
</term>
that
combines results from the
<term>
answering agents
</term>
at the
<term>
question , passage , and/or answer levels
</term>
.
|
#2379
We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels. |
|
We conducted an
<term>
annotation experiment
</term>
and showed
that
<term>
human annotators
</term>
can reliably differentiate between semantically coherent and incoherent
<term>
speech recognition hypotheses
</term>
.
|
#2486
We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses. |
|
An evaluation of our
<term>
system
</term>
against the
<term>
annotated data
</term>
shows
that
it successfully classifies 73.2 % in a
<term>
German corpus
</term>
of 2.284
<term>
SRHs
</term>
as either coherent or incoherent ( given a
<term>
baseline
</term>
of 54.55 % ) .
|
#2511
An evaluation of our system against the annotated data shows that it successfully classifies 73.2% in a German corpus of 2.284 SRHs as either coherent or incoherent (given a baseline of 54.55%). |
|
We propose a new
<term>
phrase-based translation model
</term>
and
<term>
decoding algorithm
</term>
that
enables us to evaluate and compare several , previously proposed
<term>
phrase-based translation models
</term>
.
|
#2549
We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. |