#583 We believe that these evaluation techniques will provide information about the human language learning process, the translation process, and the development of machine translation systems.
#633 A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words.
#806 We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources.
#1046 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
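The entry above mentions linear and log-linear interpolation of language models. A minimal sketch of the two combination rules for a single word probability (the weights and probability values are illustrative, not taken from the cited paper; a real system renormalizes the log-linear combination over the vocabulary):

```python
import math

def linear_interp(p1, p2, lam=0.5):
    """Linear interpolation: a weighted average of two models' probabilities."""
    return lam * p1 + (1 - lam) * p2

def loglinear_interp(p1, p2, lam=0.5):
    """Log-linear interpolation: a weighted geometric mean of the two
    probabilities (unnormalized here; real use renormalizes)."""
    return math.exp(lam * math.log(p1) + (1 - lam) * math.log(p2))

# Two models' probabilities for the same word in the same context:
p_lm1, p_lm2 = 0.08, 0.002
print(linear_interp(p_lm1, p_lm2))     # 0.041
print(loglinear_interp(p_lm1, p_lm2))  # ~0.01265 (geometric mean)
```

Linear interpolation is dominated by the larger probability, while log-linear interpolation penalizes words that any one model considers unlikely; the oracle in the entry above upper-bounds what any such fixed-weight combination can achieve.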
#1135 We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further.
#1156 We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree.
#1435 We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan.
#1538 Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models.
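The entry above compares character-bigram indexing with word N-gram indexing for retrieval. A minimal sketch of bigram indexing with Dice-coefficient scoring (the scoring function and documents are illustrative assumptions, not the tested system):

```python
from collections import Counter

def char_bigrams(text):
    """Multiset of overlapping character bigrams of the lowercased text."""
    t = text.lower()
    return Counter(t[i:i + 2] for i in range(len(t) - 1))

def dice(query, doc):
    """Dice similarity between the bigram multisets of query and document."""
    qb, db = char_bigrams(query), char_bigrams(doc)
    overlap = sum((qb & db).values())
    return 2 * overlap / (sum(qb.values()) + sum(db.values()))

docs = ["retrieval accuracy", "character bigram indexing", "word n-gram models"]
query = "bigram index"
best = max(docs, key=lambda d: dice(query, d))
print(best)  # "character bigram indexing"
```

Because bigrams cross word boundaries and need no tokenizer, this kind of index tolerates spelling variation and agglutination that word N-gram models miss, which is one plausible reason for the accuracy advantage the entry reports.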
#1591 We also provide evidence that our findings are scalable.
#1874 I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics.
#1893 I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics.
#1910 The value of this approach is that as the operational semantics of natural language applications improve, even larger improvements are possible.
#1936 We provide a logical definition of Minimalist grammars, which are Stabler's formalization of Chomsky's minimalist program.
#2102 We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system.
#2214 This paper describes a method for utterance classification that does not require manual transcription of training data.
#2239 The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription.
#2380 We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels.
#2487 We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses.
#2512 An evaluation of our system against the annotated data shows that it successfully classifies 73.2% in a German corpus of 2,284 SRHs as either coherent or incoherent (given a baseline of 54.55%).
#2550 We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several previously proposed phrase-based translation models.