|
translation ( MT ) systems
</term>
. We believe
|
that
|
these
<term>
evaluation techniques
</term>
|
#583
We believe that these evaluation techniques will provide information about the human language learning process, the translation process, and the development of machine translation systems. |
|
with
<term>
intelligent mobile agents
</term>
|
that
|
mediate between
<term>
users
</term>
and
<term>
|
#806
We integrate a spoken language understanding system with intelligent mobile agents that mediate between users and information sources. |
|
<term>
language models ( LMs )
</term>
. We find
|
that
|
simple
<term>
interpolation methods
</term>
|
#1046
We find that simple interpolation methods, like log-linear and linear interpolation, improve performance but fall short of the performance of an oracle. |
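As a concrete illustration of the two interpolation schemes named in #1046, here is a minimal Python sketch; the weight lam and the toy probabilities are assumptions made for the example, not values from the paper.

```python
import math

def linear_interpolate(p_a: float, p_b: float, lam: float = 0.5) -> float:
    """Linear interpolation: a weighted average of two LM probabilities."""
    return lam * p_a + (1.0 - lam) * p_b

def log_linear_interpolate(p_a: float, p_b: float, lam: float = 0.5) -> float:
    """Log-linear interpolation: a weighted geometric mean of two LM
    probabilities. Unnormalized; a true distribution would renormalize
    over the vocabulary."""
    return math.exp(lam * math.log(p_a) + (1.0 - lam) * math.log(p_b))

# Two LMs assign different probabilities to the same word in context:
print(linear_interpolate(0.02, 0.08))      # 0.05
print(log_linear_interpolate(0.02, 0.08))  # 0.04 (= sqrt(0.02 * 0.08))
```

Linear interpolation mixes in probability space while log-linear interpolation mixes in log space, which is why the latter needs renormalization to remain a distribution.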
|
from
<term>
training data
</term>
. We show
|
that
|
the trained
<term>
SPR
</term>
learns to select
|
#1435
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
|
two distinct
<term>
datasets
</term>
, we find
|
that
|
<term>
indexing
</term>
according to simple
|
#1538
Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. |
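A minimal sketch of the character-bigram indexing that #1538 finds superior to word N-gram models; the toy documents and the decision to drop whitespace are assumptions for illustration.

```python
from collections import defaultdict

def char_bigrams(text: str) -> list[str]:
    """Overlapping character bigrams of a string (whitespace removed)."""
    s = "".join(text.split())
    return [s[i:i + 2] for i in range(len(s) - 1)]

def build_index(docs: dict[int, str]) -> dict[str, set[int]]:
    """Inverted index mapping each bigram to the documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for bigram in char_bigrams(text):
            index[bigram].add(doc_id)
    return dict(index)

docs = {1: "machine translation", 2: "translation systems"}
index = build_index(docs)
print(index["tr"])  # {1, 2}: both documents contain the bigram "tr"
```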
|
<term>
queries
</term>
containing them . I show
|
that
|
the
<term>
performance
</term>
of a
<term>
search
|
#1873
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics. |
|
</term>
of
<term>
Minimalist grammars
</term>
,
|
that
|
are
<term>
Stabler 's formalization
</term>
|
#1935
We provide a logical definition of Minimalist grammars, which are Stabler's formalization of Chomsky's minimalist program. |
|
baseline sentence planners
</term>
. We show
|
that
|
the
<term>
trainable sentence planner
</term>
|
#2101
We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system. |
|
for
<term>
utterance classification
</term>
|
that
|
does not require
<term>
manual transcription
|
#2213
This paper describes a method for utterance classification that does not require manual transcription of training data. |
|
multi-level answer resolution algorithm
</term>
|
that
|
combines results from the
<term>
answering
|
#2379
We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels. |
|
<term>
annotation experiment
</term>
and showed
|
that
|
<term>
human annotators
</term>
can reliably
|
#2486
We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses. |
|
</term>
and
<term>
decoding algorithm
</term>
|
that
|
enables us to evaluate and compare several
|
#2549
We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several previously proposed phrase-based translation models. |
|
character recognition ( OCR ) model
</term>
|
that
|
describes an end-to-end process in the
<term>
|
#2683
In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. |
|
a new
<term>
part-of-speech tagger
</term>
|
that
|
demonstrates the following ideas : ( i
|
#2915
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional log-linear models, and (iv) fine-grained modeling of unknown word features. |
|
target
<term>
recognition task
</term>
, but also
|
that
|
it is possible to get bigger performance
|
#3060
In this paper, we show not only how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams. |
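The class-dependent interpolation mentioned in #3060 can be sketched as follows: the mixture weight between the in-domain model and the web-data model varies with the class of the predicted word. The two classes, the weights, and the stub models below are invented for the example.

```python
# Weight placed on the in-domain model, per (assumed) word class.
WEIGHTS = {"function": 0.8, "content": 0.4}

def word_class(word: str) -> str:
    """Toy classifier: a few common words count as function words."""
    return "function" if word in {"the", "a", "of", "is"} else "content"

def interpolated_prob(word, history, p_domain, p_web):
    """Class-dependent linear interpolation of two N-gram models."""
    lam = WEIGHTS[word_class(word)]
    return lam * p_domain(word, history) + (1.0 - lam) * p_web(word, history)

# Stub models that ignore the history, for demonstration only:
p_domain = lambda w, h: {"the": 0.05, "angiogram": 0.002}.get(w, 1e-6)
p_web = lambda w, h: {"the": 0.06, "angiogram": 0.0001}.get(w, 1e-6)
print(interpolated_prob("angiogram", (), p_domain, p_web))  # 0.00086
```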
|
<term>
algorithms
</term>
. The results show
|
that
|
it can provide a significant improvement
|
#3274
The results show that it can provide a significant improvement in alignment quality. |
|
or
<term>
pronoun
</term>
<term>
seeds
</term>
|
that
|
correspond to the
<term>
concept
</term>
for
|
#3316
This approach only requires a few common noun or pronoun seeds that correspond to the concept for the targeted NE, e.g. he/she/man/woman for PERSON NE. |
|
<term>
statistical machine translation
</term>
|
that
|
uses a much simpler set of
<term>
model parameters
|
#3403
In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. |
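Reading #3403 in its simplest form, a phrase-based unigram model scores a sentence pair as a product of independent phrase-pair probabilities; the toy phrase table below is an assumption for illustration, not the paper's parameterization.

```python
import math

# Toy phrase table: joint unigram probabilities of (source, target) pairs.
PHRASE_PROBS = {
    ("das haus", "the house"): 0.4,
    ("ist klein", "is small"): 0.3,
}

def segmentation_score(phrase_pairs: list) -> float:
    """Score one segmentation as the product of its phrase-pair
    probabilities, i.e. each pair is drawn independently."""
    log_p = sum(math.log(PHRASE_PROBS[pair]) for pair in phrase_pairs)
    return math.exp(log_p)

pairs = [("das haus", "the house"), ("ist klein", "is small")]
print(segmentation_score(pairs))  # 0.4 * 0.3 = 0.12
```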
|
novel , customizable
<term>
IE paradigm
</term>
|
that
|
takes advantage of
<term>
predicate-argument
|
#3722
In this paper we present a novel, customizable IE paradigm that takes advantage of predicate-argument structures. |
|
The results of the experiments demonstrate
|
that
|
the
<term>
HDAG Kernel
</term>
is superior
|
#3870
The results of the experiments demonstrate that the HDAG Kernel is superior to other kernel functions and baseline methods. |