#17523
For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully.
measure(ment),7-5-H05-1012,bq
#7335
Performance of the algorithm is contrasted with human annotation performance.
measure(ment),15-3-H01-1058,bq
#1085
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
#3067
In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.
measure(ment),12-2-N03-1001,bq
#2237
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription.
measure(ment),13-2-H01-1068,bq
#1221
The three tiers measure user satisfaction, system support of mission success and component performance.
measure(ment),23-1-I05-2021,bq
#7812
We present the first known empirical test of an increasingly common speculative claim, by evaluating a representative Chinese-to-English SMT model directly on word sense disambiguation performance, using standard WSD evaluation methodology and datasets from the Senseval-3 Chinese lexical sample task.
#20593
The preliminary experiments show good performance.
measure(ment),37-2-C92-1055,bq
#17858
Owing to the problem of insufficient training data and approximation error introduced by the language model, traditional statistical approaches, which resolve ambiguities by indirectly and implicitly using maximum likelihood method, fail to achieve high performance in real applications.
#6745
Results indicate that the system yields higher performance than a baseline on all three aspects.
#12596
This paper defends that view, but claims that direct imitation of human performance is not the best way to implement many of these non-literal aspects of communication; that the new technology of powerful personal computers with integral graphics displays offers techniques superior to those of humans for these aspects, while still satisfying human communication needs.
#20129
We then proceed to repeat results which show that standard statistical models are not particularly suitable for exploiting linguistically sophisticated representations, and show that a statistically fitted rule-based model provides significantly improved performance for sophisticated representations.
#19897
The use of NLP techniques for document classification has not produced significant improvements in performance within the standard term weighting statistical assignment paradigm (Fagan 1987; Lewis, 1992bc; Buckley, 1993).
#10727
We describe a clustering algorithm which is sufficiently general to be applied to these diverse problems, discuss its application, and evaluate its performance.
#3854
We applied the proposed method to question classification and sentence alignment tasks to evaluate its performance as a similarity measure and a kernel function.
#6639
Our results show that MBR decoding can be used to tune statistical MT performance for specific loss functions.
#3383
The resulting NE system approaches supervised NE performance for some NE types.
#4103
Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging, new similarity measures and expanded stop lists.
#2606
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations.
#2653
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.