measure(ment),15-2-H01-1058,bq |
interpolation
</term>
, improve the
<term>
|
performance
|
</term>
but fall short of the
<term>
performance
|
#1059
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
measure(ment),21-2-H01-1058,bq |
performance
</term>
but fall short of the
<term>
|
performance
|
</term>
of an
<term>
oracle
</term>
. The
<term>
|
#1065
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
measure(ment),15-3-H01-1058,bq |
<term>
word string
</term>
with the best
<term>
|
performance
|
</term>
( typically ,
<term>
word or semantic
|
#1085
The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM. |
measure(ment),18-5-H01-1058,bq |
model combination
</term>
to improve the
<term>
|
performance
|
</term>
further . We suggest a method that
|
#1149
We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. |
measure(ment),13-2-H01-1068,bq |
mission success
</term>
and
<term>
component
|
performance
|
</term>
. We describe our use of this approach
|
#1221
The three tiers measure user satisfaction, system support of mission success and component performance. |
measure(ment),19-1-P01-1004,bq |
segment contiguity
</term>
on the
<term>
retrieval
|
performance
|
</term>
of a
<term>
translation memory system
|
#1481
In this paper, we compare the relative effects of segment order, segmentation and segment contiguity on the retrieval performance of a translation memory system. |
measure(ment),4-3-P01-1009,bq |
</term>
containing them . I show that the
<term>
|
performance
|
</term>
of a
<term>
search engine
</term>
can
|
#1875
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics. |
measure(ment),7-3-P01-1070,bq |
different aspects of the
<term>
predictive
|
performance
|
</term>
of our
<term>
models
</term>
, including
|
#2178
We report on different aspects of the predictive performance of our models, including the influence of various training and testing factors on predictive performance, and examine the relationships among the target variables. |
measure(ment),23-3-P01-1070,bq |
testing factors
</term>
on
<term>
predictive
|
performance
|
</term>
, and examine the relationships among
|
#2194
We report on different aspects of the predictive performance of our models, including the influence of various training and testing factors on predictive performance, and examine the relationships among the target variables. |
measure(ment),12-2-N03-1001,bq |
</term>
to give
<term>
utterance classification
|
performance
|
</term>
that is surprisingly close to what
|
#2237
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
|
</term>
, suggest that the highest levels of
|
performance
|
can be obtained through relatively simple
|
#2606
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. |
|
models
</term>
does not have a strong impact on
|
performance
|
. Learning only
<term>
syntactically motivated
|
#2653
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. |
|
syntactically motivated phrases
</term>
degrades the
|
performance
|
of our
<term>
systems
</term>
. In this paper
|
#2662
Learning only syntactically motivated phrases degrades the performance of our systems. |
|
but also that it is possible to get bigger
|
performance
|
gains from the
<term>
data
</term>
by using
|
#3067
In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams. |
other,12-4-N03-2015,bq |
<term>
suffix
</term>
, achieving similar
<term>
|
performance
|
</term>
to more complex mixtures of techniques
|
#3220
Those hubs mark the boundary between root and suffix, achieving similar performance to more complex mixtures of techniques. |
|
</term>
approaches
<term>
supervised NE
</term>
|
performance
|
for some
<term>
NE types
</term>
. In this
|
#3383
The resulting NE system approaches supervised NE performance for some NE types. |
|
sentence alignment tasks
</term>
to evaluate its
|
performance
|
as a
<term>
similarity measure
</term>
and
|
#3854
We applied the proposed method to question classification and sentence alignment tasks to evaluate its performance as a similarity measure and a kernel function. |
|
arguments , we introduce a number of new
|
performance
|
enhancing techniques including
<term>
part
|
#4103
Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging, new similarity measures and expanded stop lists. |
|
<term>
unstemmed text
</term>
, and 96 % of the
|
performance
|
of the proprietary
<term>
stemmer
</term>
above
|
#4593
Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38% in average precision over unstemmed text, and 96% of the performance of the proprietary stemmer above. |
|
</term>
. We believe this is a state-of-the-art
|
performance
|
and the
<term>
algorithm
</term>
can be used
|
#4771
We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest. |