#1059 measure(ment),21-2-H01-1058,ak
We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle.
#1065 measure(ment),15-3-H01-1058,ak
We find that simple interpolation methods, like log-linear and linear interpolation, improve the <term>performance</term> but fall short of the <term>performance</term> of an <term>oracle</term>.
#1085 measure(ment),18-5-H01-1058,ak
The oracle knows the reference word string and selects the <term>word string</term> with the best <term>performance</term> (typically, <term>word or semantic error rate</term>) from a list of word strings, where each word string has been obtained by using a different LM.
#1149 measure(ment),13-2-H01-1068,ak
We provide experimental results that clearly show the need for a dynamic language model combination to improve the <term>performance</term> further.
#1221 measure(ment),19-1-P01-1004,ak
The three tiers measure user satisfaction, system support of mission success and <term>component performance</term>.
#1481 measure(ment),4-3-P01-1009,ak
In this paper, we compare the relative effects of segment order, segmentation and <term>segment contiguity</term> on the <term>retrieval performance</term> of a <term>translation memory system</term>.
#1876 measure(ment),7-3-P01-1070,ak
I show that the <term>performance</term> of a <term>search engine</term> can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics.
#2179 measure(ment),23-3-P01-1070,ak
We report on different aspects of the <term>predictive performance</term> of our <term>models</term>, including the influence of various training and testing factors on predictive performance, and examine the relationships among the target variables.
#2195 measure(ment),12-2-N03-1001,ak
We report on different aspects of the predictive performance of our models, including the influence of various training and testing factors on <term>predictive performance</term>, and examine the relationships among the target variables.
#2238
The method combines domain independent acoustic models with off-the-shelf classifiers to give <term>utterance classification performance</term> that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription.
#2607
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations.
#2654
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.
#2663
Learning only <term>syntactically motivated phrases</term> degrades the performance of our systems.
#3068
In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the <term>data</term> by using class-dependent interpolation of N-grams.
#3221 measure(ment),5-6-N03-2025,ak
Those hubs mark the boundary between root and <term>suffix</term>, achieving similar performance to more complex mixtures of techniques.
#3384
The resulting NE system approaches <term>supervised NE performance</term> for some <term>NE types</term>.
#3855
We applied the proposed method to question classification and <term>sentence alignment tasks</term> to evaluate its performance as a <term>similarity measure</term> and a kernel function.
#4104
Motivated by these arguments, we introduce a number of new performance enhancing techniques including <term>part of speech tagging</term>, new similarity measures and expanded stop lists.
#4595
Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38% in average precision over <term>unstemmed text</term>, and 96% of the performance of the proprietary <term>stemmer</term> above.
#4773
We believe this is a state-of-the-art performance and the <term>algorithm</term> can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest.