measure(ment),15-2-H01-1058: interpolation </term> , improve the <term> performance </term> but fall short of the <term> performance
measure(ment),21-2-H01-1058: performance </term> but fall short of the <term> performance </term> of an <term> oracle </term> . The <term>
measure(ment),15-3-H01-1058: <term> word string </term> with the best <term> performance </term> ( typically , <term> word or semantic
measure(ment),18-5-H01-1058: model combination </term> to improve the <term> performance </term> further . We suggest a method that
measure(ment),13-2-H01-1068: mission success </term> and <term> component performance </term> . We describe our use of this approach
measure(ment),19-1-P01-1004: segment contiguity </term> on the <term> retrieval performance </term> of a <term> translation memory system
measure(ment),4-3-P01-1009: </term> containing them . I show that the <term> performance </term> of a <term> search engine </term> can
measure(ment),7-3-P01-1070: different aspects of the <term> predictive performance </term> of our <term> models </term> , including
measure(ment),23-3-P01-1070: testing factors </term> on <term> predictive performance </term> , and examine the relationships among
measure(ment),12-2-N03-1001: </term> to give <term> utterance classification performance </term> that is surprisingly close to what
</term> , suggest that the highest levels of performance can be obtained through relatively simple
models </term> does not have a strong impact on performance . Learning only <term> syntactically motivated
syntactically motivated phrases </term> degrades the performance of our <term> systems </term> . In this paper
but also that it is possible to get bigger performance gains from the <term> data </term> by using
other,12-4-N03-2015: <term> suffix </term> , achieving similar <term> performance </term> to more complex mixtures of techniques
</term> approaches <term> supervised NE </term> performance for some <term> NE types </term> . In this
sentence alignment tasks </term> to evaluate its performance as a <term> similarity measure </term> and
arguments , we introduce a number of new performance enhancing techniques including <term> part
<term> unstemmed text </term> , and 96 % of the performance of the proprietary <term> stemmer </term> above
</term> . We believe this is a state-of-the-art performance and the <term> algorithm </term> can be used
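The listing above looks like a keyword-in-context export in which each fully annotated line carries a leading `category,tokenIndex-sentenceIndex-docID` field (the IDs resemble ACL Anthology paper identifiers such as H01-1058), followed by a snippet whose `<term> ... </term>` markup may be truncated at either window edge; lines without that prefix are bare snippets. As a minimal sketch, assuming that layout (the field names are my own, and the stray `bq` token seen in some raw exports is treated as an optional separator artifact), a parser might look like:

```python
import re

# Assumed record layout (not documented in the source):
#   <category>,<token>-<sentence>-<docID> <snippet>
# where docID looks like an ACL Anthology identifier (e.g. H01-1058).
# The separator after the ID is a ':' in cleaned listings; raw exports
# sometimes show ',bq' instead, so both are accepted here.
RECORD = re.compile(
    r"^(?P<category>[\w()]+)"
    r",(?P<token>\d+)-(?P<sentence>\d+)-(?P<docid>[A-Z]\d{2}-\d{4})"
    r"[,:]\s*(?:bq\s+)?"
    r"(?P<snippet>.*)$"
)

def parse_record(line: str):
    """Return a dict of fields, or None for snippet-only lines."""
    m = RECORD.match(line)
    if m is None:
        return None
    rec = m.groupdict()
    # Collect only terms fully enclosed in the window; markup cut off
    # at the snippet boundary is left alone.
    rec["terms"] = re.findall(r"<term>\s*(.*?)\s*</term>", rec["snippet"])
    return rec

example = ("measure(ment),15-2-H01-1058: interpolation </term> , improve the "
           "<term> performance </term> but fall short of the <term> performance")
print(parse_record(example)["terms"])  # ['performance']
```

Lines lacking the metadata prefix simply return `None`, so the same loop can pass over both kinds of rows without special-casing.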