other,8-3-H01-1040,ak of a preliminary , <term> qualitative user evaluation </term> of the <term> system </term> , which
tech,12-1-H01-1042,ak the efficacy of applying <term> automated evaluation techniques </term> , originally devised for
tech,20-1-H01-1042,ak </term> , originally devised for the <term> evaluation </term> of <term> human language learners </term>
tech,4-2-H01-1042,ak systems </term> . We believe that these <term> evaluation techniques </term> will provide information
tech,6-1-H01-1068,ak describe a three-tiered approach for <term> evaluation </term> of <term> spoken dialogue systems </term>
</term> of L . The results of a practical evaluation of this method on a <term> wide coverage
speech recognition hypotheses </term> . An evaluation of our <term> system </term> against the <term>
and word error rate </term> , and provide evaluation results involving <term> automatic extraction
tech,8-3-N03-1026,ak propose the use of standard <term> parser evaluation methods </term> for automatically evaluating
tech,1-4-N03-1026,ak condensation systems </term> . An <term> experimental evaluation </term> of <term> summarization quality </term>
tech,12-4-N03-1026,ak correlation between the <term> automatic parse-based evaluation </term> and a <term> manual evaluation </term>
tech,17-4-N03-1026,ak parse-based evaluation </term> and a <term> manual evaluation </term> of generated strings . Overall <term>
measure(ment),2-3-N03-2006,ak an <term> EBMT system </term> . The two <term> evaluation measures </term> of the <term> BLEU score </term>
tech,2-4-P03-1009,ak <term> polysemic verbs </term> . A novel <term> evaluation scheme </term> is proposed which accounts
developed at our laboratory . Experimental evaluation shows that the <term> cooperative responses
tech,0-7-P03-1050,ak unsupervised component </term> . <term> Task-based evaluation </term> using <term> Arabic information retrieval
measure(ment),30-3-H05-1095,ak accuracy </term> , as measured with the <term> NIST evaluation metric </term> . <term> Translations </term>
tech,5-1-H05-1117,ak recent developments in the <term> automatic evaluation </term> of <term> machine translation </term>
measure(ment),0-1-I05-2014,ak syntactic analysis system </term> . <term> Automatic evaluation metrics </term> for <term> Machine Translation
tech,30-1-I05-2021,ak performance </term> , using standard <term> WSD evaluation methodology </term> and datasets from the