measure(ment),19-3-H01-1058 </term> ( typically , <term> word or semantic error rate </term> ) from a list of <term> word strings
tech,7-2-N03-1018 model </term> is designed for use in <term> error correction </term> , with a focus on <term>
measure(ment),20-3-N03-1018 significantly reduce <term> character and word error rate </term> , and provide evaluation results
measure(ment),20-2-N03-1033 <term> Penn Treebank WSJ </term> , an <term> error reduction </term> of 4.4 % on the best previous
other,16-5-C04-1192 be used to check and spot <term> alignment errors </term> in <term> multilingually aligned wordnets
other,6-2-N04-1022 minimize <term> expected loss of translation errors </term> under <term> loss functions </term> that
<term> multilingual input </term> to correct errors in <term> machine translation </term> and thus
<term> English </term> . We demonstrate how errors in the <term> machine translations </term>
relative decrease in <term> F-measure </term> error over the <term> baseline model ’s score </term>
tech,0-4-P05-1048 machine translation system </term> alone . <term> Error analysis </term> suggests several key factors
other,5-6-E06-1035 We also find that the <term> transcription errors </term> inevitable in <term> ASR output </term>
tech,3-1-J86-1002 multilingual texts </term> . A method for <term> error correction </term> of <term> ill-formed input
tech,0-2-J86-1002 patterns </term> to predict new inputs . <term> Error correction </term> is done by strongly biasing
tech,12-4-J86-1002 described that show the power of the <term> error correction methodology </term> when <term>
explanation of an <term> ambiguity </term> or an error for the purposes of correction does not
measure(ment),14-4-H90-1060 recognition </term> , we achieved a 7.5 % <term> word error rate </term> on a standard <term> grammar </term>
measure(ment),12-9-H90-1060 </term> for <term> adaptation </term> , the <term> error rate </term> dropped to 4.1 % --- a 45 %
other,24-9-H90-1060 dropped to 4.1 % --- a 45 % reduction in <term> error </term> compared to the <term> SI </term> result
other,9-2-C92-1055 training data </term> and <term> approximation error </term> introduced by the <term> language model
proceeds from left to right correcting minor errors . When at very noisy <term> portion </term>
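Most entries above share one pattern: an annotation class (measure(ment), tech, other), a pair of positional offsets joined to an ACL Anthology paper ID (e.g. H01-1058), and a tokenized KWIC context in which annotated spans are wrapped in <term> ... </term> tags. The sketch below shows one way such lines could be split into structured records; the field interpretation, the Snippet record, and the regular expressions are illustrative assumptions, not part of the original listing.

    import re
    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical record for one concordance line. The meaning of the prefix
    # fields (annotation class, positional offsets, ACL Anthology paper ID) is
    # an assumption based on the shape of the entries above.
    @dataclass
    class Snippet:
        ann_class: Optional[str]  # e.g. "measure(ment)", "tech", "other"
        offsets: Optional[str]    # e.g. "19-3" (assumed term/sentence offsets)
        paper_id: Optional[str]   # e.g. "H01-1058" (ACL Anthology ID)
        context: str              # tokenized context with <term> ... </term> markup

    PREFIX = re.compile(
        r"^(?P<cls>[\w()]+),(?P<off>\d+-\d+)-(?P<pid>[A-Z]\d{2}-\d{4})\s+(?P<ctx>.*)$"
    )

    def parse_line(line: str) -> Snippet:
        """Split a concordance line into its metadata prefix and context snippet."""
        m = PREFIX.match(line.strip())
        if m:
            return Snippet(m.group("cls"), m.group("off"), m.group("pid"), m.group("ctx"))
        # Some lines carry no metadata prefix, only the context snippet.
        return Snippet(None, None, None, line.strip())

    def terms(snippet: Snippet) -> List[str]:
        """Return the annotated spans that are fully contained in the context."""
        return re.findall(r"<term>\s*(.*?)\s*</term>", snippet.context)

    if __name__ == "__main__":
        example = ("measure(ment),19-3-H01-1058 </term> ( typically , <term> word or "
                   "semantic error rate </term> ) from a list of <term> word strings")
        s = parse_line(example)
        print(s.paper_id)  # H01-1058
        print(terms(s))    # ['word or semantic error rate']  (the trailing term is truncated)

Lines without a metadata prefix (and spans cut off at either end of a snippet) simply yield fewer fields or fewer terms; the parser leaves those gaps rather than guessing at the missing annotation.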