#1092The oracle knows the reference word string and selects the word string with the best performance (typically, word or semantic error rate) from a list of word strings, where each word string has been obtained by using a different LM.
tech,7-2-N03-1018,ak
model
</term>
is designed for use in
<term>
error
correction
</term>
, with a focus on
<term>
#2720The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks.
measure(ment),20-3-N03-1018,ak
significantly reduce
<term>
character and word
error
rate
</term>
, and provide evaluation results
#2768We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text.
other,20-2-N03-1033,ak
<term>
Penn Treebank WSJ
</term>
, an
<term>
error
reduction
</term>
of 4.4 % on the best previous
#3000Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
other,14-1-H05-1005,ak
</term>
in multilingual input to correct
<term>
errors
</term>
in
<term>
machine translation
</term>
#5165In this paper, we use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries.
other,3-5-H05-1005,ak
information in English . We demonstrate how
<term>
errors
</term>
in the
<term>
machine translations
</term>
#5249We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy, focusing on noun phrases.
other,2-4-I05-2013,ak
<term>
ILIMP
</term>
is 97.5 % . The few
<term>
errors
</term>
are analyzed in detail . Other tasks
#6189The few errors are analyzed in detail.
measure(ment),14-8-J05-1003,ak
13 % relative decrease in
<term>
F-measure
error
</term>
over the
<term>
baseline model ’s
</term>
#8215The new model achieved 89.75% F-measure, a 13% relative decrease in F-measure error over the baseline model’s score of 88.2%.
tech,0-4-P05-1048,ak
machine translation system
</term>
alone .
<term>
Error
analysis
</term>
suggests several key factors
#9245Using a state-of-the-art Chinese word sense disambiguation model to choose translation candidates for a typical IBM statistical MT system, we find that word sense disambiguation does not yield significantly better translation quality than the statistical machine translation system alone. Error analysis suggests several key factors behind this surprising finding, including inherent limitations of current statistical MT architectures.
measure(ment),9-5-P05-1056,ak
<term>
CRF model
</term>
yields a lower
<term>
error
rate
</term>
than the
<term>
HMM and Maxent
#9542In general, our CRF model yields a lower error rate than the HMM and Maxent models on the NIST sentence boundary detection task in speech, although it is interesting to note that the best results are achieved by three-way voting among the classifiers.
measure(ment),20-4-P05-1058,ak
recall
</term>
, achieving a
<term>
relative
error
rate reduction
</term>
of 6.56 % as compared
#9775Experimental results show that our approach improves domain-specific word alignment in terms of both precision and recall, achieving a relative error rate reduction of 6.56% as compared with the state-of-the-art technologies.
measure(ment),4-4-P05-1073,ak
models
</term>
. This system achieves an
<term>
error
reduction
</term>
of 22 % on all
<term>
arguments
#10116This system achieves an error reduction of 22% on all arguments and 32% on core arguments over a state-of-the art independent classifier for gold-standard parse trees on PropBank.
other,5-6-E06-1035,ak
We also find that the
<term>
transcription
errors
</term>
inevitable in
<term>
ASR output
</term>
#11551We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.
tech,3-1-J86-1002,ak
multi-lingual texts
</term>
. A method for
<term>
error
correction
</term>
of
<term>
ill-formed input
#16174A method for error correction of ill-formed input is described that acquires dialogue patterns in typical usage and uses these patterns to predict new inputs.
tech,0-2-J86-1002,ak
to predict new
<term>
inputs
</term>
.
<term>
Error
correction
</term>
is done by strongly biasing
#16197A method for error correction of ill-formed input is described that acquires dialogue patterns in typical usage and uses these patterns to predict new inputs. Error correction is done by strongly biasing parsing toward expected meanings unless clear evidence from the input shows the current sentence is not expected.
tech,12-4-J86-1002,ak
described that show the power of the
<term>
error
correction methodology
</term>
when stereotypic
#16255A series of tests are described that show the power of the error correction methodology when stereotypic dialogue occurs.
other,7-2-C88-2160,ak
explanation of an
<term>
ambiguity
</term>
or an
<term>
error
</term>
for the purposes of correction does
#18515The explanation of an ambiguity or an error for the purposes of correction does not use any concepts of the underlying linguistic theory: it is a reformulation of the erroneous or ambiguous sentence.
measure(ment),14-4-H90-1060,ak
recognition
</term>
, we achieved a 7.5 %
<term>
word
error
rate
</term>
on a standard
<term>
grammar
</term>
#21144With only 12 training speakers for SI recognition, we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus.
measure(ment),12-9-H90-1060,ak
</term>
for
<term>
adaptation
</term>
, the
<term>
error
rate
</term>
dropped to 4.1 % --- a 45 %
#21258Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result.
dropped to 4.1 % --- a 45 % reduction in
<term>
error
</term>
compared to the SI result . This paper
#21270Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result.
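Several of the excerpts above quote relative error reductions derived from two absolute scores (the 13% decrease in F-measure error over a baseline of 88.2%, and the 45% reduction from a 7.5% to a 4.1% word error rate). As a minimal sketch of that arithmetic (the function name is illustrative, not from any of the cited papers):

```python
def relative_error_reduction(baseline_error: float, new_error: float) -> float:
    """Relative reduction in error, as a percentage of the baseline error."""
    return (baseline_error - new_error) / baseline_error * 100.0

# F-measure error is 100 - F. Baseline F = 88.2%, new F = 89.75%.
f_err_reduction = relative_error_reduction(100 - 88.2, 100 - 89.75)
# ≈ 13%, matching the "13% relative decrease in F-measure error" quoted above.

# Word error rate: SI result 7.5% WER, speaker-adapted result 4.1% WER.
wer_reduction = relative_error_reduction(7.5, 4.1)
# ≈ 45%, matching the "45% reduction in error" quoted above.
```

Note that a *relative* reduction is measured against the baseline's error mass, which is why a 1.55-point gain in F-measure translates into a 13% figure.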