measure(ment),12-2-N03-1033,bq |
<term>
tagger
</term>
gives a 97.24 %
<term>
|
accuracy
|
</term>
on the
<term>
Penn Treebank WSJ
</term>
|
#2991
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
measure(ment),11-4-P03-1058,bq |
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
|
accuracy
|
</term>
difference between the two approaches
|
#4880
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
measure(ment),10-3-C04-1080,bq |
<term>
lexicon
</term>
greatly impacts the
<term>
|
accuracy
|
</term>
that can be achieved by the
<term>
|
#5568
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
measure(ment),9-2-C04-1116,bq |
proposes a new methodology to improve the
<term>
|
accuracy
|
</term>
of a
<term>
term aggregation system
|
#6123
This paper proposes a new methodology to improve the accuracy of a term aggregation system using each author's text as a coherent corpus. |
measure(ment),21-4-H92-1017,bq |
</term>
to improving
<term>
OCR
</term><term>
|
accuracy
|
</term>
. We describe a
<term>
generative probabilistic
|
#18891
Finally, we briefly describe an experiment which we have done in extending the n-best speech/language integration architecture to improving OCR accuracy. |
measure(ment),5-5-C04-1116,bq |
. Our proposed method improves the
<term>
|
accuracy
|
</term>
of our
<term>
term aggregation system
|
#6188
Our proposed method improves the accuracy of our term aggregation system, showing that our approach is successful. |
measure(ment),7-2-N04-4028,bq |
Despite the successes of these systems ,
<term>
|
accuracy
|
</term>
will always be imperfect . For many
|
#6781
Despite the successes of these systems, accuracy will always be imperfect. |
measure(ment),16-3-P01-1004,bq |
bigrams
</term>
produces a
<term>
retrieval
|
accuracy
|
</term>
superior to any of the tested
<term>
|
#1548
Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. |
measure(ment),18-7-A94-1007,bq |
system
</term>
, and provided about 75 %
<term>
|
accuracy
|
</term>
in the practical
<term>
translation
|
#19876
This model was practically implemented and incorporated into the English-Japanese MT system, and provided about 75% accuracy in the practical translation use. |
measure(ment),10-6-P03-1051,bq |
</term>
achieves around 97 %
<term>
exact match
|
accuracy
|
</term>
on a
<term>
test corpus
</term>
containing
|
#4755
The resulting Arabic word segmentation system achieves around 97% exact match accuracy on a test corpus containing 28,449 word tokens. |
measure(ment),1-4-N03-1001,bq |
classifier
</term>
. The
<term>
classification
|
accuracy
|
</term>
of the
<term>
method
</term>
is evaluated
|
#2292
The classification accuracy of the method is evaluated on three different spoken language system domains. |
measure(ment),13-3-C92-1055,bq |
adjusting the parameters to maximize the
<term>
|
accuracy
|
rate
</term>
directly . To make the proposed
|
#17876
The proposed method remedies these problems by adjusting the parameters to maximize the accuracy rate directly. |
measure(ment),5-4-I05-2021,bq |
Surprisingly however , the
<term>
WSD
</term><term>
|
accuracy
|
</term>
of
<term>
SMT models
</term>
has never
|
#7899
Surprisingly however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models. |
measure(ment),21-4-P01-1004,bq |
methods
</term>
in terms of
<term>
retrieval
|
accuracy
|
</term>
, but much faster . We also provide
|
#1581
Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster. |
measure(ment),7-3-H01-1070,bq |
algorithm
</term>
reported more than 99 %
<term>
|
accuracy
|
</term>
in both
<term>
language identification
|
#1284
Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
measure(ment),1-6-C92-1055,bq |
has been observed in the test . The
<term>
|
accuracy
|
rate
</term>
of
<term>
syntactic disambiguation
|
#17927
The accuracy rate of syntactic disambiguation is raised from 46.0% to 60.62% by using this novel approach. |
measure(ment),28-5-H92-1026,bq |
P-CFG
</term>
, increasing the
<term>
parsing
|
accuracy
|
</term>
rate from 60 % to 75 % , a 37 % reduction
|
#19036
In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error. |
measure(ment),23-3-H05-1095,bq |
on the maximization of
<term>
translation
|
accuracy
|
</term>
, as measured with the
<term>
NIST
|
#7394
A statistical translation model is also presented that deals with such phrases, as well as a training method based on the maximization of translation accuracy, as measured with the NIST evaluation metric. |
measure(ment),19-5-C04-1103,bq |
but also improves the
<term>
transliteration
|
accuracy
|
</term>
significantly . The reality of
<term>
|
#5839
Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly. |
measure(ment),4-5-P03-1051,bq |
improve the
<term>
segmentation
</term><term>
|
accuracy
|
</term>
, we use an
<term>
unsupervised algorithm
|
#4710
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |