measure(ment),7-3-H01-1070,bq |
algorithm
</term>
reported more than 99 %
<term>
|
accuracy
|
</term>
in both
<term>
language identification
|
#1284
Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
measure(ment),16-3-P01-1004,bq |
bigrams
</term>
produces a
<term>
retrieval
|
accuracy
|
</term>
superior to any of the tested
<term>
|
#1548
Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. |
measure(ment),1-4-N03-1001,bq |
classifier
</term>
. The
<term>
classification
|
accuracy
|
</term>
of the
<term>
method
</term>
is evaluated
|
#2292
The classification accuracy of the method is evaluated on three different spoken language system domains. |
measure(ment),12-2-N03-1033,bq |
<term>
tagger
</term>
gives a 97.24 %
<term>
|
accuracy
|
</term>
on the
<term>
Penn Treebank WSJ
</term>
|
#2991
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
|
language understanding
</term>
and have a high
|
accuracy
|
but little robustness and flexibility .
|
#3527
FSM provides two strategies for language understanding and has high accuracy but little robustness and flexibility. |
measure(ment),17-4-P03-1031,bq |
progresses , the
<term>
discourse understanding
|
accuracy
|
</term>
can be improved . This paper proposes
|
#4211
By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved. |
measure(ment),3-5-P03-1033,bq |
obtained reasonable
<term>
classification
|
accuracy
|
</term>
for all dimensions .
<term>
Dialogue
|
#4375
We obtained reasonable classification accuracy for all dimensions. |
measure(ment),4-5-P03-1051,bq |
improve the
<term>
segmentation
</term><term>
|
accuracy
|
</term>
, we use an
<term>
unsupervised algorithm
|
#4710
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
measure(ment),11-4-P03-1058,bq |
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
|
accuracy
|
</term>
difference between the two approaches
|
#4880
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
measure(ment),10-3-C04-1080,bq |
<term>
lexicon
</term>
greatly impacts the
<term>
|
accuracy
|
</term>
that can be achieved by the
<term>
|
#5568
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
measure(ment),19-5-C04-1103,bq |
but also improves the
<term>
transliteration
|
accuracy
|
</term>
significantly . The reality of
<term>
|
#5839
Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly. |
measure(ment),17-4-C04-1112,bq |
achieve a significant increase in
<term>
|
accuracy
|
</term>
over the
<term>
wordform model
</term>
|
#6072
Testing the lemma-based model on the Dutch SENSEVAL-2 test data, we achieve a significant increase in accuracy over the wordform model. |
measure(ment),9-2-C04-1116,bq |
proposes a new methodology to improve the
<term>
|
accuracy
|
</term>
of a
<term>
term aggregation system
|
#6123
This paper proposes a new methodology to improve the accuracy of a term aggregation system using each author's text as a coherent corpus. |
measure(ment),7-2-N04-4028,bq |
Despite the successes of these systems ,
<term>
|
accuracy
|
</term>
will always be imperfect . For many
|
#6781
Despite the successes of these systems, accuracy will always be imperfect. |
measure(ment),23-3-H05-1095,bq |
on the maximization of
<term>
translation
|
accuracy
|
</term>
, as measured with the
<term>
NIST
|
#7394
A statistical translation model is also presented that deals with such phrases, as well as a training method based on the maximization of translation accuracy, as measured with the NIST evaluation metric. |
measure(ment),5-4-I05-2021,bq |
Surprisingly however , the
<term>
WSD
</term><term>
|
accuracy
|
</term>
of
<term>
SMT models
</term>
has never
|
#7899
Surprisingly however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models. |
measure(ment),7-5-I05-5003,bq |
improvement in
<term>
paraphrase classification
|
accuracy
|
</term>
over all of the other
<term>
models
|
#8429
Our technique gives a substantial improvement in paraphrase classification accuracy over all of the other models used in the experiments. |
measure(ment),13-3-C92-1055,bq |
adjusting the parameters to maximize the
<term>
|
accuracy
|
rate
</term>
directly . To make the proposed
|
#17876
The proposed method remedies these problems by adjusting the parameters to maximize the accuracy rate directly. |
measure(ment),21-4-H92-1017,bq |
</term>
to improving
<term>
OCR
</term><term>
|
accuracy
|
</term>
. We describe a
<term>
generative probabilistic
|
#18891
Finally, we briefly describe an experiment which we have done in extending the n-best speech/language integration architecture to improving OCR accuracy. |
measure(ment),28-5-H92-1026,bq |
P-CFG
</term>
, increasing the
<term>
parsing
|
accuracy
|
</term>
rate from 60 % to 75 % , a 37 % reduction
|
#19036
In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error. |