measure(ment),7-2-N04-4028,bq |
Despite the successes of these systems ,
<term>
|
accuracy
|
</term>
will always be imperfect . For many
|
#6781
Despite the successes of these systems, accuracy will always be imperfect. |
measure(ment),18-7-A94-1007,bq |
system
</term>
, and provided about 75 %
<term>
|
accuracy
|
</term>
in the practical
<term>
translation
|
#19876
This model was practically implemented and incorporated into the English-Japanese MT system, and provided about 75% accuracy in the practical translation use. |
measure(ment),12-2-N03-1033,bq |
<term>
tagger
</term>
gives a 97.24 %
<term>
|
accuracy
|
</term>
on the
<term>
Penn Treebank WSJ
</term>
|
#2991
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
measure(ment),7-3-H01-1070,bq |
algorithm
</term>
reported more than 99 %
<term>
|
accuracy
|
</term>
in both
<term>
language identification
|
#1284
Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
measure(ment),7-5-I05-5003,bq |
improvement in
<term>
paraphrase classification
|
accuracy
|
</term>
over all of the other
<term>
models
|
#8429
Our technique gives a substantial improvement in paraphrase classification accuracy over all of the other models used in the experiments. |
measure(ment),3-5-P03-1033,bq |
obtained reasonable
<term>
classification
|
accuracy
|
</term>
for all dimensions .
<term>
Dialogue
|
#4375
We obtained reasonable classification accuracy for all dimensions. |
measure(ment),1-4-N03-1001,bq |
classifier
</term>
. The
<term>
classification
|
accuracy
|
</term>
of the
<term>
method
</term>
is evaluated
|
#2292
The classification accuracy of the method is evaluated on three different spoken language system domains. |
|
language understanding
</term>
and have a high
|
accuracy
|
but little robustness and flexibility .
|
#3527
FSM provides two strategies for language understanding and has a high accuracy but little robustness and flexibility. |
measure(ment),28-3-C04-1080,bq |
<term>
HMM training
</term>
that improves
<term>
|
accuracy
|
</term>
when training of
<term>
lexical probabilities
|
#5586
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
measure(ment),17-4-C04-1112,bq |
achieve a significant increase in
<term>
|
accuracy
|
</term>
over the
<term>
wordform model
</term>
|
#6072
Testing the lemma-based model on the Dutch SENSEVAL-2 test data, we achieve a significant increase in accuracy over the wordform model. |
measure(ment),10-6-P03-1051,bq |
</term>
achieves around 97 %
<term>
exact match
|
accuracy
|
</term>
on a
<term>
test corpus
</term>
containing
|
#4755
The resulting Arabic word segmentation system achieves around 97% exact match accuracy on a test corpus containing 28,449 word tokens. |
measure(ment),21-4-H92-1017,bq |
</term>
to improving
<term>
OCR
</term><term>
|
accuracy
|
</term>
. We describe a
<term>
generative probabilistic
|
#18891
Finally, we briefly describe an experiment which we have done in extending the n-best speech/language integration architecture to improving OCR accuracy. |
measure(ment),28-5-H92-1026,bq |
P-CFG
</term>
, increasing the
<term>
parsing
|
accuracy
|
</term>
rate from 60 % to 75 % , a 37 % reduction
|
#19036
In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error. |
measure(ment),13-4-H94-1014,bq |
show a 7 % improvement in
<term>
recognition
|
accuracy
|
</term>
with the
<term>
mixture trigram models
|
#21277
Using the BU recognition system, experiments show a 7% improvement in recognition accuracy with the mixture trigram models as compared to using a trigram model. |
measure(ment),16-3-P01-1004,bq |
bigrams
</term>
produces a
<term>
retrieval
|
accuracy
|
</term>
superior to any of the tested
<term>
|
#1548
Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. |
measure(ment),21-4-P01-1004,bq |
methods
</term>
in terms of
<term>
retrieval
|
accuracy
|
</term>
, but much faster . We also provide
|
#1581
Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster. |
measure(ment),4-5-P03-1051,bq |
improve the
<term>
segmentation
</term><term>
|
accuracy
|
</term>
, we use an
<term>
unsupervised algorithm
|
#4710
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
measure(ment),11-4-P03-1058,bq |
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
|
accuracy
|
</term>
difference between the two approaches
|
#4880
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
measure(ment),1-6-C92-1055,bq |
has been observed in the test . The
<term>
|
accuracy
|
rate
</term>
of
<term>
syntactic disambiguation
|
#17927
The accuracy rate of syntactic disambiguation is raised from 46.0% to 60.62% by using this novel approach. |
measure(ment),10-3-C04-1080,bq |
<term>
lexicon
</term>
greatly impacts the
<term>
|
accuracy
|
</term>
that can be achieved by the
<term>
|
#5568
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
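Each record above follows the same layout: a `measure(ment),<offset>-<sentence>-<paperID>,bq` header, a tokenized fragment in which `<term>` tags mark annotated terms and standalone `|` lines delimit the keyword span, a `#<id>` line, and the full source sentence terminated by `|`. The following is a minimal parsing sketch under that assumption; the field names (`header`, `fragment`, `sentence`) and the `parse_record` helper are illustrative, not part of any existing tool.

```python
def parse_record(text):
    """Parse one concordance record into its four parts.

    Assumes the layout seen in this file: a header line ending in '|',
    fragment lines up to a '#<id>' line, then the full sentence.
    Standalone '|' lines (the keyword-span delimiters) are dropped
    when the fragment is joined.
    """
    lines = [line.strip() for line in text.splitlines()]
    header = lines[0].rstrip(" |")
    # Locate the '#<id>' line separating fragment from full sentence.
    idx = next(i for i, line in enumerate(lines) if line.startswith("#"))
    fragment = " ".join(line for line in lines[1:idx] if line and line != "|")
    rec_id = lines[idx].lstrip("#")
    sentence = " ".join(lines[idx + 1:]).rstrip(" |").strip()
    return {"header": header, "id": rec_id,
            "fragment": fragment, "sentence": sentence}


SAMPLE = """measure(ment),12-2-N03-1033,bq |
<term>
tagger
</term>
gives a 97.24 %
<term>
|
accuracy
|
</term>
on the
<term>
Penn Treebank WSJ
</term>
|
#2991
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ. |"""

rec = parse_record(SAMPLE)
```

Splitting the file itself into records can then be done by scanning for lines that start with `measure(ment),` and feeding each chunk to `parse_record`.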