#1284Our algorithm reported more than 99% accuracy in both language identification and key prediction.
measure(ment),16-3-P01-1004,ak
bigrams
</term>
produces a
<term>
retrieval
accuracy
</term>
superior to any of the tested
<term>
#1548Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models.
measure(ment),21-4-P01-1004,ak
methods
</term>
in terms of
<term>
retrieval
accuracy
</term>
, but much faster . We also provide
#1581Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster.
measure(ment),1-4-N03-1001,ak
classifier
</term>
. The
<term>
classification
accuracy
</term>
of the method is evaluated on three
#2293The classification accuracy of the method is evaluated on three different spoken language system domains.
measure(ment),12-2-N03-1033,ak
<term>
tagger
</term>
gives a 97.24 %
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
#2992Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
measure(ment),11-3-N03-3010,ak
understanding
</term>
and have a high
<term>
accuracy
</term>
but little robustness and flexibility
#3528FSM provides two strategies for language understanding and has a high accuracy but little robustness and flexibility.
measure(ment),17-4-P03-1031,ak
progresses , the
<term>
discourse understanding
accuracy
</term>
can be improved . This paper proposes
#4213By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved.
measure(ment),3-5-P03-1033,ak
obtained reasonable
<term>
classification
accuracy
</term>
for all
<term>
dimensions
</term>
.
<term>
#4377We obtained reasonable classification accuracy for all dimensions.
measure(ment),3-5-P03-1051,ak
</term>
. To improve the
<term>
segmentation
accuracy
</term>
, we use an
<term>
unsupervised algorithm
#4712To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus.
measure(ment),10-6-P03-1051,ak
</term>
achieves around 97 %
<term>
exact match
accuracy
</term>
on a
<term>
test corpus
</term>
containing
#4757The resulting Arabic word segmentation system achieves around 97% exact match accuracy on a test corpus containing 28,449 word tokens.
measure(ment),11-4-P03-1058,ak
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
accuracy
difference
</term>
between the two approaches
#4882On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage.
measure(ment),23-3-H05-1095,ak
<term>
maximization
</term>
of
<term>
translation
accuracy
</term>
, as measured with the
<term>
NIST
#5638A statistical translation model is also presented that deals with such phrases, as well as a training method based on the maximization of translation accuracy, as measured with the NIST evaluation metric.
measure(ment),4-4-I05-2021,ak
</term>
. Surprisingly however , the
<term>
WSD
accuracy
</term>
of
<term>
SMT models
</term>
has never
#6442Surprisingly however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models.
measure(ment),6-5-I05-2021,ak
controlled experiments showing the
<term>
WSD
accuracy
</term>
of current typical
<term>
SMT models
#6467We present controlled experiments showing the WSD accuracy of current typical SMT models to be significantly lower than that of all the dedicated WSD models considered.
measure(ment),20-6-I05-2044,ak
, showing improvement of
<term>
dependency
accuracy
</term>
by 10.08 % .
<term>
Statistical machine
#6732In experimental evaluation, our proposed method outperforms previous shift-reduce dependency parsers for the Chinese language, showing improvement of dependency accuracy by 10.08%.
tech,7-5-I05-5003,ak
improvement in
<term>
paraphrase classification
accuracy
</term>
over all of the other models used
#7479Our technique gives a substantial improvement in paraphrase classification accuracy over all of the other models used in the experiments.
measure(ment),11-4-I05-5009,ak
proposed method achieves almost 60 %
<term>
accuracy
</term>
and that there is not a large performance
#7759Experimental results showed that the proposed method achieves almost 60% accuracy and that there is not a large performance difference between the two models.
measure(ment),8-5-I05-5009,ak
revealed an
<term>
upper bound
</term>
of
<term>
accuracy
</term>
of 77 % with the method when using
#7782The results also revealed an upper bound of accuracy of 77% with the method when using only topic information.
measure(ment),10-4-P05-1018,ak
model achieves significantly higher
<term>
accuracy
</term>
than a state-of-the-art
<term>
coherence
#8662Our experiments demonstrate that the induced model achieves significantly higher accuracy than a state-of-the-art coherence model.
measure(ment),5-2-P05-1039,ak
corpus
</term>
. In addition to the high
<term>
accuracy
</term>
of the model , the use of
<term>
smoothing
#8990In addition to the highaccuracy of the model, the use of smoothing in an unlexicalized parser allows us to better examine the interplay between smoothing and parsing results.