measure(ment),7-3-H01-1070,bq |
Our
<term>
algorithm
</term>
reported more than 99 %
<term>
accuracy
</term>
in both
<term>
language identification
</term>
and
<term>
key prediction
</term>
.
|
#1284
Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
measure(ment),16-3-P01-1004,bq |
Over two distinct
<term>
datasets
</term>
, we find that
<term>
indexing
</term>
according to simple
<term>
character bigrams
</term>
produces a
<term>
retrieval
accuracy
</term>
superior to any of the tested
<term>
word N-gram models
</term>
.
|
#1548
Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models. |
measure(ment),21-4-P01-1004,bq |
Further , in their optimum
<term>
configuration
</term>
,
<term>
bag-of-words methods
</term>
are shown to be equivalent to
<term>
segment order-sensitive methods
</term>
in terms of
<term>
retrieval
accuracy
</term>
, but much faster .
|
#1581
Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster. |
measure(ment),1-4-N03-1001,bq |
The
<term>
classification
accuracy
</term>
of the
<term>
method
</term>
is evaluated on three different
<term>
spoken language system domains
</term>
.
|
#2292
The classification accuracy of the method is evaluated on three different spoken language system domains. |
measure(ment),12-2-N03-1033,bq |
Using these ideas together , the resulting
<term>
tagger
</term>
gives a 97.24 %
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
, an
<term>
error reduction
</term>
of 4.4 % on the best previous single automatically learned
<term>
tagging
</term>
result .
|
#2991
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
|
<term>
FSM
</term>
provides two strategies for
<term>
language understanding
</term>
and has a high
accuracy
but little robustness and flexibility .
|
#3527
FSM provides two strategies for language understanding and has a high accuracy but little robustness and flexibility. |
measure(ment),17-4-P03-1031,bq |
By holding multiple
<term>
candidates
</term>
for
<term>
understanding
</term>
results and resolving the
<term>
ambiguity
</term>
as the
<term>
dialogue
</term>
progresses , the
<term>
discourse understanding
accuracy
</term>
can be improved .
|
#4211
By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved. |
measure(ment),3-5-P03-1033,bq |
We obtained reasonable
<term>
classification
accuracy
</term>
for all dimensions .
|
#4375
We obtained reasonable classification accuracy for all dimensions. |
measure(ment),4-5-P03-1051,bq |
To improve the
<term>
segmentation
</term><term>
accuracy
</term>
, we use an
<term>
unsupervised algorithm
</term>
for automatically acquiring new
<term>
stems
</term>
from a 155 million
<term>
word
</term><term>
unsegmented corpus
</term>
, and re-estimate the
<term>
model parameters
</term>
with the expanded
<term>
vocabulary
</term>
and
<term>
training corpus
</term>
.
|
#4710
To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
measure(ment),10-6-P03-1051,bq |
The resulting
<term>
Arabic word segmentation system
</term>
achieves around 97 %
<term>
exact match
accuracy
</term>
on a
<term>
test corpus
</term>
containing 28,449
<term>
word tokens
</term>
.
|
#4755
The resulting Arabic word segmentation system achieves around 97% exact match accuracy on a test corpus containing 28,449 word tokens. |
measure(ment),11-4-P03-1058,bq |
On a subset of the most difficult
<term>
SENSEVAL-2 nouns
</term>
, the
<term>
accuracy
</term>
difference between the two approaches is only 14.0 % , and the difference could narrow further to 6.5 % if we disregard the advantage that
<term>
manually sense-tagged data
</term>
have in their
<term>
sense coverage
</term>
.
|
#4880
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. |
measure(ment),10-3-C04-1080,bq |
Observing that the quality of the
<term>
lexicon
</term>
greatly impacts the
<term>
accuracy
</term>
that can be achieved by the
<term>
algorithms
</term>
, we present a method of
<term>
HMM training
</term>
that improves
<term>
accuracy
</term>
when training of
<term>
lexical probabilities
</term>
is unstable .
|
#5568
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
measure(ment),28-3-C04-1080,bq |
Observing that the quality of the
<term>
lexicon
</term>
greatly impacts the
<term>
accuracy
</term>
that can be achieved by the
<term>
algorithms
</term>
, we present a method of
<term>
HMM training
</term>
that improves
<term>
accuracy
</term>
when training of
<term>
lexical probabilities
</term>
is unstable .
|
#5586
Observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms, we present a method of HMM training that improves accuracy when training of lexical probabilities is unstable. |
measure(ment),19-5-C04-1103,bq |
Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the
<term>
transliteration
accuracy
</term>
significantly .
|
#5839
Our study reveals that the proposed method not only reduces an extensive system development effort but also improves the transliteration accuracy significantly. |
measure(ment),17-4-C04-1112,bq |
Testing the
<term>
lemma-based model
</term>
on the
<term>
Dutch SENSEVAL-2 test data
</term>
, we achieve a significant increase in
<term>
accuracy
</term>
over the
<term>
wordform model
</term>
.
|
#6072
Testing the lemma-based model on the Dutch SENSEVAL-2 test data, we achieve a significant increase in accuracy over the wordform model. |
measure(ment),9-2-C04-1116,bq |
This paper proposes a new methodology to improve the
<term>
accuracy
</term>
of a
<term>
term aggregation system
</term>
using each author 's text as a coherent
<term>
corpus
</term>
.
|
#6123
This paper proposes a new methodology to improve the accuracy of a term aggregation system using each author's text as a coherent corpus. |
measure(ment),5-5-C04-1116,bq |
Our proposed method improves the
<term>
accuracy
</term>
of our
<term>
term aggregation system
</term>
, showing that our approach is successful .
|
#6188
Our proposed method improves the accuracy of our term aggregation system, showing that our approach is successful. |
measure(ment),7-2-N04-4028,bq |
Despite the successes of these systems ,
<term>
accuracy
</term>
will always be imperfect .
|
#6781
Despite the successes of these systems, accuracy will always be imperfect. |
measure(ment),23-3-H05-1095,bq |
A
<term>
statistical translation model
</term>
is also presented that deals with such
<term>
phrases
</term>
, as well as a
<term>
training method
</term>
based on the maximization of
<term>
translation
accuracy
</term>
, as measured with the
<term>
NIST evaluation metric
</term>
.
|
#7394
A statistical translation model is also presented that deals with such phrases, as well as a training method based on the maximization of translation accuracy, as measured with the NIST evaluation metric. |
measure(ment),5-4-I05-2021,bq |
Surprisingly however , the
<term>
WSD
</term><term>
accuracy
</term>
of
<term>
SMT models
</term>
has never been evaluated and compared with that of the dedicated
<term>
WSD models
</term>
.
|
#7899
Surprisingly however, the WSD accuracy of SMT models has never been evaluated and compared with that of the dedicated WSD models. |