lr-prod,15-2-N03-1033,bq |
97.24 %
<term>
accuracy
</term>
on the
<term>
|
Penn Treebank WSJ
|
</term>
, an
<term>
error reduction
</term>
of
|
#2994
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
measure(ment),12-2-N03-1033,bq |
<term>
tagger
</term>
gives a 97.24 %
<term>
|
accuracy
|
</term>
on the
<term>
Penn Treebank WSJ
</term>
|
#2991
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
measure(ment),20-2-N03-1033,bq |
<term>
Penn Treebank WSJ
</term>
, an
<term>
|
error reduction
|
</term>
of 4.4 % on the best previous single
|
#2999
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
model,55-1-N03-1033,bq |
effective use of
<term>
priors
</term>
in
<term>
|
conditional loglinear models
|
</term>
, and ( iv ) fine-grained modeling
|
#2964
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,26-1-N03-1033,bq |
following
<term>
tag contexts
</term>
via a
<term>
|
dependency network representation
|
</term>
, ( ii ) broad use of
<term>
lexical
|
#2935
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,36-1-N03-1033,bq |
representation
</term>
, ( ii ) broad use of
<term>
|
lexical features
|
</term>
, including
<term>
jointly conditioning
|
#2945
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,53-1-N03-1033,bq |
words
</term>
, ( iii ) effective use of
<term>
|
priors
|
</term>
in
<term>
conditional loglinear models
|
#2962
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
other,66-1-N03-1033,bq |
and ( iv ) fine-grained modeling of
<term>
|
unknown word features
|
</term>
. Using these ideas together , the
|
#2975
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
tech,32-2-N03-1033,bq |
previous single automatically learned
<term>
|
tagging
|
</term>
result . Sources of
<term>
training
|
#3011
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |
tech,4-1-N03-1033,bq |
parser/generator
</term>
. We present a new
<term>
|
part-of-speech tagger
|
</term>
that demonstrates the following ideas
|
#2913
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
tech,40-1-N03-1033,bq |
lexical features
</term>
, including
<term>
|
jointly conditioning on multiple consecutive words
|
</term>
, ( iii ) effective use of
<term>
priors
|
#2949
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. |
tech,7-2-N03-1033,bq |
these ideas together , the resulting
<term>
|
tagger
|
</term>
gives a 97.24 %
<term>
accuracy
</term>
|
#2986
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result. |