category | term id | snippet (context from the N03-1033 abstract)
measure(ment) | 12-2-N03-1033 | <term> tagger </term> gives a 97.24 % <term> accuracy </term> on the <term> Penn Treebank WSJ </term>
other | 26-1-N03-1033 | following <term> tag contexts </term> via a <term> dependency network representation </term> , ( ii ) broad use of <term> lexical
measure(ment) | 20-2-N03-1033 | <term> Penn Treebank WSJ </term> , an <term> error reduction </term> of 4.4 % on the best previous single
other | 22-1-N03-1033 | use of both preceding and following <term> tag contexts </term> via a <term> dependency network representation
model | 55-1-N03-1033 | effective use of <term> priors </term> in <term> conditional loglinear models </term> , and ( iv ) fine-grained modeling
tech | 40-1-N03-1033 | lexical features </term> , including <term> jointly conditioning on multiple consecutive words </term> , ( iii ) effective use of <term> priors
tech | 32-2-N03-1033 | previous single automatically learned <term> tagging </term> result . Sources of <term> training
tech | 4-1-N03-1033 | parser/generator </term> . We present a new <term> part-of-speech tagger </term> that demonstrates the following ideas
other | 66-1-N03-1033 | and ( iv ) fine-grained modeling of <term> unknown word features </term> . Using these ideas together , the
other | 36-1-N03-1033 | representation </term> , ( ii ) broad use of <term> lexical features </term> , including <term> jointly conditioning
other | 53-1-N03-1033 | words </term> , ( iii ) effective use of <term> priors </term> in <term> conditional loglinear models
tech | 7-2-N03-1033 | these ideas together , the resulting <term> tagger </term> gives a 97.24 % <term> accuracy </term>
lr-prod | 15-2-N03-1033 | 97.24 % <term> accuracy </term> on the <term> Penn Treebank WSJ </term> , an <term> error reduction </term> of
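Each row above has three fields: an entity category, a term identifier that appears to encode <token offset>-<sentence number>-<ACL paper id>, and a context snippet with <term> ... </term> markup. Below is a minimal parsing sketch, assuming the pipe-separated layout used above; the Annotation record and its field names are illustrative, not part of any published loader for this dataset.

```python
# Minimal sketch: parse one pipe-separated annotation row into a
# (category, term_id, snippet) record and pull out the <term> spans.
# The Annotation type and field names are assumptions for illustration.
import re
from typing import NamedTuple

class Annotation(NamedTuple):
    category: str   # entity type, e.g. "measure(ment)", "tech", "lr-prod"
    term_id: str    # appears to be <token offset>-<sentence>-<paper id>
    snippet: str    # context window with <term> ... </term> markup

TERM_RE = re.compile(r"<term>\s*(.*?)\s*</term>")

def parse_row(row: str) -> Annotation:
    # Split on the first two pipes only; snippets may contain commas.
    category, term_id, snippet = (f.strip() for f in row.split("|", 2))
    return Annotation(category, term_id, snippet)

def terms(ann: Annotation) -> list[str]:
    # Every <term> span in the snippet, whitespace-normalized.
    return [" ".join(m.split()) for m in TERM_RE.findall(ann.snippet)]

row = ("measure(ment) | 12-2-N03-1033 | <term> tagger </term> gives a "
       "97.24 % <term> accuracy </term> on the <term> Penn Treebank WSJ </term>")
ann = parse_row(row)
print(ann.category, terms(ann))
# -> measure(ment) ['tagger', 'accuracy', 'Penn Treebank WSJ']
```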
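The two measure(ment) rows report a 97.24 % accuracy together with a 4.4 % error reduction over the best previous single automatically learned result. Assuming "error reduction" means a relative reduction in error rate, a quick check recovers the previous best accuracy those two figures imply:

```python
# Sanity check, assuming "error reduction" is relative:
# reduction = (prev_error - new_error) / prev_error.
new_accuracy = 97.24            # % on the Penn Treebank WSJ (rows above)
reduction = 0.044               # 4.4 % relative error reduction

new_error = 100.0 - new_accuracy            # 2.76 %
prev_error = new_error / (1.0 - reduction)  # ~2.89 %
print(f"implied previous best accuracy: {100.0 - prev_error:.2f} %")
# -> implied previous best accuracy: 97.11 %
```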