tech,4-1-N03-1033,ak parser/generator </term> . We present a new <term> part-of-speech tagger </term> that demonstrates the following ideas
other,22-1-N03-1033,ak use of both preceding and following <term> tag contexts </term> via a <term> dependency network representation
model,26-1-N03-1033,ak following <term> tag contexts </term> via a <term> dependency network representation </term> , ( ii ) broad use of <term> lexical
other,36-1-N03-1033,ak representation </term> , ( ii ) broad use of <term> lexical features </term> , including jointly conditioning
other,53-1-N03-1033,ak consecutive words , ( iii ) effective use of <term> priors </term> in <term> conditional loglinear models
model,55-1-N03-1033,ak effective use of <term> priors </term> in <term> conditional loglinear models </term> , and ( iv ) fine-grained modeling
tech,7-2-N03-1033,ak these ideas together , the resulting <term> tagger </term> gives a 97.24 % <term> accuracy </term>
measure(ment),12-2-N03-1033,ak <term> tagger </term> gives a 97.24 % <term> accuracy </term> on the <term> Penn Treebank WSJ </term>
tool,15-2-N03-1033,ak 97.24 % <term> accuracy </term> on the <term> Penn Treebank WSJ </term> , an <term> error reduction </term> of
other,20-2-N03-1033,ak <term> Penn Treebank WSJ </term> , an <term> error reduction </term> of 4.4 % on the best previous single
other,30-2-N03-1033,ak 4.4 % on the best previous single <term> automatically learned tagging result </term> . Sources of <term> training data </term>
hide detail