We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features.
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
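To make the ingredients concrete, here is a minimal sketch in Python of how a local conditional loglinear tagging model with both-sided tag context and a Gaussian prior fits together. This is not the authors' implementation; the function names (`context`, `score`, `log_prob`, `objective`), the feature templates, and the toy usage below are illustrative assumptions.

```python
"""Sketch of a local conditional loglinear POS-tagging model,
assuming hypothetical names and toy feature templates."""

import math
from collections import defaultdict


def context(words, tags, i):
    """Context predicates echoing the abstract's ideas: the preceding
    AND following tag (the dependency-network idea), lexical features
    over consecutive words, and coarse unknown-word cues (suffix,
    capitalization)."""
    w = words[i]
    return [
        f"w={w}",
        f"t-1={tags[i - 1] if i > 0 else '<S>'}",
        f"t+1={tags[i + 1] if i + 1 < len(tags) else '</S>'}",
        f"w-1,w={(words[i - 1] if i > 0 else '<S>')},{w}",  # joint lexical context
        f"suf3={w[-3:]}",                                   # unknown-word feature
        f"cap={int(w[:1].isupper())}",
    ]


def score(weights, words, tags, i, tag):
    """Unnormalized log score: each (predicate, candidate tag) pair
    is one weighted feature."""
    return sum(weights[f"{p}&t={tag}"] for p in context(words, tags, i))


def log_prob(weights, words, tags, i, tagset):
    """Local log P(t_i | both-sided context), normalized over the
    tagset; during training the neighboring tags are the observed ones."""
    log_z = math.log(sum(math.exp(score(weights, words, tags, i, t))
                         for t in tagset))
    return score(weights, words, tags, i, tags[i]) - log_z


def objective(weights, data, tagset, sigma2=1.0):
    """Penalized conditional log-likelihood; the quadratic penalty is
    the Gaussian prior ('effective use of priors')."""
    ll = sum(log_prob(weights, words, tags, i, tagset)
             for words, tags in data
             for i in range(len(words)))
    penalty = sum(v * v for v in weights.values()) / (2 * sigma2)
    return ll - penalty


# Toy usage (assumed data): with zero weights, each local log-prob is
# -log|tagset|, and training would adjust weights to maximize objective().
weights = defaultdict(float)
data = [(["The", "dog", "runs"], ["DT", "NN", "VBZ"])]
print(objective(weights, data, tagset={"DT", "NN", "VBZ"}))
```

At test time the paper scores a whole sequence as the product of these local models over all positions and decodes the best tag sequence; that search step is omitted from the sketch.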