We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features.
Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
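As a rough illustration of idea (i) combined with idea (iii) — not the paper's actual model, feature set, or learned weights — a conditional loglinear scorer that conditions a tag on both the preceding and the following tag, plus the word itself, might be sketched as follows. The `WEIGHTS` table, the toy tagset, and the function name `loglinear_prob` are all hypothetical, chosen only for demonstration:

```python
import math

# Hypothetical, hand-picked feature weights for demonstration only.
# A real tagger would learn these from the Penn Treebank WSJ, with a
# prior (e.g. Gaussian) regularizing the weights during training.
WEIGHTS = {
    ("prev=DT", "NN"): 1.2,   # determiner before -> likely a noun
    ("next=VBZ", "NN"): 0.8,  # verb after -> likely a noun subject
    ("word=dog", "NN"): 2.0,
    ("word=dog", "VB"): 0.1,
}

def loglinear_prob(word, prev_tag, next_tag, tagset):
    """P(tag | word, prev_tag, next_tag) under a conditional loglinear model.

    Note that the conditioning context includes BOTH the preceding and
    the following tag, which is the bidirectional-context idea above.
    """
    scores = {}
    for tag in tagset:
        feats = [f"prev={prev_tag}", f"next={next_tag}", f"word={word}"]
        scores[tag] = sum(WEIGHTS.get((f, tag), 0.0) for f in feats)
    # Softmax normalization over the tagset.
    z = sum(math.exp(s) for s in scores.values())
    return {t: math.exp(s) / z for t, s in scores.items()}

probs = loglinear_prob("dog", "DT", "VBZ", ["NN", "VB"])
```

In this toy setup the noun reading dominates because all three context features vote for NN; the actual paper resolves the cyclic dependency among neighboring tags with a dependency network and inference over the whole sequence, which this single-position sketch does not attempt.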