<term>parser/generator</term>

We present a new <term>part-of-speech tagger</term> that demonstrates the following ideas: (i) explicit use of both preceding and following <term>tag contexts</term> via a <term>dependency network representation</term>, (ii) broad use of <term>lexical features</term>, including jointly conditioning on multiple consecutive words, (iii) effective use of <term>priors</term> in <term>conditional loglinear models</term>, and (iv) fine-grained modeling of <term>unknown word features</term>. Using these ideas together, the resulting tagger gives a 97.24% <term>accuracy</term> on the <term>Penn Treebank WSJ</term>, an <term>error reduction</term> of 4.4% on the best previous single <term>automatically learned tagging result</term>. Sources of <term>training data</term>
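The combination of ideas (i) and (iii) above, a local classifier that conditions on both the preceding and the following tag context, trained as a conditional loglinear (maximum-entropy) model with a Gaussian prior on the weights, can be illustrated with a minimal sketch. This is not the paper's implementation; the tag set, feature names, and toy data below are assumptions for illustration only.

```python
# Minimal sketch of a conditional loglinear tag classifier with left and
# right tag-context features and a Gaussian prior (L2 penalty on weights).
# Toy tags, features, and training data are illustrative assumptions.
import math
from collections import defaultdict

TAGS = ["DT", "NN", "VB"]

def features(word, prev_tag, next_tag):
    # A lexical feature plus preceding and following tag-context features.
    return [f"w={word}", f"prev={prev_tag}", f"next={next_tag}"]

def probs(weights, feats):
    # Softmax over per-tag scores: P(tag | features).
    scores = {t: sum(weights[(t, f)] for f in feats) for t in TAGS}
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def train(data, sigma2=10.0, lr=0.5, epochs=200):
    # Batch gradient ascent on the penalized conditional log-likelihood.
    weights = defaultdict(float)
    for _ in range(epochs):
        grad = defaultdict(float)
        for word, prev_tag, next_tag, gold in data:
            feats = features(word, prev_tag, next_tag)
            p = probs(weights, feats)
            for f in feats:
                grad[(gold, f)] += 1.0          # empirical count
                for t in TAGS:
                    grad[(t, f)] -= p[t]        # expected count
        for k in set(weights) | set(grad):
            # Gaussian prior: shrink each weight toward zero.
            weights[k] += lr * (grad[k] - weights[k] / sigma2)
    return weights

# Hypothetical training tuples: (word, previous tag, next tag, gold tag).
data = [
    ("the", "<s>", "NN", "DT"),
    ("dog", "DT", "VB", "NN"),
    ("barks", "NN", "</s>", "VB"),
    ("cat", "DT", "VB", "NN"),
]
w = train(data)
p = probs(w, features("cat", "DT", "VB"))
best = max(p, key=p.get)
```

The prior keeps weights small so rare feature conjunctions do not dominate; the paper's full model additionally decodes over a cyclic dependency network rather than classifying each position independently.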