We present a new
<term>
part-of-speech tagger
</term>
that demonstrates the following ideas: (i) explicit use of both preceding and following
<term>
tag contexts
</term>
via a
<term>
dependency network representation
</term>
, (ii) broad use of
<term>
lexical features
</term>
, including jointly conditioning on multiple consecutive words, (iii) effective use of
<term>
priors
</term>
in
<term>
conditional loglinear models
</term>
, and (iv) fine-grained modeling of
<term>
unknown word features
</term>
.
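The four ideas above can be illustrated with a minimal sketch of a conditional log-linear local model. This is a toy with hypothetical feature templates and hand-set weights, not anything learned from the Penn Treebank: each candidate tag is scored against features drawn jointly from the preceding tag, the following tag, and the current word, and a Gaussian prior on the weights corresponds to an L2 penalty during training.

```python
import math

def loglinear_tag_probs(features, weights, tags):
    """Conditional log-linear model: score each candidate tag by a
    weighted sum of its active features, then normalize (softmax)."""
    scores = {t: sum(weights.get(f, 0.0) for f in features(t)) for t in tags}
    z = sum(math.exp(s) for s in scores.values())
    return {t: math.exp(s) / z for t, s in scores.items()}

def make_features(prev_tag, next_tag, word):
    """Hypothetical feature templates conditioning on both the left and
    right tag context (idea i) and on the word itself (idea ii)."""
    def features(tag):
        return [
            f"prev={prev_tag}|t={tag}",                  # preceding tag context
            f"next={next_tag}|t={tag}",                  # following tag context
            f"word={word}|t={tag}",                      # lexical feature
            f"prev={prev_tag}|next={next_tag}|t={tag}",  # joint two-sided context
        ]
    return features

def gaussian_prior_penalty(weights, sigma=1.0):
    """A Gaussian prior on the weights (idea iii) amounts to subtracting
    this L2 penalty from the training log-likelihood."""
    return sum(w * w for w in weights.values()) / (2.0 * sigma ** 2)

# Toy hand-set weights, purely illustrative.
weights = {
    "prev=DT|t=NN": 1.2,
    "next=VBZ|t=NN": 0.8,
    "word=dog|t=NN": 2.0,
    "word=dog|t=VB": -0.5,
}

probs = loglinear_tag_probs(make_features("DT", "VBZ", "dog"),
                            weights, ["NN", "VB", "JJ"])
```

With these weights the distribution strongly prefers NN for "dog" between a determiner and a verb; a full dependency-network tagger would combine such local conditional scores over the whole sentence rather than tag each position in isolation.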
Using these ideas together, the resulting
<term>
tagger
</term>
gives a 97.24%
<term>
accuracy
</term>
on the
<term>
Penn Treebank WSJ
</term>
, an
<term>
error reduction
</term>
of 4.4% on the best previous single
<term>
automatically learned tagging result
</term>
.
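The two reported figures can be cross-checked with a few lines of arithmetic: a 97.24% accuracy leaves a 2.76% token error rate, and inverting the 4.4% relative error reduction recovers the error rate, and hence the accuracy, of the previous best result.

```python
# Sanity-check the reported figures.
new_accuracy = 97.24
new_error = 100.0 - new_accuracy        # 2.76% token error
prev_error = new_error / (1.0 - 0.044)  # undo the 4.4% relative reduction
prev_accuracy = 100.0 - prev_error      # accuracy of the previous best tagger
```

This puts the previous best single automatically learned result at roughly 97.11% accuracy, consistent with the claimed improvement.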