We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features.
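Point (iii) above, using priors in conditional loglinear models, is commonly realized as maximum-a-posteriori training with a zero-mean Gaussian prior over the feature weights, which adds an L2 penalty to the negative log-likelihood. The sketch below illustrates that penalized objective for a generic conditional loglinear (softmax) model; the function name, toy feature matrix, and the single variance hyperparameter `sigma2` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def loglinear_nll(W, X, y, sigma2=None):
    """Negative log-likelihood of a conditional loglinear model
    p(y|x) proportional to exp(x . w_y), optionally penalized by a
    zero-mean Gaussian prior on the weights (an L2 penalty)."""
    scores = X @ W                                   # (n_examples, n_classes)
    scores -= scores.max(axis=1, keepdims=True)      # stabilize the softmax
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(y)), y].sum()     # data term
    if sigma2 is not None:
        # Gaussian prior: -log p(W) = sum(W^2) / (2 * sigma2) + const
        nll += (W ** 2).sum() / (2.0 * sigma2)
    return nll

# Toy usage: two examples, two classes, identity features.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 1])
W = np.eye(2)
unpenalized = loglinear_nll(W, X, y)
penalized = loglinear_nll(W, X, y, sigma2=0.5)
```

A smaller `sigma2` means a tighter prior and a larger penalty on nonzero weights, which is the mechanism the abstract credits with controlling overfitting when many lexical features are used.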