N03-1033
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features.
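Point (iii), priors in conditional loglinear models, usually refers to placing a Gaussian prior on the feature weights, which amounts to adding an L2 penalty to the conditional log-likelihood. A minimal sketch of that idea on a binary toy problem (not the paper's tagger; all function names and the data here are illustrative assumptions):

```python
import numpy as np

def penalized_nll(w, X, y, sigma2=1.0):
    """Negative conditional log-likelihood of a binary loglinear model
    plus a Gaussian-prior term ||w||^2 / (2 * sigma^2). Labels y are +/-1."""
    margins = y * (X @ w)
    nll = np.sum(np.logaddexp(0.0, -margins))  # stable log(1 + exp(-m))
    prior = np.dot(w, w) / (2.0 * sigma2)
    return nll + prior

def fit(X, y, sigma2=1.0, lr=0.1, steps=500):
    """Gradient descent on the penalized objective (illustrative only)."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        margins = y * (X @ w)
        p = 1.0 / (1.0 + np.exp(margins))          # sigmoid(-margin)
        grad = -(X * (y * p)[:, None]).sum(axis=0) + w / sigma2
        w -= lr * grad / n
    return w

# Toy separable data: a tighter prior (smaller sigma^2) shrinks the weights.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w_weak = fit(X, y, sigma2=10.0)   # weak prior: larger weights
w_strong = fit(X, y, sigma2=0.1)  # strong prior: weights pulled toward zero
```

The practical point the abstract makes is that choosing the prior variance well controls overfitting when the feature set is large, as it is with the broad lexical features in (ii).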