tech,4-1-C04-1080,bq effect on both tasks . We present a new <term> HMM tagger </term> that exploits <term> context </term>
other,8-1-C04-1080,bq <term> HMM tagger </term> that exploits <term> context </term> on both sides of a <term> word </term>
other,14-1-C04-1080,bq <term> context </term> on both sides of a <term> word </term> to be tagged , and evaluate it in
other,25-1-C04-1080,bq tagged , and evaluate it in both the <term> unsupervised and supervised case </term> . Along the way , we present the
tech,11-2-C04-1080,bq first comprehensive comparison of <term> unsupervised methods for part-of-speech tagging </term> , noting that published results to
lr,28-2-C04-1080,bq date have not been comparable across <term> corpora </term> or <term> lexicons </term> . Observing
lr,30-2-C04-1080,bq comparable across <term> corpora </term> or <term> lexicons </term> . Observing that the quality of the
lr,6-3-C04-1080,bq Observing that the quality of the <term> lexicon </term> greatly impacts the <term> accuracy
measure(ment),10-3-C04-1080,bq <term> lexicon </term> greatly impacts the <term> accuracy </term> that can be achieved by the <term>
tech,17-3-C04-1080,bq </term> that can be achieved by the <term> algorithms </term> , we present a method of <term> HMM
tech,24-3-C04-1080,bq algorithms </term> , we present a method of <term> HMM training </term> that improves <term> accuracy </term>
measure(ment),28-3-C04-1080,bq <term> HMM training </term> that improves <term> accuracy </term> when training of <term> lexical probabilities
other,32-3-C04-1080,bq <term> accuracy </term> when training of <term> lexical probabilities </term> is unstable . Finally , we show how
tech,7-4-C04-1080,bq unstable . Finally , we show how this new <term> tagger </term> achieves state-of-the-art results
tech,13-4-C04-1080,bq achieves state-of-the-art results in a <term> supervised , non-training intensive framework </term> . Past work of <term> generating referring
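The records above annotate an abstract about an HMM part-of-speech tagger. As a point of reference for the terms involved (transition model, lexical probabilities, decoding), here is a minimal sketch of a standard first-order HMM tagger with Viterbi decoding over hand-set toy probabilities. This is not the paper's method: the annotated abstract describes a tagger that also exploits context on the *right* of the word being tagged, which this first-order sketch does not model, and all probability tables here are invented for illustration.

```python
# Minimal first-order HMM POS tagger with Viterbi decoding.
# All probabilities are toy values chosen by hand for this sketch.
from math import log

TAGS = ["DET", "NOUN", "VERB"]

# Transition model P(tag | previous tag); "<s>" marks sentence start.
trans = {
    "<s>":  {"DET": 0.8,  "NOUN": 0.1, "VERB": 0.1},
    "DET":  {"DET": 0.05, "NOUN": 0.9, "VERB": 0.05},
    "NOUN": {"DET": 0.1,  "NOUN": 0.2, "VERB": 0.7},
    "VERB": {"DET": 0.6,  "NOUN": 0.3, "VERB": 0.1},
}

# Lexical probabilities P(word | tag) -- the quantity the abstract
# says is unstable to train when the lexicon is poor.
emit = {
    "DET":  {"the": 0.9, "a": 0.1},
    "NOUN": {"dog": 0.5, "walk": 0.2, "cat": 0.3},
    "VERB": {"walks": 0.5, "walk": 0.3, "barks": 0.2},
}

NEG_INF = float("-inf")

def viterbi(words):
    """Return the most likely tag sequence for `words` under the toy HMM."""
    # V[i][tag] = (log prob of best path ending in tag at position i, backpointer)
    V = [{}]
    for t in TAGS:
        p = trans["<s>"][t] * emit[t].get(words[0], 0.0)
        V[0][t] = (log(p) if p > 0 else NEG_INF, None)
    for i, w in enumerate(words[1:], start=1):
        V.append({})
        for t in TAGS:
            e = emit[t].get(w, 0.0)
            scored = []
            for pt in TAGS:
                step = trans[pt][t] * e
                score = V[i - 1][pt][0] + (log(step) if step > 0 else NEG_INF)
                scored.append((score, pt))
            V[i][t] = max(scored)
    # Trace back from the best final state.
    tag = max(V[-1], key=lambda t: V[-1][t][0])
    path = [tag]
    for i in range(len(words) - 1, 0, -1):
        tag = V[i][tag][1]
        path.append(tag)
    return list(reversed(path))

print(viterbi(["the", "dog", "walks"]))  # -> ['DET', 'NOUN', 'VERB']
```

Extending this to the two-sided context described in the abstract would require conditioning each tag on neighbors to both the left and the right rather than the left-to-right chain used here.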