annotations: tech,4-1-C04-1080,bq | other,8-1-C04-1080,bq | other,14-1-C04-1080,bq | other,25-1-C04-1080,bq
We present a new <term> HMM tagger </term> that exploits <term> context </term> on both sides of a <term> word </term> to be tagged , and evaluate it in both the <term> unsupervised and supervised case </term> .
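The sentence above describes an HMM tagger that exploits context on both sides of the word being tagged. As a point of reference only, the sketch below shows a standard first-order HMM tagger decoded with Viterbi; the bidirectional-context modelling is specific to the paper and is not reproduced here, and the function name and probability-table arguments are assumptions made for the example.

```python
# Hedged sketch: a plain first-order HMM tagger decoded with Viterbi.
# The paper's tagger conditions on context on BOTH sides of the word;
# this standard left-to-right model does not reproduce that design and
# is shown only to make the "HMM tagger" terminology concrete.
import math

def viterbi(words, tags, start, trans, emit, floor=1e-12):
    """start[t], trans[prev][t], emit[t][word] are probabilities."""
    logp = lambda x: math.log(x if x > 0 else floor)
    V = [{t: logp(start.get(t, 0)) + logp(emit[t].get(words[0], 0)) for t in tags}]
    back = [{}]
    for i, w in enumerate(words[1:], start=1):
        V.append({}); back.append({})
        for t in tags:
            prev, score = max(((p, V[i - 1][p] + logp(trans[p].get(t, 0)))
                               for p in tags), key=lambda x: x[1])
            V[i][t] = score + logp(emit[t].get(w, 0))
            back[i][t] = prev
    seq = [max(V[-1], key=V[-1].get)]            # best final tag
    for i in range(len(words) - 1, 0, -1):       # follow backpointers
        seq.append(back[i][seq[-1]])
    return seq[::-1]

# Toy usage with made-up probabilities (purely illustrative):
tags = {"DT", "NN"}
start = {"DT": 0.7, "NN": 0.3}
trans = {"DT": {"NN": 0.9, "DT": 0.1}, "NN": {"NN": 0.5, "DT": 0.5}}
emit = {"DT": {"the": 0.9}, "NN": {"dog": 0.6}}
print(viterbi(["the", "dog"], tags, start, trans, emit))  # ['DT', 'NN']
```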
annotations: tech,11-2-C04-1080,bq | lr,28-2-C04-1080,bq | lr,30-2-C04-1080,bq
Along the way , we present the first comprehensive comparison of <term> unsupervised methods for part-of-speech tagging </term> , noting that published results to date have not been comparable across <term> corpora </term> or <term> lexicons </term> .
annotations: lr,6-3-C04-1080,bq | measure(ment),10-3-C04-1080,bq | tech,17-3-C04-1080,bq | tech,24-3-C04-1080,bq | measure(ment),28-3-C04-1080,bq | other,32-3-C04-1080,bq
Observing that the quality of the <term> lexicon </term> greatly impacts the <term> accuracy </term> that can be achieved by the <term> algorithms </term> , we present a method of <term> HMM training </term> that improves <term> accuracy </term> when training of <term> lexical probabilities </term> is unstable .
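The sentence above refers to HMM training that remains accurate when the training of lexical (emission) probabilities is unstable. The snippet below is not the method proposed in the paper; it is a hedged illustration of one generic stabilisation idea, interpolating re-estimated emission distributions with a fixed lexicon-derived prior, and all names (`interpolate_emissions`, `lexicon_prior`, `lam`) are assumptions made for the example.

```python
# Hedged illustration only, NOT the training method proposed in C04-1080:
# one generic way to damp unstable emission (lexical) re-estimation is to
# interpolate each EM update with a fixed, lexicon-derived distribution.
def interpolate_emissions(em_counts, lexicon_prior, lam=0.5):
    """em_counts[tag][word]: expected counts from one EM pass.
    lexicon_prior[tag][word]: fixed lexicon-derived distribution.
    lam: weight on the re-estimated counts (1.0 = ignore the prior)."""
    emit = {}
    for tag, counts in em_counts.items():
        total = sum(counts.values()) or 1.0
        prior = lexicon_prior.get(tag, {})
        vocab = set(counts) | set(prior)
        # Each tag's distribution still sums to 1 when both inputs are normalised.
        emit[tag] = {w: lam * counts.get(w, 0.0) / total
                        + (1.0 - lam) * prior.get(w, 0.0)
                     for w in vocab}
    return emit

# Toy usage with made-up numbers:
em_counts = {"NN": {"dog": 3.0, "the": 1.0}}
lexicon_prior = {"NN": {"dog": 0.8, "cat": 0.2}}
print(interpolate_emissions(em_counts, lexicon_prior, lam=0.5))
```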
annotations: tech,7-4-C04-1080,bq | tech,13-4-C04-1080,bq
Finally , we show how this new <term> tagger </term> achieves state-of-the-art results in a <term> supervised , non-training intensive framework </term> .