tech,0-1-P05-1056,ak <term> Sentence boundary detection </term> in <term> speech </term> is important for enriching <term> speech recognition output </term> , making it easier for humans to read and downstream modules to process .
other,4-1-P05-1056,ak <term> Sentence boundary detection </term> in <term> speech </term> is important for enriching <term> speech recognition output </term> , making it easier for humans to read and downstream modules to process .
other,9-1-P05-1056,ak <term> Sentence boundary detection </term> in <term> speech </term> is important for enriching <term> speech recognition output </term> , making it easier for humans to read and downstream modules to process .
tech,7-2-P05-1056,ak In previous work , we have developed <term> hidden Markov model ( HMM ) and maximum entropy ( Maxent ) classifiers </term> that integrate <term> textual and prosodic knowledge sources </term> for detecting <term> sentence boundaries </term> .
other,22-2-P05-1056,ak In previous work , we have developed <term> hidden Markov model ( HMM ) and maximum entropy ( Maxent ) classifiers </term> that integrate <term> textual and prosodic knowledge sources </term> for detecting <term> sentence boundaries </term> .
other,29-2-P05-1056,ak In previous work , we have developed <term> hidden Markov model ( HMM ) and maximum entropy ( Maxent ) classifiers </term> that integrate <term> textual and prosodic knowledge sources </term> for detecting <term> sentence boundaries </term> .
tech,10-3-P05-1056,ak In this paper , we evaluate the use of a <term> conditional random field ( CRF ) </term> for this task and relate results with this <term> model </term> to our prior work .
tech,24-3-P05-1056,ak In this paper , we evaluate the use of a <term> conditional random field ( CRF ) </term> for this task and relate results with this <term> model </term> to our prior work .
lr,4-4-P05-1056,ak We evaluate across two <term> corpora </term> ( conversational telephone speech and broadcast news speech ) on both <term> human transcriptions </term> and <term> speech recognition output </term> .
other,16-4-P05-1056,ak We evaluate across two <term> corpora </term> ( conversational telephone speech and broadcast news speech ) on both <term> human transcriptions </term> and <term> speech recognition output </term> .
other,19-4-P05-1056,ak We evaluate across two <term> corpora </term> ( conversational telephone speech and broadcast news speech ) on both <term> human transcriptions </term> and <term> speech recognition output </term> .
model,4-5-P05-1056,ak In general , our <term> CRF model </term> yields a lower <term> error rate </term> than the <term> HMM and Maxent models </term> on the <term> NIST sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note that the best results are achieved by <term> three-way voting </term> among the <term> classifiers </term> .
measure(ment),9-5-P05-1056,ak In general , our <term> CRF model </term> yields a lower <term> error rate </term> than the <term> HMM and Maxent models </term> on the <term> NIST sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note that the best results are achieved by <term> three-way voting </term> among the <term> classifiers </term> .
model,13-5-P05-1056,ak In general , our <term> CRF model </term> yields a lower <term> error rate </term> than the <term> HMM and Maxent models </term> on the <term> NIST sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note that the best results are achieved by <term> three-way voting </term> among the <term> classifiers </term> .
other,19-5-P05-1056,ak In general , our <term> CRF model </term> yields a lower <term> error rate </term> than the <term> HMM and Maxent models </term> on the <term> NIST sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note that the best results are achieved by <term> three-way voting </term> among the <term> classifiers </term> .
other,25-5-P05-1056,ak In general , our <term> CRF model </term> yields a lower <term> error rate </term> than the <term> HMM and Maxent models </term> on the <term> NIST sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note that the best results are achieved by <term> three-way voting </term> among the <term> classifiers </term> .
tech,40-5-P05-1056,ak In general , our <term> CRF model </term> yields a lower <term> error rate </term> than the <term> HMM and Maxent models </term> on the <term> NIST sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note that the best results are achieved by <term> three-way voting </term> among the <term> classifiers </term> .
tech,44-5-P05-1056,ak In general , our <term> CRF model </term> yields a lower <term> error rate </term> than the <term> HMM and Maxent models </term> on the <term> NIST sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note that the best results are achieved by <term> three-way voting </term> among the <term> classifiers </term> .
model,5-6-P05-1056,ak This probably occurs because each <term> model </term> has different strengths and weaknesses for modeling the <term> knowledge sources </term> .
other,14-6-P05-1056,ak This probably occurs because each <term> model </term> has different strengths and weaknesses for modeling the <term> knowledge sources </term> .