Abstract (E06-1035):

We address the problem of automatically predicting segment boundaries in spoken multiparty dialogue. We extend prior work in two ways. First, we apply approaches that have been proposed for predicting top-level topic shifts to the problem of identifying subtopic boundaries. Second, we explore the impact on performance of using ASR output as opposed to human transcription. Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results; (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best; and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features.
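Point (1) credits a lexical cohesion-based approach alone with competitive results on subtopic boundaries. As a rough illustration only (not the paper's implementation), here is a minimal TextTiling-style segmenter: it compares word distributions on either side of each gap between utterances and posits a boundary at deep local minima of the similarity curve. The window size, threshold, and depth-scoring rule are all illustrative assumptions.

```python
import math
import re
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    denom = math.sqrt(sum(v * v for v in a.values())) * \
            math.sqrt(sum(v * v for v in b.values()))
    return num / denom if denom else 0.0

def lexical_cohesion_boundaries(utterances, window=2, threshold=0.1):
    """Hypothesise segment boundaries at gaps where lexical cohesion
    (similarity of the word bags on either side) dips to a local
    minimum whose depth below the surrounding peaks exceeds the
    threshold."""
    bags = [Counter(re.findall(r"[a-z']+", u.lower())) for u in utterances]
    sims = []
    for gap in range(1, len(bags)):
        left = sum(bags[max(0, gap - window):gap], Counter())
        right = sum(bags[gap:gap + window], Counter())
        sims.append(cosine(left, right))
    boundaries = []
    for i, s in enumerate(sims):
        # Only local minima of the similarity curve are candidates.
        if i > 0 and sims[i - 1] < s:
            continue
        if i < len(sims) - 1 and sims[i + 1] < s:
            continue
        depth = (max(sims[:i + 1]) - s) + (max(sims[i:]) - s)
        if depth > threshold:
            boundaries.append(i + 1)  # boundary falls before utterance i+1
    return boundaries
```

On a toy dialogue whose topic shifts from a budget discussion to a website discussion, the similarity curve dips sharply at the topic change and the segmenter returns that single gap.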
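Point (2) describes a machine learning approach that combines lexical-cohesion and conversational features for top-level boundaries. The sketch below is purely illustrative: the cue-phrase list and feature set are invented for the example, and a plain perceptron stands in for whatever learner the paper actually uses; it only shows the shape of combining a cohesion score with conversational cues such as cue phrases and speaker change.

```python
CUE_PHRASES = {"okay", "so", "anyway", "right"}  # illustrative cue-word list

def gap_features(prev_utt, next_utt, cohesion_sim):
    """Feature vector for the gap between two (speaker, text) turns:
    lexical cohesion across the gap plus simple conversational cues."""
    (spk_a, _), (spk_b, text_b) = prev_utt, next_utt
    words = text_b.lower().split()
    first_word = words[0] if words else ""
    return [
        cohesion_sim,                               # lexical cohesion
        1.0 if first_word in CUE_PHRASES else 0.0,  # cue phrase opens turn
        1.0 if spk_a != spk_b else 0.0,             # speaker change
    ]

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Plain perceptron over the combined features; returns (weights, bias)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b
```

Trained on toy gaps where boundaries combine low cohesion with cue phrases or speaker changes, the perceptron separates boundary from non-boundary gaps; any real system would of course use richer features and a stronger classifier.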