other,6-1-H01-1017,ak users in <term> robust , mixed-initiative speech dialogue interactions </term> which reach
tech,4-2-H01-1055,ak within reach . However , the improved <term> speech recognition </term> has brought to light
other,25-1-N01-1003,ak syntactic structure </term> for <term> elementary speech acts </term> and the decision of how to combine
other,10-2-N03-1012,ak to the task of scoring alternative <term> speech recognition hypotheses ( SRH ) </term> in
other,18-3-N03-1012,ak semantically coherent and incoherent <term> speech recognition hypotheses </term> . An evaluation
other,9-1-N03-2003,ak language modeling </term> of <term> conversational speech </term> are limited . In this paper , we
tech,15-3-P03-1030,ak enhancing techniques including <term> part of speech tagging </term> , new <term> similarity measures
tech,19-3-P03-1031,ak due to the <term> ambiguity </term> of <term> speech understanding </term> , it is not appropriate
other,12-3-I05-5003,ak <term> PER </term> which leverages <term> part of speech information </term> of the words contributing
tech,36-12-J05-1003,ak ranking tasks </term> , for example , <term> speech recognition </term> , <term> machine translation
other,4-1-P05-1056,ak Sentence boundary detection </term> in <term> speech </term> is important for enriching <term> speech
other,9-1-P05-1056,ak speech </term> is important for enriching <term> speech recognition output </term> , making it easier
corpora </term> ( conversational telephone speech and broadcast news speech ) on both <term> human transcriptions </term>
other,19-4-P05-1056,ak <term> human transcriptions </term> and <term> speech recognition output </term> . In general ,
other,25-5-P05-1056,ak sentence boundary detection task </term> in <term> speech </term> , although it is interesting to note
other,70-5-E06-1035,ak <term> cue phrases </term> and <term> overlapping speech </term> , are better indicators for the <term>
other,15-5-E89-1006,ak temporal perspective times </term> , the <term> speech time </term> and the <term> location time </term>
other,27-2-H89-1027,ak our approach attempts to express the <term> speech knowledge </term> within a formal framework
lr,19-3-H89-1027,ak automatically , using a large body of <term> speech data </term> . This paper describes the <term>