N06-2050 extracts could be utterances, too. Utterance selection is useful. First, it could
W14-0205 Section 3.2 we describe how the utterance selection is performed every time the virtual
P09-1062 summarization model, where it is used for utterance selection. 3.1 Finding common acoustic
W12-1605 summarization framework compared to utterance selection methods for leveraging context
W06-1643 observations and labels. In the case of utterance selection, the observation sequence x
W12-1605 words and remove redundancies than utterance selection methods? Figure 3 demonstrates
W04-3003 improved using other representative utterance selection algorithms (e.g., selecting
P14-1115 estimation, we tune all parameters (utterance selection and path ranking) exhaustively
W06-1643 relations and exploit them in utterance selection. In the current work, we use
N07-1059 three different strategies of utterance selection from an N-best list. (ER stands
W11-2026 have cited Gricean influence in utterance selection for intelligent tutor systems
N12-1041 We formulate the utterance selection problem as random walk on a directed
A92-1010 it to the user. 8 Controlling utterance selection The IGiNG system intends to produce
W12-2604 context-free summaries, whose utterance selections can be considered somewhat arbit
P08-1054 rich features, we formulated utterance selection as a standard binary classification
P08-1054 MMR was the method of choice for utterance selection in Zechner and Waibel (2000
W11-2026 belief-model synchronization and utterance selection is based on the above maxims
W02-0101 frequencies of occurrence. For utterance selection, the aspects to be considered
W12-2604 have been optimized to perform an utterance selection task that may not necessarily
P08-1054 selection To obtain a trainable utterance selection module that can utilize and compare