H91-1053 about 10% was observed using parameter smoothing with prior densities estimated
J95-3002 is derived. The effects of the parameter smoothing techniques on the robust learning
W08-0124 number of classes is small, no parameter smoothing is needed. For the cases where
J95-3002 are first estimated by various parameter smoothing methods (Good 1953; Katz 1987
H91-1053 Bayesian learning applied to HMM parameter smoothing had an overall 10% reduction
H92-1036 adaptation, speaker group modeling, parameter smoothing and corrective training. Tested
P06-1023 of an annotated parse tree. 3 Parameter Smoothing We extracted the grammar from
H91-1053 serves as a unified approach for parameter smoothing, speaker adaptation, speaker
W03-1201 This illustrates the advantage of parameter smoothing. Bayesian Marginal Probs: corgi
H91-1053 serve as a unified approach for parameter smoothing, speaker adaptation, and speaker
H91-1053 preliminary results applying to HMM parameter smoothing, speaker adaptation, and speaker
H91-1053 smoothing: Since the goal of parameter smoothing is to obtain robust HMM parameters
N09-1069 performed for 20 iterations. No parameter smoothing was used. All runs used a fixed
H91-1052 retraining (adaptation), and parameter smoothing. Experimentally, this approach
H92-1036 two types of applications: parameter smoothing and adaptation learning. For
J95-3002 and real tasks. The effects of parameter smoothing for null events with Turing's
H92-1031 They show that it is useful for parameter smoothing as well as for speaker adaptation
H91-1053 speaker adaptation. Therefore parameter smoothing and model adaptation in which
H91-1053 model with the prior densities. * Parameter smoothing: Since the goal of parameter
P06-1055 enabling us to do more SM cycles. Parameter smoothing leads to even better accuracy
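Several of the H91-1053 contexts above describe Bayesian smoothing of HMM parameters toward prior densities. As a minimal sketch of that general idea, not the paper's exact procedure: a posterior-mean estimate under a Dirichlet prior pulls sparsely observed multinomial counts toward a prior distribution. The function name, the prior_strength value, and the toy counts below are illustrative assumptions.

```python
import numpy as np

def bayes_smooth(counts, prior_mean, prior_strength=10.0):
    """Posterior-mean estimate of multinomial parameters under a
    Dirichlet prior (a simple form of Bayesian parameter smoothing).

    counts: observed event counts (e.g., an HMM output distribution).
    prior_mean: the prior density the estimate is smoothed toward
                (e.g., from a speaker-independent model).
    prior_strength: assumed pseudo-count mass controlling how strongly
                    sparse estimates are pulled toward the prior.
    """
    counts = np.asarray(counts, dtype=float)
    alpha = prior_strength * np.asarray(prior_mean)   # Dirichlet pseudo-counts
    return (counts + alpha) / (counts.sum() + alpha.sum())

# A sparsely observed distribution: zero counts receive nonzero
# probability because the prior redistributes mass toward them.
print(bayes_smooth([8, 1, 0, 0], [0.4, 0.3, 0.2, 0.1]))
```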
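The J95-3002 hits cite Good (1953) and Katz (1987) for smoothing null events. A minimal sketch of the Good-Turing idea (not that paper's implementation): the adjusted count is r* = (r + 1) N_{r+1} / N_r, where N_r is the number of event types seen exactly r times, and the mass N_1 / N is reserved for unseen (null) events. The fallback for counts with empty N_{r+1} is a simplifying assumption; full Good-Turing would smooth the N_r curve instead.

```python
from collections import Counter

def good_turing(counts):
    """Simple Good-Turing reestimation of event probabilities."""
    N = sum(counts.values())
    freq_of_freq = Counter(counts.values())           # N_r for each count r
    p_unseen = freq_of_freq.get(1, 0) / N             # mass for null events
    smoothed = {}
    for event, r in counts.items():
        n_r, n_r1 = freq_of_freq[r], freq_of_freq.get(r + 1, 0)
        r_star = (r + 1) * n_r1 / n_r if n_r1 else r  # fall back on raw count
        smoothed[event] = r_star / N
    return smoothed, p_unseen

probs, p0 = good_turing(Counter("abracadabra"))       # toy event counts
print(probs, p0)                                      # p0 = N_1 / N = 2/11
```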