We discuss maximum a posteriori estimation of continuous density hidden Markov models (CDHMM).
The classical MLE reestimation algorithms, namely the forward-backward algorithm and the segmental k-means algorithm, are expanded, and reestimation formulas are given for HMMs with Gaussian mixture observation densities.
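The forward-backward recursions referred to above can be sketched in miniature. This is not the paper's Gaussian-mixture formulation: to stay short, a discrete emission table stands in for the mixture densities, and the toy model parameters are illustrative only.

```python
# Minimal forward-backward sketch for a 2-state HMM with discrete
# observations. A discrete emission table B stands in for the Gaussian
# mixture observation densities treated in the paper.

def forward_backward(pi, A, B, obs):
    """Return per-time state posteriors gamma[t][i] = P(q_t = i | obs)."""
    N, T = len(pi), len(obs)
    # Forward pass: alpha[t][i] = P(o_1..o_t, q_t = i).
    alpha = [[0.0] * N for _ in range(T)]
    for i in range(N):
        alpha[0][i] = pi[i] * B[i][obs[0]]
    for t in range(1, T):
        for j in range(N):
            alpha[t][j] = B[j][obs[t]] * sum(
                alpha[t - 1][i] * A[i][j] for i in range(N))
    # Backward pass: beta[t][i] = P(o_{t+1}..o_T | q_t = i).
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(
                A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(N))
    # Combine and normalize into state posteriors.
    gamma = []
    for t in range(T):
        w = [alpha[t][i] * beta[t][i] for i in range(N)]
        z = sum(w)
        gamma.append([x / z for x in w])
    return gamma

# Toy 2-state model: state 0 tends to emit symbol 0, state 1 symbol 1.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
gamma = forward_backward(pi, A, B, [0, 1, 0])
```

The posteriors `gamma` are exactly the quantities accumulated by the reestimation formulas; in the MAP setting the accumulators are combined with prior counts rather than used alone.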
Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling, and corrective training.
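The adaptive behavior that makes Bayesian learning suit all four applications can be seen in the simplest case: a MAP estimate of a single Gaussian mean under a Gaussian prior. This is only a one-dimensional sketch; the prior weight `tau` and the data below are illustrative, not values from the paper.

```python
# Hedged sketch: MAP estimate of a Gaussian mean with a conjugate
# Gaussian prior. With little data the estimate stays near the prior
# mean (e.g. a speaker-independent model); with more data it moves
# toward the ML estimate (the sample mean of the adaptation data).

def map_mean(prior_mean, tau, data):
    """Shrinkage estimate: (tau * prior_mean + sum(data)) / (tau + n)."""
    n = len(data)
    return (tau * prior_mean + sum(data)) / (tau + n)

# Two observations: the estimate is pulled strongly toward the prior.
few = map_mean(0.0, 10.0, [1.0, 1.0])
# A thousand observations: the prior is overwhelmed by the data.
many = map_mean(0.0, 10.0, [1.0] * 1000)
```

This interpolation between prior knowledge and new data is what lets one formalism cover smoothing (weak data), adaptation (moderate data), and training (abundant data).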
New experimental results on all four applications are provided to show the effectiveness of the MAP estimation approach. It is well-known that there are