… % reduction in error. We discuss maximum a posteriori estimation of continuous density hidden Markov models (CDHMM).
The classical MLE reestimation algorithms, namely the forward-backward algorithm and the segmental k-means algorithm, are expanded and reestimation formulas are given for HMM with Gaussian mixture observation densities.
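The forward-backward reestimation named here is built on the forward recursion over state occupancies. As a minimal sketch (not the paper's CDHMM formulation: this uses a discrete-observation HMM, and the two-state model parameters below are purely illustrative), the forward pass computes the sequence likelihood as follows:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward pass of the forward-backward algorithm.

    pi : (S,) initial state probabilities
    A  : (S, S) transition matrix, A[i, j] = P(next = j | current = i)
    B  : (S, V) emission matrix, B[i, k] = P(symbol k | state i)
    obs: sequence of observation indices
    Returns alpha (T, S) and the sequence likelihood P(obs | model).
    """
    S, T = len(pi), len(obs)
    alpha = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]               # initialisation
    for t in range(1, T):                      # induction
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha, alpha[-1].sum()              # termination

# Illustrative two-state, two-symbol model.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],
              [0.1, 0.9]])
alpha, likelihood = forward(pi, A, B, [0, 1, 1])
```

The backward pass is the mirror-image recursion; together the two yield the state and mixture occupancy counts that the reestimation formulas accumulate.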
Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling and corrective training.
New experimental results on all four applications are provided to show the effectiveness of the MAP estimation approach. It is well-known that there are …
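As a concrete illustration of the MAP estimation approach, consider the simplest case: estimating a single Gaussian mean with known variance under a conjugate Gaussian prior. This is a sketch under those simplifying assumptions (the prior weight `tau` and the data below are illustrative, not taken from the paper's experiments):

```python
import numpy as np

def map_mean(data, prior_mean, tau):
    """MAP estimate of a Gaussian mean under a conjugate Gaussian prior.

    With known variance, the MAP estimate interpolates between the prior
    mean and the sample (ML) mean, weighted by the prior count tau and
    the number of observations:

        mu_MAP = (tau * prior_mean + sum(data)) / (tau + len(data))
    """
    data = np.asarray(data, dtype=float)
    return (tau * prior_mean + data.sum()) / (tau + len(data))

# With few adaptation samples the estimate stays close to the prior mean;
# as data accumulates it converges to the sample mean.
mu_few = map_mean([2.0, 2.2], prior_mean=0.0, tau=10.0)
mu_many = map_mean([2.0] * 10000, prior_mean=0.0, tau=10.0)
```

This interpolation is what lets Bayesian learning cover smoothing and adaptation uniformly: with little data the prior dominates, and with abundant data the estimate approaches the MLE.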