H89-1016 | effectiveness of our extension of the | corrective training | algorithm to speaker-independent |
H89-2039 | and recognition 4. Some form of | corrective training | to improve word discrimination |
H89-1016 | rate by 24-44%. We modified the | corrective training | algorithm [1] for |
H90-1060 | model for confusable words. No | corrective training | was used for the BYBLOS results |
H90-1063 | have only applied our discrete | corrective training | algorithm [15] |
H89-1016 | that our extension of the IBM | corrective training | algorithm to continuous speech |
H90-1063 | improvement. Finally, we investigated | corrective training | for semicontinuous models. At |
H90-1039 | model. In some forms, including | corrective training | [2], it is performed |
H89-2008 | phonological rules, and a new | corrective training | procedure. Lexical Expansion |
H90-1039 | problem. Evidence showing that | corrective training | inserts the training language |
H89-1016 | while significant, suggests that | corrective training | becomes overly specialized for |
H89-1016 | continuous speech recognition. | Corrective training | reduced SPHINX's error rate |
H89-1016 | [1] introduced the | corrective training | algorithm for HMMs as an alternative |
H90-1063 | considers top N codewords, while the | corrective training | uses only the top 1 codeword |
H89-2039 | is primarily due to the use of | corrective training | and inter-word units. OVERALL |
H89-2037 | triphone models [9] and | corrective training | [10] were not used |
H89-1024 | problem and become, in effect, | corrective training | [12] on the test |
H89-2032 | modeling, interword CD PLUs, | corrective training | , and multiple lexical entry |
H90-1058 | sex-specific recognition models and from | corrective training | . SSI explored tree-structured |
H89-1016 | The third and final stage uses | corrective training | to refine the discriminatory |