other,31-2-H90-1060,bq |
amount of
<term>
speech
</term>
from a few
<term>
|
speakers
|
</term>
instead of the traditional practice
|
#17019
First, we present a new paradigm for speaker-independent (SI) training of hidden Markov models (HMM), which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers. |
measure(ment),1-8-H90-1060,bq |
the
<term>
target speaker
</term>
. Each
<term>
|
reference model
|
</term>
is transformed to the
<term>
space
</term>
|
#17171
Each reference model is transformed to the space of the target speaker and combined by averaging. |
other,31-3-H90-1060,bq |
the
<term>
speech data
</term>
from many
<term>
|
speakers
|
</term>
prior to
<term>
training
</term>
. With
|
#17065
In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. |
measure(ment),14-4-H90-1060,bq |
recognition
</term>
, we achieved a 7.5 %
<term>
|
word error rate
|
</term>
on a standard
<term>
grammar
</term>
|
#17084
With only 12 training speakers for SI recognition, we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus. |
other,30-6-H90-1060,bq |
speech
</term>
from the new ( target )
<term>
|
speaker
|
</term>
. A
<term>
probabilistic spectral mapping
|
#17149
Second, we show a significant improvement for speaker adaptation (SA) using the new SI corpus and a small amount of speech from the new (target) speaker. |
measure(ment),12-9-H90-1060,bq |
</term>
for
<term>
adaptation
</term>
, the
<term>
|
error rate
|
</term>
dropped to 4.1 % --- a 45 % reduction
|
#17199
Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result. |
lr,22-4-H90-1060,bq |
a standard
<term>
grammar
</term>
and
<term>
|
test set
|
</term>
from the
<term>
DARPA Resource Management
|
#17092
With only 12 training speakers for SI recognition, we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus. |
other,24-9-H90-1060,bq |
dropped to 4.1 % --- a 45 % reduction in
<term>
|
error
|
</term>
compared to the
<term>
SI
</term>
result
|
#17211
Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result. |
other,9-7-H90-1060,bq |
is estimated independently for each
<term>
|
training ( reference ) speaker
|
</term>
and the
<term>
target speaker
</term>
|
#17160
A probabilistic spectral mapping is estimated independently for each training (reference) speaker and the target speaker. |
tech,15-8-H90-1060,bq |
target speaker
</term>
and combined by
<term>
|
averaging
|
</term>
. Using only 40
<term>
utterances
</term>
|
#17185
Each reference model is transformed to the space of the target speaker and combined by averaging. |
other,6-9-H90-1060,bq |
40
<term>
utterances
</term>
from the
<term>
|
target speaker
|
</term>
for
<term>
adaptation
</term>
, the
<term>
|
#17193
Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result. |
lr,16-6-H90-1060,bq |
adaptation ( SA )
</term>
using the new
<term>
|
SI corpus
|
</term>
and a small amount of
<term>
speech
|
#17135
Second, we show a significant improvement for speaker adaptation (SA) using the new SI corpus and a small amount of speech from the new (target) speaker. |
measure(ment),1-5-H90-1060,bq |
Resource Management corpus
</term>
. This
<term>
|
performance
|
</term>
is comparable to our best condition
|
#17102
This performance is comparable to our best condition for this test suite, using 109 training speakers. |
tech,14-2-H90-1060,bq |
speaker-independent ( SI ) training
</term>
of
<term>
|
hidden Markov models ( HMM )
|
</term>
, which uses a large amount of
<term>
|
#17002
First, we present a new paradigm for speaker-independent (SI) training of hidden Markov models (HMM), which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers. |
other,7-1-H90-1060,bq |
paper reports on two contributions to
<term>
|
large vocabulary continuous speech recognition
|
</term>
. First , we present a new paradigm
|
#16982
This paper reports on two contributions to large vocabulary continuous speech recognition. |
lr,23-6-H90-1060,bq |
corpus
</term>
and a small amount of
<term>
|
speech
|
</term>
from the new ( target )
<term>
speaker
|
#17142
Second, we show a significant improvement for speaker adaptation (SA) using the new SI corpus and a small amount of speech from the new (target) speaker. |
tech,1-7-H90-1060,bq |
( target )
<term>
speaker
</term>
. A
<term>
|
probabilistic spectral mapping
|
</term>
is estimated independently for each
|
#17152
A probabilistic spectral mapping is estimated independently for each training (reference) speaker and the target speaker. |
lr-prod,26-4-H90-1060,bq |
</term>
and
<term>
test set
</term>
from the
<term>
|
DARPA Resource Management corpus
|
</term>
. This
<term>
performance
</term>
is
|
#17096
With only 12 training speakers for SI recognition, we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus. |
other,3-9-H90-1060,bq |
<term>
averaging
</term>
. Using only 40
<term>
|
utterances
|
</term>
from the
<term>
target speaker
</term>
|
#17190
Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1% --- a 45% reduction in error compared to the SI result. |
other,13-3-H90-1060,bq |
speakers
</term>
is done by averaging the
<term>
|
statistics
|
</term>
of
<term>
independently trained models
|
#17047
In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. |