| label | id | annotator | snippet |
|---|---|---|---|
| tech | 9-9-H90-1060 | ak | the <term> target speaker </term> for <term> adaptation </term> , the <term> error rate </term> dropped |
| other | 10-5-H90-1060 | ak | comparable to our best condition for this <term> test suite </term> , using 109 <term> training speakers |
| tech | 6-4-H90-1060 | ak | 12 <term> training speakers </term> for <term> SI recognition </term> , we achieved a 7.5 % <term> word error |
| model | 14-2-H90-1060 | ak | speaker-independent ( SI ) training </term> of <term> hidden Markov models ( HMM ) </term> , which uses a large amount of <term> |
| other | 26-6-H90-1060 | ak | amount of <term> speech </term> from the <term> new ( target ) speaker </term> . A <term> probabilistic spectral mapping |
| other | 16-7-H90-1060 | ak | reference ) speaker </term> and the <term> target speaker </term> . Each <term> reference model </term> |
| tech | 7-1-H90-1060 | ak | paper reports on two contributions to <term> large vocabulary continuous speech recognition </term> . First , we present a new paradigm |
| other | 44-2-H90-1060 | ak | of using a little speech from many <term> speakers </term> . In addition , combination of the |
| other | 15-5-H90-1060 | ak | <term> test suite </term> , using 109 <term> training speakers </term> . Second , we show a significant |
| lr-prod | 26-4-H90-1060 | ak | </term> and <term> test set </term> from the <term> DARPA Resource Management corpus </term> . This performance is comparable |
| tech | 33-3-H90-1060 | ak | many <term> speakers </term> prior to <term> training </term> . With only 12 <term> training speakers |
| lr | 16-6-H90-1060 | ak | adaptation ( SA ) </term> using the new <term> SI corpus </term> and a small amount of <term> speech |
| other | 10-8-H90-1060 | ak | is transformed to the space of the <term> target speaker </term> and combined by averaging . Using |
| lr | 20-4-H90-1060 | ak | word error rate </term> on a standard <term> grammar </term> and <term> test set </term> from the <term> |
| other | 9-7-H90-1060 | ak | is estimated independently for each <term> training ( reference ) speaker </term> and the <term> target speaker </term> |
| measure(ment) | 12-9-H90-1060 | ak | </term> for <term> adaptation </term> , the <term> error rate </term> dropped to 4.1 % --- a 45 % reduction |
| other | 6-9-H90-1060 | ak | 40 <term> utterances </term> from the <term> target speaker </term> for <term> adaptation </term> , the <term> |
| other | 3-4-H90-1060 | ak | <term> training </term> . With only 12 <term> training speakers </term> for <term> SI recognition </term> , we |
| lr | 27-2-H90-1060 | ak | </term> , which uses a large amount of <term> speech </term> from a few <term> speakers </term> instead |
| lr | 26-3-H90-1060 | ak | usual <term> pooling </term> of all the <term> speech data </term> from many <term> speakers </term> prior |
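The rows above follow a simple raw layout: a concept label, a row id of the form position-sentence-docid, what appears to be annotator initials ("ak"), and a truncated context snippet with inline <term> ... </term> markup. The sketch below is a minimal, illustrative parser under those assumptions; the field names (label, row_id, annotator, snippet) and the AnnotationRow/parse_row helpers are hypothetical and not part of the original data.

```python
import re
from dataclasses import dataclass

# Assumed raw row layout: <label>,<position-sentence-docid>,<annotator> <snippet>
ROW_RE = re.compile(r"^(?P<label>[^,]+),(?P<row_id>[^,]+),(?P<annotator>\S+)\s+(?P<snippet>.*)$")
# Complete <term> ... </term> spans inside the snippet (truncated spans at the
# window edge have no closing tag and are therefore not matched).
TERM_RE = re.compile(r"<term>\s*(.*?)\s*</term>")

@dataclass
class AnnotationRow:
    label: str        # e.g. "tech", "model", "lr", "lr-prod", "measure(ment)", "other"
    row_id: str       # e.g. "9-9-H90-1060"
    annotator: str    # e.g. "ak" (assumed to be annotator initials)
    snippet: str      # context window with inline <term> markup
    terms: list[str]  # surface forms of the fully marked terms

def parse_row(line: str) -> AnnotationRow:
    m = ROW_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognised row: {line!r}")
    snippet = m.group("snippet")
    return AnnotationRow(
        label=m.group("label"),
        row_id=m.group("row_id"),
        annotator=m.group("annotator"),
        snippet=snippet,
        terms=TERM_RE.findall(snippet),
    )

# Example with one row from the table above.
row = parse_row("tech,6-4-H90-1060,ak 12 <term> training speakers </term> for "
                "<term> SI recognition </term> , we achieved a 7.5 % <term> word error")
print(row.label, row.row_id, row.terms)
# tech 6-4-H90-1060 ['training speakers', 'SI recognition']
# (the trailing "<term> word error" is cut off by the context window, so it is not extracted)
```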