H90-1060

First, we describe a new speaker-independent (SI) training corpus, which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers. SI training of the models for these speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. With only 12 training speakers for SI recognition, we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus. This performance is comparable to our best condition for this test suite, using 109 training speakers.

Second, we show a significant improvement for speaker adaptation (SA) using the new SI corpus and a small amount of speech from the new (target) speaker. A mapping is estimated for each training (reference) speaker and the target speaker. Each reference model is transformed to the space of the target speaker and combined by averaging. Using only 40 utterances from the target speaker for adaptation, the error rate dropped to 4.1%, a 45% reduction in error compared to the SI result.
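A minimal sketch of the two ideas above, under stated assumptions: each per-speaker model is reduced to single-Gaussian feature statistics (a stand-in for full HMM statistics), and the mapping into the target speaker's space is an illustrative per-dimension affine transform estimated from a small adaptation sample, not the paper's actual mapping. Function names and data shapes are hypothetical. The SI model is formed by averaging the independently trained reference models; the SA model first transforms each reference model to the target space and then averages.

import numpy as np

def train_speaker_model(features):
    # Per-speaker "model": single-Gaussian statistics (mean, variance)
    # over feature vectors, standing in for full HMM statistics.
    return {"mean": features.mean(axis=0), "var": features.var(axis=0)}

def average_models(models):
    # SI training by averaging the statistics of independently trained
    # models, instead of pooling all speech data before training.
    return {
        "mean": np.mean([m["mean"] for m in models], axis=0),
        "var": np.mean([m["var"] for m in models], axis=0),
    }

def transform_to_target(ref_model, ref_adapt, tgt_adapt):
    # Map one reference speaker's model into the target speaker's space
    # with a per-dimension affine transform estimated from a small amount
    # of adaptation speech (an assumed stand-in for the paper's mapping).
    scale = tgt_adapt.std(axis=0) / (ref_adapt.std(axis=0) + 1e-8)
    mean = tgt_adapt.mean(axis=0) + scale * (ref_model["mean"] - ref_adapt.mean(axis=0))
    return {"mean": mean, "var": ref_model["var"] * scale ** 2}

def adapt_to_target(ref_models, ref_adapt_data, tgt_adapt):
    # Speaker adaptation: transform every reference model to the target
    # speaker's space, then combine the transformed models by averaging.
    transformed = [transform_to_target(m, r, tgt_adapt)
                   for m, r in zip(ref_models, ref_adapt_data)]
    return average_models(transformed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 12 reference speakers, 200 frames of 13-dim features each (synthetic).
    ref_data = [rng.normal(loc=i * 0.2, scale=1.0, size=(200, 13)) for i in range(12)]
    ref_models = [train_speaker_model(f) for f in ref_data]
    si_model = average_models(ref_models)                      # speaker-independent model
    tgt_adapt = rng.normal(loc=3.0, scale=1.4, size=(40, 13))  # small adaptation sample
    sa_model = adapt_to_target(ref_models, ref_data, tgt_adapt)

The averaging step is the same in both cases; only the inputs differ (untransformed reference models for SI, target-space-transformed models for SA), which mirrors the contrast the text draws between pooling data and combining independently trained models.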