This paper reports on two contributions to large vocabulary continuous speech recognition. First, we present a new paradigm for speaker-independent (SI) training of hidden Markov models (HMM), which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers. In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. With only 12 training speakers for SI recognition, we achieved a 7.5% word error rate on a standard grammar and test set from the DARPA Resource Management corpus. This performance is comparable to our best condition
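The combination step described above, averaging the statistics of independently trained per-speaker models instead of pooling raw speech before training, can be illustrated with a minimal sketch. This is an assumption-laden example, not the paper's actual estimation formulas: it combines the diagonal-Gaussian sufficient statistics (occupancy count, mean, variance) of one HMM state across speakers, weighting each speaker by occupancy count.

```python
import numpy as np

def combine_speaker_models(per_speaker_stats):
    """Combine per-speaker HMM state statistics by averaging.

    Hypothetical sketch: each entry in per_speaker_stats is a dict with
      'count': state occupancy count from that speaker's training data
      'mean':  observation mean vector for the state
      'var':   diagonal observation variance vector for the state
    Returns the count-weighted combined mean and variance.
    """
    counts = np.array([s["count"] for s in per_speaker_stats], dtype=float)
    means = np.stack([s["mean"] for s in per_speaker_stats])
    varis = np.stack([s["var"] for s in per_speaker_stats])

    w = counts / counts.sum()                    # per-speaker weights
    mean = (w[:, None] * means).sum(axis=0)      # weighted average of means
    # combine second moments E[x^2] = var + mean^2, then recover variance
    second = (w[:, None] * (varis + means**2)).sum(axis=0)
    var = second - mean**2
    return mean, var
```

With equal counts and per-speaker means 0 and 2 (unit variance each), the combined state has mean 1 and variance 2, reflecting both within- and between-speaker spread; pooling the raw data for these statistics would give the same moments, but averaging model statistics lets each speaker's model be trained independently first.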