other,12-1-N03-1001,bq |
This paper describes a method for
<term>
utterance classification
</term>
that does not require
<term>
manual transcription
</term>
of
<term>
training data
</term>
.
|
#2217
This paper describes a method for utterance classification that does not require manual transcription of training data. |
other,29-2-N03-1001,bq |
The method combines
<term>
domain independent acoustic models
</term>
with off-the-shelf
<term>
classifiers
</term>
to give
<term>
utterance classification performance
</term>
that is surprisingly close to what can be achieved using conventional
<term>
word-trigram recognition
</term>
requiring
<term>
manual transcription
</term>
.
|
#2252
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
tech,32-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2287
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
tech,4-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2259
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |