tech,6-1-N03-1001,bq |
This paper describes a method for
<term>
utterance classification
</term>
that does not require
<term>
manual transcription
</term>
of
<term>
training data
</term>
.
|
#2211
This paper describes a method for utterance classification that does not require manual transcription of training data. |
other,12-1-N03-1001,bq |
This paper describes a method for
<term>
utterance classification
</term>
that does not require
<term>
manual transcription
</term>
of
<term>
training data
</term>
.
|
#2217
This paper describes a method for utterance classification that does not require manual transcription of training data. |
lr,15-1-N03-1001,bq |
This paper describes a method for
<term>
utterance classification
</term>
that does not require
<term>
manual transcription
</term>
of
<term>
training data
</term>
.
|
#2220
This paper describes a method for utterance classification that does not require manual transcription of training data. |
model,3-2-N03-1001,bq |
The method combines
<term>
domain independent acoustic models
</term>
with off-the-shelf
<term>
classifiers
</term>
to give
<term>
utterance classification performance
</term>
that is surprisingly close to what can be achieved using conventional
<term>
word-trigram recognition
</term>
requiring
<term>
manual transcription
</term>
.
|
#2226
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
tech,9-2-N03-1001,bq |
The method combines
<term>
domain independent acoustic models
</term>
with off-the-shelf
<term>
classifiers
</term>
to give
<term>
utterance classification performance
</term>
that is surprisingly close to what can be achieved using conventional
<term>
word-trigram recognition
</term>
requiring
<term>
manual transcription
</term>
.
|
#2232
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
tech,26-2-N03-1001,bq |
The method combines
<term>
domain independent acoustic models
</term>
with off-the-shelf
<term>
classifiers
</term>
to give
<term>
utterance classification performance
</term>
that is surprisingly close to what can be achieved using conventional
<term>
word-trigram recognition
</term>
requiring
<term>
manual transcription
</term>
.
|
#2249
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
other,29-2-N03-1001,bq |
The method combines
<term>
domain independent acoustic models
</term>
with off-the-shelf
<term>
classifiers
</term>
to give
<term>
utterance classification performance
</term>
that is surprisingly close to what can be achieved using conventional
<term>
word-trigram recognition
</term>
requiring
<term>
manual transcription
</term>
.
|
#2252
The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. |
tech,4-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2259
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
other,18-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2273
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
other,21-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2276
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
tech,23-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2278
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
model,26-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2281
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
tech,32-3-N03-1001,bq |
In our method ,
<term>
unsupervised training
</term>
is first used to train a
<term>
phone n-gram model
</term>
for a particular
<term>
domain
</term>
; the
<term>
output
</term>
of
<term>
recognition
</term>
with this
<term>
model
</term>
is then passed to a
<term>
phone-string classifier
</term>
.
|
#2287
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
measure(ment),1-4-N03-1001,bq |
The
<term>
classification accuracy
</term>
of the
<term>
method
</term>
is evaluated on three different
<term>
spoken language system domains
</term>
.
|
#2291
The classification accuracy of the method is evaluated on three different spoken language system domains. |
tech,5-4-N03-1001,bq |
The
<term>
classification accuracy
</term>
of the
<term>
method
</term>
is evaluated on three different
<term>
spoken language system domains
</term>
.
|
#2295
The classification accuracy of the method is evaluated on three different spoken language system domains. |
other,11-4-N03-1001,bq |
The
<term>
classification accuracy
</term>
of the
<term>
method
</term>
is evaluated on three different
<term>
spoken language system domains
</term>
.
|
#2301
The classification accuracy of the method is evaluated on three different spoken language system domains. |