N03-1001 abstract, with term annotations by annotator "ak":

This paper describes a method for <term>utterance classification</term> that does not require <term>manual transcription</term> of <term>training data</term>. The method combines <term>domain independent acoustic models</term> with <term>off-the-shelf classifiers</term> to give <term>utterance classification performance</term> that is surprisingly close to what can be achieved using conventional <term>word-trigram recognition</term> requiring <term>manual transcription</term>. In our method, <term>unsupervised training</term> is first used to train a <term>phone n-gram model</term> for a particular <term>domain</term>; the <term>output</term> of <term>recognition</term> with this <term>model</term> is then passed to a <term>phone-string classifier</term>. The <term>classification accuracy</term> of the method is evaluated on three different <term>spoken language system domains</term>. Motivated by the success of

Annotation records (id, category, word-sentence-document, annotator, term):
#2212 tech,6-1-N03-1001,ak: utterance classification
#2218 tech,12-1-N03-1001,ak: manual transcription
#2221 lr,15-1-N03-1001,ak: training data
#2227 model,3-2-N03-1001,ak: domain independent acoustic models
#2232 tech,8-2-N03-1001,ak: off-the-shelf classifiers
#2236 measure(ment),12-2-N03-1001,ak: utterance classification performance
#2250 tech,26-2-N03-1001,ak: word-trigram recognition
#2253 tech,29-2-N03-1001,ak: manual transcription
#2260 tech,4-3-N03-1001,ak: unsupervised training
#2268 model,12-3-N03-1001,ak: phone n-gram model
#2274 other,18-3-N03-1001,ak: domain
#2277 other,21-3-N03-1001,ak: output
#2279 tech,23-3-N03-1001,ak: recognition
#2282 model,26-3-N03-1001,ak: model
#2288 tech,32-3-N03-1001,ak: phone-string classifier
#2292 (category record missing in this fragment): classification accuracy
#2302 other,11-4-N03-1001,ak: spoken language system domains
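The pipeline the abstract describes (recognize each utterance as a phone string with a domain phone n-gram model, then pass the phone string to a classifier) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the acoustic recognizer is omitted (the recognized phone strings here are hypothetical toy data), and a simple nearest-centroid scorer over phone-bigram counts stands in for the off-the-shelf classifiers.

```python
# Sketch of phone-string utterance classification.
# Assumptions: toy ARPAbet-like phone strings stand in for recognizer output;
# NearestCentroid is a stand-in for an off-the-shelf classifier.
from collections import Counter
from math import sqrt

def phone_ngrams(phones, n=2):
    """Count phone n-grams in a recognized phone string (list of phones)."""
    return Counter(tuple(phones[i:i + n]) for i in range(len(phones) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class NearestCentroid:
    """Classify an utterance by the closest per-class n-gram centroid."""
    def fit(self, features, labels):
        self.centroids = {}
        for f, y in zip(features, labels):
            self.centroids.setdefault(y, Counter()).update(f)
        return self

    def predict(self, f):
        return max(self.centroids, key=lambda y: cosine(self.centroids[y], f))

# Hypothetical recognized phone strings with utterance-class labels.
train = [
    (["b", "ae", "l", "ah", "n", "s"], "balance"),
    (["ah", "k", "aw", "n", "t", "b", "ae", "l"], "balance"),
    (["t", "r", "ae", "n", "s", "f", "er"], "transfer"),
    (["t", "r", "ae", "n", "s", "f", "er", "z"], "transfer"),
]
clf = NearestCentroid().fit([phone_ngrams(p) for p, _ in train],
                            [y for _, y in train])
print(clf.predict(phone_ngrams(["b", "ae", "l", "n", "s"])))  # prints "balance"
```

The design point this sketch mirrors is that the classifier never sees words, only phone strings, which is why no manual transcription of training data is needed.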