other,18-3-N03-1012,bq |
semantically coherent and incoherent
<term>
|
speech
|
recognition hypotheses
</term>
. An evaluation
|
#2497
We conducted an annotation experiment and showed that human annotators can reliably differentiate between semantically coherent and incoherent speech recognition hypotheses. |
tech,5-1-C94-1030,bq |
character recognition
</term>
and
<term>
continuous
|
speech
|
recognition
</term>
of a
<term>
natural language
|
#20619
In optical character recognition and continuous speech recognition of a natural language, it has been difficult to detect error characters which are wrongly deleted and inserted. |
other,70-5-E06-1035,bq |
<term>
cue phrases
</term>
and
<term>
overlapping
|
speech
|
</term>
, are better indicators for the top-level
|
#10597
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. |
lr,41-2-H90-1060,bq |
traditional practice of using a little
<term>
|
speech
|
</term>
from many
<term>
speakers
</term>
. In
|
#17029
First, we present a new paradigm for speaker-independent (SI) training of hidden Markov models (HMM), which uses a large amount of speech from a few speakers instead of the traditional practice of using a little speech from many speakers. |
tool,11-1-H92-1074,bq |
corpus
</term>
represents a new
<term>
DARPA
|
speech
|
recognition technology
</term>
development
|
#19538
The CSR (Connected Speech Recognition) corpus represents a new DARPA speech recognition technology development initiative to advance the state of the art in CSR. |
lr,27-3-H90-1060,bq |
than the usual pooling of all the
<term>
|
speech
|
data
</term>
from many
<term>
speakers
</term>
|
#17061
In addition, combination of the training speakers is done by averaging the statistics of independently trained models rather than the usual pooling of all the speech data from many speakers prior to training. |
lr,23-6-H90-1060,bq |
corpus
</term>
and a small amount of
<term>
|
speech
|
</term>
from the new ( target )
<term>
speaker
|
#17142
Second, we show a significant improvement for speaker adaptation (SA) using the new SI corpus and a small amount of speech from the new (target) speaker. |
other,26-2-H94-1034,bq |
<term>
likely repair
</term>
or as
<term>
fluent
|
speech
|
</term>
. Other contextual clues , such as
|
#21332
The tagger is given knowledge about category transitions for speech repairs, and so is able to mark a transition either as a likely repair or as fluent speech. |
other,18-3-H92-1074,bq |
natural grammar
</term>
, and
<term>
spontaneous
|
speech
|
</term>
. This paper presents an overview
|
#19599
The new CSR corpus supports research on major new problems including unlimited vocabulary, natural grammar, and spontaneous speech. |
other,12-3-I05-5003,bq |
<term>
PER
</term>
which leverages
<term>
part of
|
speech
|
information
</term>
of the
<term>
words
</term>
|
#8381
We also introduce a novel classification method based on PER which leverages part of speech information of the words contributing to the word matches and non-matches in the sentence. |
other,10-2-N03-1012,bq |
of
<term>
scoring
</term>
alternative
<term>
|
speech
|
recognition hypotheses ( SRH )
</term>
in
|
#2466
We apply our system to the task of scoring alternative speech recognition hypotheses (SRH) in terms of their semantic coherence. |
measure(ment),16-3-H92-1016,bq |
modifications combined to reduce the
<term>
|
speech
|
recognition word and sentence error rates
|
#18753
Together with the use of a larger training set, these modifications combined to reduce the speech recognition word and sentence error rates by a factor of 2.5 and 1.6, respectively, on the October '91 test set. |
other,35-3-H92-1003,bq |
<term>
utterances
</term>
of
<term>
spontaneous
|
speech
|
</term>
from five sites for use in a
<term>
|
#18598
We summarize the motivation for this effort, the goals, the implementation of a multi-site data collection paradigm, and the accomplishments of MADCOW in monitoring the collection and distribution of 12,000 utterances of spontaneous speech from five sites for use in a multi-site common evaluation of speech, natural language and spoken language systems. |
other,26-1-N01-1003,bq |
syntactic structure
</term>
for elementary
<term>
|
speech
|
acts
</term>
and the decision of how to combine
|
#1319
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
other,8-1-H01-1017,bq |
users in robust ,
<term>
mixed-initiative
|
speech
|
dialogue interactions
</term>
which reach
|
#215
To support engaging human users in robust, mixed-initiative speech dialogue interactions which reach beyond current capabilities in dialogue systems, the DARPA Communicator program [1] is funding the development of a distributed message-passing infrastructure for dialogue systems which all Communicator participants are using. |
tech,17-3-H92-1036,bq |
unified approach for the following four
<term>
|
speech
|
recognition
</term>
applications , namely
|
#19116
Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling and corrective training. |
tool,16-2-H92-1074,bq |
corpus
</term>
that has fueled
<term>
DARPA
|
speech
|
recognition technology
</term>
development
|
#19570
This corpus essentially supersedes the now old Resource Management (RM) corpus that has fueled DARPA speech recognition technology development for the past 5 years. |
tech,5-1-C92-3165,bq |
introduces a robust
<term>
interactive method for
|
speech
|
understanding
</term>
. The
<term>
generalized
|
#18146
This paper introduces a robust interactive method for speech understanding. |
other,7-1-H90-1060,bq |
contributions to
<term>
large vocabulary continuous
|
speech
|
recognition
</term>
. First , we present
|
#16985
This paper reports on two contributions to large vocabulary continuous speech recognition. |
other,8-1-C04-1103,bq |
important role in many
<term>
multilingual
|
speech
|
and language applications
</term>
. In this
|
#5738
Machine transliteration/back-transliteration plays an important role in many multilingual speech and language applications. |