other,8-1-H01-1017,bq |
users in robust ,
<term>
mixed-initiative
|
speech
|
dialogue interactions
</term>
which reach
|
#215
To support engaging human users in robust, mixed-initiative speech dialogue interactions which reach beyond current capabilities in dialogue systems, the DARPA Communicator program [1] is funding the development of a distributed message-passing infrastructure for dialogue systems which all Communicator participants are using. |
tech,4-2-H01-1055,bq |
within reach . However , the improved
<term>
|
speech
|
recognition
</term>
has brought to light
|
#934
However, the improved speech recognition has brought to light a new problem: as dialog systems understand more of what the user tells them, they need to be more sophisticated at responding to the user. |
other,26-1-N01-1003,bq |
syntactic structure
</term>
for elementary
<term>
|
speech
|
acts
</term>
and the decision of how to combine
|
#1319
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
other,10-2-N03-1012,bq |
of
<term>
scoring
</term>
alternative
<term>
|
speech
|
recognition hypotheses ( SRH )
</term>
in
|
#2466
We apply our system to the task of scoring alternative speech recognition hypotheses (SRH) in terms of their semantic coherence. |
other,9-1-N03-2003,bq |
language modeling
</term>
of
<term>
conversational
|
speech
|
</term>
are limited . In this paper , we
|
#3024
Sources of training data suitable for language modeling of conversational speech are limited. |
tech,15-3-P03-1030,bq |
enhancing techniques including
<term>
part of
|
speech
|
tagging
</term>
, new
<term>
similarity measures
|
#4109
Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging, new similarity measures and expanded stop lists. |
tech,19-3-P03-1031,bq |
due to the
<term>
ambiguity
</term>
of
<term>
|
speech
|
understanding
</term>
, it is not appropriate
|
#4174
Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance. |
other,8-1-C04-1103,bq |
important role in many
<term>
multilingual
|
speech
|
and language applications
</term>
. In this
|
#5738
Machine transliteration/back-transliteration plays an important role in many multilingual speech and language applications. |
other,12-3-I05-5003,bq |
<term>
PER
</term>
which leverages
<term>
part of
|
speech
|
information
</term>
of the
<term>
words
</term>
|
#8381
We also introduce a novel classification method based on PER which leverages part of speech information of the words contributing to the word matches and non-matches in the sentence. |
tech,36-12-J05-1003,bq |
ranking tasks
</term>
, for example ,
<term>
|
speech
|
recognition
</term>
,
<term>
machine translation
|
#8972
Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation. |
other,70-5-E06-1035,bq |
<term>
cue phrases
</term>
and
<term>
overlapping
|
speech
|
</term>
, are better indicators for the top-level
|
#10597
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. |
tech,16-2-C90-3014,bq |
computational
<term>
phonological system
</term>
:
<term>
|
speech
|
recognition
</term>
and
<term>
synthesis system
|
#16397
The approach of KPSG provides an explicit development model for constructing a computational phonological system: speech recognition and synthesis system. |
other,7-1-H90-1060,bq |
contributions to
<term>
large vocabulary continuous
|
speech
|
recognition
</term>
. First , we present
|
#16985
This paper reports on two contributions to large vocabulary continuous speech recognition. |
tech,5-1-C92-3165,bq |
introduces a robust
<term>
interactive method for
|
speech
|
understanding
</term>
. The
<term>
generalized
|
#18146
This paper introduces a robust interactive method for speech understanding. |
other,35-3-H92-1003,bq |
<term>
utterances
</term>
of
<term>
spontaneous
|
speech
|
</term>
from five sites for use in a
<term>
|
#18598
We summarize the motivation for this effort, the goals, the implementation of a multi-site data collection paradigm, and the accomplishments of MADCOW in monitoring the collection and distribution of 12,000 utterances of spontaneous speech from five sites for use in a multi-site common evaluation of speech, natural language and spoken language. |
other,15-1-H92-1010,bq |
</term><term>
LIMSI
</term>
in the field of
<term>
|
speech
|
processing
</term>
, but also in the related
|
#18632
The paper provides an overview of the research conducted at LIMSI in the field of speech processing, but also in the related areas of Human-Machine Communication, including Natural Language Processing, Non Verbal and Multimodal Communication. |
measure(ment),16-3-H92-1016,bq |
modifications combined to reduce the
<term>
|
speech
|
recognition word and sentence error rates
|
#18753
Together with the use of a larger training set, these modifications combined to reduce the speech recognition word and sentence error rates by a factor of 2.5 and 1.6, respectively, on the October '91 test set. |
tech,17-3-H92-1036,bq |
unified approach for the following four
<term>
|
speech
|
recognition
</term>
applications , namely
|
#19116
Because of its adaptive nature, Bayesian learning serves as a unified approach for the following four speech recognition applications, namely parameter smoothing, speaker adaptation, speaker group modeling and corrective training. |
tool,11-1-H92-1074,bq |
corpus
</term>
represents a new
<term>
DARPA
|
speech
|
recognition technology
</term>
development
|
#19538
The CSR (Connected Speech Recognition) corpus represents a new DARPA speech recognition technology development initiative to advance the state of the art in CSR. |
tech,28-1-H92-1095,bq |
<term>
language understanding
</term>
with
<term>
|
speech
|
recognition
</term>
,
<term>
knowledge-based
|
#19665
Language understanding work at Paramax focuses on applying general-purpose language understanding technology to spoken language understanding, text understanding, and document processing, integrating language understanding with speech recognition, knowledge-based information retrieval and image understanding. |