Given the development of <term> storage media and networks </term> , one could simply record and store a <term> conversation </term> for documentation .
To support engaging human users in robust , <term> mixed-initiative speech dialogue interactions </term> which reach beyond current capabilities in <term> dialogue systems </term> , the <term> DARPA Communicator program </term> [ 1 ] is funding the development of a <term> distributed message-passing infrastructure </term> for <term> dialogue systems </term> which all <term> Communicator </term> participants are using .
In this presentation , we describe the features of and <term> requirements </term> for a genuinely useful <term> software infrastructure </term> for this purpose .
Having been trained on <term> Korean newspaper articles </term> on missiles and chemical biological warfare , the <term> system </term> produces <term> translation output </term> sufficient for content understanding of the <term> original document </term> .
The purpose of this research is to test the efficacy of applying <term> automated evaluation techniques </term> , originally devised for the <term> evaluation </term> of <term> human language learners </term> , to the <term> output </term> of <term> machine translation ( MT ) systems </term> .
<term> Listen-Communicate-Show ( LCS ) </term> is a new paradigm for <term> human interaction with data sources </term> .
The request is passed to a <term> mobile , intelligent agent </term> for execution at the appropriate <term> database </term> .
We provide experimental results that clearly show the need for a <term> dynamic language model combination </term> to improve the <term> performance </term> further .
We describe a three-tiered approach for <term> evaluation </term> of <term> spoken dialogue systems </term> .
This paper proposes a practical approach employing <term> n-gram models </term> and <term> error-correction rules </term> for <term> Thai key prediction </term> and <term> Thai-English language identification </term> .
<term> Sentence planning </term> is a set of inter-related but distinct tasks , one of which is <term> sentence scoping </term> , i.e. the choice of <term> syntactic structure </term> for elementary <term> speech acts </term> and the decision of how to combine them into one or more <term> sentences </term> .
In this paper , we present <term> SPoT </term> , a <term> sentence planner </term> , and a new methodology for automatically training <term> SPoT </term> on the basis of <term> feedback </term> provided by <term> human judges </term> .
First , a very simple , <term> randomized sentence-plan-generator ( SPG ) </term> generates a potentially large list of possible <term> sentence plans </term> for a given <term> text-plan input </term> .
The <term> non-deterministic parsing choices </term> of the <term> main parser </term> for a <term> language L </term> are directed by a <term> guide </term> which uses the <term> shared derivation forest </term> output by a prior <term> RCL parser </term> for a suitable <term> superset of L </term> .
While <term> paraphrasing </term> is critical both for <term> interpretation and generation of natural language </term> , current systems use manual or semi-automatic methods to collect <term> paraphrases </term> .
We present an <term> unsupervised learning algorithm </term> for <term> identification of paraphrases </term> from a <term> corpus of multiple English translations </term> of the same <term> source text </term> .
This paper presents a <term> formal analysis </term> for a large class of <term> words </term> called <term> alternative markers </term> , which includes <term> other ( than ) </term> , <term> such ( as ) </term> , and <term> besides </term> .
<term> Techniques for automatically training </term> modules of a <term> natural language generator </term> have recently been proposed , but a fundamental concern is whether the <term> quality </term> of <term> utterances </term> produced with <term> trainable components </term> can compete with <term> hand-crafted template-based or rule-based approaches </term> .