We address appropriate <term> user modeling </term> in order to generate <term> cooperative responses </term> to each <term> user </term> in <term> spoken dialogue systems </term> .
A method is described of combining the two styles in a single system in order to take advantage of the strengths of each .
Its selection of <term> fonts </term> and specification of <term> character </term> size are dynamically changeable , and the <term> typing location </term> can also be changed in lateral or longitudinal directions . Each <term> character </term> has its own width , and a line 's length is counted as the sum of the widths of its <term> character </term> s .
The system identifies the strength of <term> antecedence recovery </term> for each of the <term> lexical substitutions </term> , and matches them against the <term> strength of potential antecedence </term> of each element in the <term> text </term> to select the proper <term> substitutions </term> for these elements .
As each new <term> edge </term> is added to the <term> chart </term> , the algorithm checks only the topmost of the <term> edges </term> adjacent to it , rather than all such <term> edges </term> as in conventional treatments .
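The adjacency check described above can be sketched in Python ; the `Chart` class and its stack-per-vertex bookkeeping are illustrative assumptions , not the paper's actual data structures — a real chart parser would also track categories and dotted rules :

```python
# Sketch: when a new edge is added, inspect only the topmost previously
# added edge that ends where the new one starts, instead of scanning
# every adjacent edge as conventional treatments do.

class Chart:
    def __init__(self):
        # edges_ending_at[v] is a stack; its last element is the "topmost" edge.
        self.edges_ending_at = {}

    def add_edge(self, start, end, label):
        edge = (start, end, label)
        # Conventional treatments would examine all of edges_ending_at[start];
        # here only the topmost adjacent edge is considered for combination.
        adjacent = self.edges_ending_at.get(start, [])
        topmost = adjacent[-1] if adjacent else None
        self.edges_ending_at.setdefault(end, []).append(edge)
        return topmost  # the single edge the algorithm would try to combine with

chart = Chart()
chart.add_edge(0, 1, "Det")
chart.add_edge(1, 2, "N")
combined_with = chart.add_edge(2, 3, "V")  # sees only the topmost edge ending at 2
```

The saving is that each insertion touches one adjacent edge rather than the whole adjacency list .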
The <term> intentional structure </term> captures the <term> discourse-relevant purposes </term> expressed in each of the <term> linguistic segments </term> , as well as relationships among them .
The <term> attentional state </term> , being dynamic , records the objects , properties , and relations that are salient at each point of the <term> discourse </term> .
We take a selection of both <term> bag-of-words and segment order-sensitive string comparison methods </term> , and run each over both <term> character - and word-segmented data </term> , in combination with a range of <term> local segment contiguity models </term> ( in the form of <term> N-grams </term> ) .
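A minimal sketch of the local segment contiguity models ( N-grams ) run over character - and word-segmented data ; the helper name and the example sentence are hypothetical :

```python
def ngrams(tokens, n):
    """All contiguous length-n segments of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

sentence = "string comparison methods"
# Character-segmented data: each character is a token.
char_bigrams = ngrams(list(sentence.replace(" ", "")), 2)
# Word-segmented data: each whitespace-delimited word is a token.
word_bigrams = ngrams(sentence.split(), 2)
```

The same comparison method can then be run over either token stream , varying only N .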
According to our assumption , most of the words with similar <term> context features </term> in each author 's <term> corpus </term> tend not to be <term> synonymous expressions </term> .
We train a <term> maximum entropy classifier </term> that , given a pair of <term> sentences </term> , can reliably determine whether or not they are <term> translations </term> of each other .
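A maximum entropy ( logistic regression ) classifier over sentence pairs can be sketched as follows ; the two features ( length ratio and shared-token fraction ) and the toy training pairs are invented for illustration and are not the actual feature set :

```python
import math

def features(src, tgt):
    # Hypothetical features: sentence-length ratio and shared-token fraction.
    s, t = src.split(), tgt.split()
    ratio = min(len(s), len(t)) / max(len(s), len(t))
    shared = len(set(s) & set(t)) / max(len(s), len(t))
    return [1.0, ratio, shared]          # leading 1.0 is the bias feature

def train(pairs, labels, lr=0.5, epochs=500):
    # Stochastic gradient ascent on the log-likelihood of a binary maxent model.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (src, tgt), y in zip(pairs, labels):
            x = features(src, tgt)
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

def is_translation(w, src, tgt):
    x = features(src, tgt)
    score = sum(wi * xi for wi, xi in zip(w, x))
    return score >= 0.0                  # i.e. p >= 0.5

# Toy training pairs (invented): label 1 means the pair are translations.
pairs = [("report 1990 final", "rapport 1990 final"), ("a b c d e", "z")]
labels = [1, 0]
w = train(pairs, labels)
```

Given a new pair , `is_translation` thresholds the model probability at 0.5 to decide whether the sentences are translations of each other .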
Since multiple <term> candidates </term> for the <term> understanding </term> result can be obtained for a <term> user utterance </term> due to the <term> ambiguity </term> of <term> speech understanding </term> , it is not appropriate to decide on a single <term> understanding result </term> after each <term> user utterance </term> .
A <term> probabilistic spectral mapping </term> is estimated independently for each <term> training ( reference ) speaker </term> and the <term> target speaker </term> . Each <term> reference model </term> is transformed to the <term> space </term> of the <term> target speaker </term> and combined by <term> averaging </term> .
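The transform-and-average step can be sketched as follows , under the simplifying assumptions that each reference model is a mean vector and each estimated mapping reduces to a per-speaker scale and offset ( both assumptions are illustrative , not the actual probabilistic spectral mapping ) :

```python
def transform(model, scale, offset):
    # Apply one speaker's (hypothetical) estimated mapping, moving the
    # reference model into the target speaker's space.
    return [scale * m + offset for m in model]

def combine(reference_models, mappings):
    # Transform every reference model, then average them component-wise.
    mapped = [transform(m, a, b) for m, (a, b) in zip(reference_models, mappings)]
    dim = len(mapped[0])
    return [sum(v[i] for v in mapped) / len(mapped) for i in range(dim)]
```

Each mapping is estimated independently per training ( reference ) speaker , so `mappings` carries one ( scale , offset ) pair per reference model .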
Thus we have implemented a <term> blackboard-like architecture </term> in which individual <term> partial theories </term> can be encoded as separate modules that can interact to propose candidate <term> antecedents </term> and to evaluate each other 's proposals .
The operation of the <term> system </term> will be explained in depth through browsing the <term> repository </term> of <term> data objects </term> created by the <term> system </term> during each <term> question answering session </term> .
The <term> oracle </term> knows the <term> reference word string </term> and selects the <term> word string </term> with the best <term> performance </term> ( typically , <term> word or semantic error rate </term> ) from a list of <term> word strings </term> , where each <term> word string </term> has been obtained by using a different <term> LM </term> .
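The oracle selection can be sketched with a word-level edit distance ; the function names are hypothetical , and a real evaluation would normalize errors by reference length to get a word error rate :

```python
def word_errors(hyp, ref):
    # Levenshtein distance over words: substitutions + insertions + deletions.
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(h)][len(r)]

def oracle_select(reference, candidates):
    # The oracle knows the reference word string and picks the candidate
    # (one per LM) with the fewest word errors.
    return min(candidates, key=lambda w: word_errors(w, reference))
```

Each candidate word string in `candidates` would come from decoding with a different LM .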
For <term> pragmatics processing </term> , we describe how the method of <term> abductive inference </term> is inherently robust , in that an interpretation is always possible , so that in the absence of the required <term> world knowledge </term> , performance degrades gracefully . Each of these techniques has been evaluated and the results of the evaluations are presented .
Although every <term> natural language system </term> needs a <term> computational lexicon </term> , each <term> system </term> puts different amounts and types of information into its <term> lexicon </term> according to its individual needs .
Because a <term> speaker </term> and <term> listener </term> can not be assured to have the same <term> beliefs </term> , <term> contexts </term> , <term> perceptions </term> , <term> backgrounds </term> , or <term> goals </term> , at each point in a <term> conversation </term> , difficulties and mistakes arise when a <term> listener </term> interprets a <term> speaker 's utterance </term> .
This paper discusses a <term> method of analyzing metaphors </term> based on the existence of a small number of <term> generalized metaphor mappings </term> . Each <term> generalized metaphor </term> contains a <term> recognition network </term> , a <term> basic mapping </term> , additional <term> transfer mappings </term> , and an <term> implicit intention component </term> .
The base <term> parser </term> produces a set of <term> candidate parses </term> for each input <term> sentence </term> , with associated <term> probabilities </term> that define an initial <term> ranking </term> of these <term> parses </term> .
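The initial ranking step can be sketched as a sort over the base parser 's output ; the pair representation is an illustrative assumption :

```python
def initial_ranking(candidates):
    # candidates: list of (parse, probability) pairs produced by the base
    # parser for one input sentence. The associated probabilities define
    # the initial ranking, best parse first.
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

ranked = initial_ranking([("parse_a", 0.1), ("parse_b", 0.7), ("parse_c", 0.2)])
```

A reranker would then reorder this list using features beyond the base parser 's probabilities .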