Given the development of <term> storage media and networks </term> , one could simply record and store a <term> conversation </term> for documentation .
The question is , however , how an interesting piece of information could be found in such a <term> large database </term> .
Traditional <term> information retrieval techniques </term> use a <term> histogram </term> of <term> keywords </term> as the <term> document representation </term> , but <term> oral communication </term> may offer additional <term> indices </term> such as the time and place of the rejoinder and the attendance .
Several extensions of this basic idea are being discussed and/or evaluated : similar to activities , one can define subsets of a larger <term> database </term> and detect them automatically , as is shown on a large <term> database </term> of <term> TV shows </term> .
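A minimal sketch of the keyword-histogram representation described above; the tokenizer and stopword list are illustrative assumptions, not taken from any of these systems:

```python
from collections import Counter

# Illustrative stopword list; a real IR system would use a much fuller one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

def keyword_histogram(document: str) -> Counter:
    """Represent a document as a histogram mapping each keyword to its frequency."""
    tokens = (t.strip(".,;:!?") for t in document.lower().split())
    return Counter(t for t in tokens if t and t not in STOPWORDS)

doc = "The meeting discussed retrieval of conversations stored in a large database."
print(keyword_histogram(doc))  # e.g. Counter({'meeting': 1, 'discussed': 1, ...})
```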
To support engaging human users in robust , <term> mixed-initiative speech dialogue interactions </term> which reach beyond current capabilities in <term> dialogue systems </term> , the <term> DARPA Communicator program </term> [ 1 ] is funding the development of a <term> distributed message-passing infrastructure </term> for <term> dialogue systems </term> , which all <term> Communicator </term> participants are using .
In this presentation , we describe the features of and <term> requirements </term> for a genuinely useful <term> software infrastructure </term> for this purpose .
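A toy sketch of a hub-style message-passing arrangement between dialogue system modules; the real Communicator infrastructure is considerably richer, and every name below is an illustrative assumption:

```python
from typing import Callable

class Hub:
    """Toy message router: named servers exchange attribute-value frames through a hub."""
    def __init__(self) -> None:
        self.servers: dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        self.servers[name] = handler

    def send(self, server: str, frame: dict) -> dict:
        # Route a frame to the named server and return its reply frame.
        return self.servers[server](frame)

hub = Hub()
hub.register("recognizer", lambda f: {**f, "text": "book a flight to boston"})
hub.register("parser", lambda f: {**f, "intent": "book_flight"})

frame = hub.send("recognizer", {"audio": "<waveform>"})
frame = hub.send("parser", frame)
print(frame["intent"])  # -> book_flight
```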
In this paper we show how two standard outputs from <term> information extraction ( IE ) systems </term> - <term> named entity annotations </term> and <term> scenario templates </term> - can be used to enhance access to <term> text collections </term> via a standard <term> text browser </term> .
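One plausible realization, offered only as a sketch (the annotation format and HTML rendering are assumptions, not the mechanism the paper describes): render each named entity annotation as a hyperlink, so a standard text browser can navigate between mentions of the same entity.

```python
# Turn named entity annotations (start offset, end offset, entity id) into
# HTML anchors that an ordinary text browser can follow. Hypothetical format.
def annotate_html(text: str, entities: list[tuple[int, int, str]]) -> str:
    out, pos = [], 0
    for start, end, ent_id in sorted(entities):
        out.append(text[pos:start])
        out.append(f'<a href="#{ent_id}">{text[start:end]}</a>')
        pos = end
    out.append(text[pos:])
    return "".join(out)

print(annotate_html("Pfizer announced a new compound.", [(0, 6, "org-pfizer")]))
# -> <a href="#org-pfizer">Pfizer</a> announced a new compound.
```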
We describe how this information is used in a <term> prototype system </term> designed to support <term> information workers </term> ' access to a <term> pharmaceutical news archive </term> as part of their <term> industry watch </term> function .
We also report results of a preliminary , <term> qualitative user evaluation </term> of the <term> system </term> , which , while broadly positive , indicates that further work needs to be done on the <term> interface </term> to make <term> users </term> aware of the increased potential of <term> IE-enhanced text browsers </term> .
At MIT Lincoln Laboratory , we have been developing a <term> Korean-to-English machine translation system </term> , <term> CCLINC ( Common Coalition Language System at Lincoln Laboratory ) </term> .
The <term> CCLINC Korean-to-English translation system </term> consists of two <term> core modules </term> , <term> language understanding and generation modules </term> mediated by a <term> language neutral meaning representation </term> called a <term> semantic frame </term> .
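A rough sketch of what such a language-neutral meaning representation might look like; the predicate/slot schema below is an illustrative assumption, not CCLINC's actual semantic frame format:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticFrame:
    """Illustrative language-neutral meaning representation."""
    predicate: str                                       # e.g. "deploy"
    slots: dict[str, str] = field(default_factory=dict)  # semantic role -> filler

# Understanding: a Korean parser would fill the frame from source-language text.
frame = SemanticFrame(predicate="deploy",
                      slots={"agent": "coalition forces", "theme": "supplies"})

# Generation: an English generator realizes the same frame as target-language text.
def generate_english(f: SemanticFrame) -> str:
    agent, theme = f.slots.get("agent", "someone"), f.slots.get("theme", "")
    return f"{agent} will {f.predicate} {theme}".strip()

print(generate_english(frame))  # -> coalition forces will deploy supplies
```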
The key features of the <term> system </term> include : ( i ) robust , efficient <term> parsing </term> of <term> Korean </term> ( a <term> verb-final language </term> with <term> overt case markers </term> , relatively <term> free word order </term> , and frequent omission of <term> arguments </term> ) .
This experiment , the first in a series , looks at the <term> intelligibility </term> of <term> MT output </term> .
Subjects were given a set of up to six extracts of <term> translated newswire text </term> .
The subjects were given three minutes per extract to determine whether they believed the sample output to be an <term> expert human translation </term> or a <term> machine translation </term> .
The results of this experiment , along with a preliminary analysis of the factors involved in the decision-making process , will be presented here .
<term> Listen-Communicate-Show ( LCS ) </term> is a new paradigm for <term> human interaction with data sources </term> .
We integrate a <term> spoken language understanding system </term> with <term> intelligent mobile agents </term> that mediate between <term> users </term> and <term> information sources </term> .
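A minimal sketch of that mediation loop; the function names, query format, and toy information source are all assumptions for illustration:

```python
def understand(utterance: str) -> dict:
    """Stand-in for a spoken language understanding system: utterance -> query."""
    return {"action": "find_flights", "destination": utterance.split()[-1].capitalize()}

def agent(query: dict) -> list[str]:
    """Stand-in mobile agent that consults an information source on the user's behalf."""
    flights = {"Boston": ["AA101", "DL202"]}  # toy information source
    return flights.get(query["destination"], [])

print(agent(understand("show flights to boston")))  # -> ['AA101', 'DL202']
```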