The question , however , is how an interesting piece of information can be found in a <term> large database </term> .
In this paper we show how two standard outputs from <term> information extraction ( IE ) systems </term> - <term> named entity annotations </term> and <term> scenario templates </term> - can be used to enhance access to <term> text collections </term> via a standard <term> text browser </term> .
We describe how this information is used in a <term> prototype system </term> designed to support <term> information workers </term> ' access to a <term> pharmaceutical news archive </term> as part of their <term> industry watch </term> function .
We show how research in <term> generation </term> can be adapted to <term> dialog systems </term> , and how the high cost of hand-crafting <term> knowledge-based generation systems </term> can be overcome by employing <term> machine learning techniques </term> .
<term> Sentence planning </term> is a set of inter-related but distinct tasks , one of which is <term> sentence scoping </term> , i.e. the choice of <term> syntactic structure </term> for elementary <term> speech acts </term> and the decision of how to combine them into one or more <term> sentences </term> .
In this paper , we show not only how <term> training data </term> can be supplemented with <term> text </term> from the <term> web </term> filtered to match the <term> style </term> and/or <term> topic </term> of the target <term> recognition task </term> , but also that it is possible to get larger performance gains from the <term> data </term> by using <term> class-dependent interpolation </term> of <term> N-grams </term> .
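As a rough sketch of what class-dependent interpolation of N-grams can look like (the specific weighting scheme below is an assumption for illustration, not taken from the paper), the interpolation weight is tied to the class of the predicted word:

$$ P(w_i \mid h) \;=\; \lambda_{c(w_i)} \, P_{\mathrm{in}}(w_i \mid h) \;+\; \bigl(1 - \lambda_{c(w_i)}\bigr) \, P_{\mathrm{web}}(w_i \mid h) $$

where $h$ is the N-gram history, $c(w_i)$ is the class of the predicted word, $P_{\mathrm{in}}$ and $P_{\mathrm{web}}$ are the in-domain and web-data language models, and each $\lambda_c$ would be tuned on held-out data.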
The demonstration will focus on how <term> JAVELIN </term> processes <term> questions </term> and retrieves the most likely <term> answer candidates </term> from the given <term> text corpus </term> .
Finally , we show how this new <term> tagger </term> achieves state-of-the-art results in a <term> supervised , non-training intensive framework </term> .
We demonstrate how errors in the <term> machine translations </term> of the input <term> Arabic documents </term> can be corrected by identifying and generating from such <term> redundancy </term> , focusing on <term> noun phrases </term> .
Experimental results are presented that demonstrate how the proposed <term> method </term> allows better generalization from the <term> training data </term> .
We incorporate this analysis into a <term> diagnostic tool </term> intended for <term> developers </term> of <term> machine translation systems </term> , and demonstrate how our application can be used by <term> developers </term> to explore <term> patterns </term> in <term> machine translation output </term> .
The strength of our <term> approach </term> is that it allows a <term> tree </term> to be represented as an arbitrary set of <term> features </term> , without concerns about how these <term> features </term> interact or overlap and without the need to define a <term> derivation </term> or a <term> generative model </term> which takes these <term> features </term> into account .
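A minimal sketch of the idea of treating a tree as an arbitrary set of features, with no derivation or generative model over them; the (label, children) encoding and the two feature templates are illustrative assumptions, not the paper's feature set.

```python
# Minimal sketch: collect an arbitrary set of features from a parse tree.
def tree_features(node):
    """Collect a set of features from a (label, children) tree."""
    feats = set()
    label, children = node
    if children:
        # CFG-rule-style feature: parent label with the sequence of child labels.
        feats.add(("rule", label, tuple(child[0] for child in children)))
        for child in children:
            # Parent-child pair feature.
            feats.add(("parent-child", label, child[0]))
            feats |= tree_features(child)
    return feats

# Toy tree for (S (NP the cat) (VP sat)).
tree = ("S", [("NP", [("the", []), ("cat", [])]), ("VP", [("sat", [])])])
print(tree_features(tree))
```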
We show how <term> sampling </term> can be used to reduce the <term> retrieval time </term> by orders of magnitude with no loss in <term> translation quality </term> .
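A minimal sketch of the sampling idea, under the assumption that translation options are extracted from occurrences of a source phrase in an indexed corpus; the function and parameter names are hypothetical, not taken from the paper.

```python
import random

def sample_occurrences(occurrences, max_samples=100, seed=0):
    """Return at most `max_samples` occurrences, drawn uniformly at random.

    Capping the number of occurrences used for extraction keeps retrieval
    time bounded regardless of how frequent the source phrase is.
    """
    occurrences = list(occurrences)
    if len(occurrences) <= max_samples:
        return occurrences
    return random.Random(seed).sample(occurrences, max_samples)

# Usage: positions of a very frequent phrase in a large bitext index.
positions = range(1_000_000)
print(len(sample_occurrences(positions)))  # -> 100
```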
Using <term> alignment techniques </term> from <term> phrase-based statistical machine translation </term> , we show how <term> paraphrases </term> in one <term> language </term> can be identified using a <term> phrase </term> in another language as a pivot .
We define a <term> paraphrase probability </term> that allows <term> paraphrases </term> extracted from a <term> bilingual parallel corpus </term> to be ranked using <term> translation probabilities </term> , and show how it can be refined to take <term> contextual information </term> into account .
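The pivot idea behind the two sentences above is commonly written as marginalising over foreign phrases f aligned to the original phrase e1 (the notation here is an assumption):

$$ p(e_2 \mid e_1) \;\approx\; \sum_{f} p(e_2 \mid f) \, p(f \mid e_1) $$

where the conditional probabilities on the right are translation probabilities estimated from the bilingual parallel corpus; the resulting score is what ranks candidate paraphrases, and it can further be conditioned on contextual information.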
First , we investigate how well the <term> addressee </term> of a <term> dialogue act </term> can be predicted based on <term> gaze </term> , <term> utterance </term> and <term> conversational context features </term> .
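A minimal sketch of how such addressee prediction can be framed as supervised classification over gaze, utterance and context features; the feature set, toy values and the use of scikit-learn are assumptions for illustration only, not the paper's setup.

```python
from sklearn.linear_model import LogisticRegression

# Each row: [seconds gazing at A, seconds gazing at B,
#            utterance length in words, previous speaker was A (0/1)]
X = [
    [2.1, 0.3, 12, 1],
    [0.2, 1.8,  5, 0],
    [1.7, 0.5,  9, 0],
    [0.1, 2.4,  4, 1],
]
y = ["A", "B", "A", "B"]  # gold addressee of each dialogue act

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.5, 0.4, 8, 1]]))  # likely predicts "A"
```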
Furthermore , we will show how some <term> evaluation measures </term> can be improved by the introduction of <term> word-dependent substitution costs </term> .
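A minimal sketch of an edit-distance-style evaluation measure with word-dependent substitution costs; how the costs are actually defined is left open here, and the uniform cost used in the example is just a placeholder.

```python
def weighted_word_error(ref, hyp, sub_cost):
    """Weighted Levenshtein distance between token lists `ref` and `hyp`.

    Insertions and deletions cost 1; substituting r by h costs sub_cost(r, h),
    which can be made cheaper for near-synonyms or morphological variants.
    """
    n, m = len(ref), len(hyp)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)
    for j in range(1, m + 1):
        d[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                                     # deletion
                d[i][j - 1] + 1,                                     # insertion
                d[i - 1][j - 1] + sub_cost(ref[i - 1], hyp[j - 1]),  # substitution
            )
    return d[n][m]

uniform = lambda r, h: 0.0 if r == h else 1.0
print(weighted_word_error("the cat sat".split(), "a cat sat".split(), uniform))
```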
<term> FERRET </term> utilizes a novel approach to <term> Q/A </term> known as <term> predictive questioning </term> which attempts to identify the <term> questions </term> ( and <term> answers </term> ) that <term> users </term> need by analyzing how a <term> user </term> interacts with a system while gathering information related to a particular scenario .
Unlike <term> logic </term> , the <term> theory </term> specifies how different information in <term> memory </term> affects the certainty of the conclusions drawn .