#42 The question is, however, how an interesting information piece would be found in a large database.
#279 In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser.
#315 We describe how this information is used in a prototype system designed to support information workers' access to a pharmaceutical news archive as part of their industry watch function.
#997 We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques.
#1009 We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques.
#1325 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences.
#3035 In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.
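The class-dependent interpolation mentioned in #3035 can be sketched as follows. This is a minimal toy, not the paper's implementation: the probabilities, word classes, and per-class weights below are all made up for illustration, and real systems would estimate them from corpus counts and tune the weights on held-out data.

```python
# Toy unigram "language models" over a few words (illustrative numbers only):
# one estimated from in-domain data, one from web-filtered data.
p_indomain = {"patient": 0.04, "dose": 0.03, "the": 0.06}
p_web      = {"patient": 0.01, "dose": 0.005, "the": 0.07}

# Class-dependent interpolation: instead of one global mixing weight,
# each word class mixes the two models with its own weight.
word_class = {"patient": "CONTENT", "dose": "CONTENT", "the": "FUNCTION"}
lam = {"CONTENT": 0.8, "FUNCTION": 0.3}  # assumed weights, tuned in practice

def p_interp(w):
    """Class-dependent linear interpolation of the two models."""
    l = lam[word_class[w]]
    return l * p_indomain[w] + (1 - l) * p_web[w]
```

For example, `p_interp("patient")` weights the in-domain model heavily (0.8) because content words benefit most from in-domain data, while `p_interp("the")` leans on the larger web model; the same idea extends to conditional N-gram probabilities.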
#3668 The demonstration will focus on how JAVELIN processes questions and retrieves the most likely answer candidates from the given text corpus.
#5248 We demonstrate how errors in the machine translations of the input Arabic documents can be corrected by identifying and generating from such redundancy, focusing on noun phrases.
#5380 The paper presents a Bayesian model for text summarization, which explicitly encodes and exploits information on how human judgments are distributed over the text.
#5665 Experimental results are presented, that demonstrate how the proposed method allows to better generalize from the training data.
#5799 Because of the dynamic nature of the stories, many questions are time-sensitive (e.g. How many victims have been found?)
#6063 We incorporate this analysis into a diagnostic tool intended for developers of machine translation systems, and demonstrate how our application can be used by developers to explore patterns in machine translation output.
#8099 The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account.
#8818 We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality.
#9356 We also demonstrate how semantic information such as WordNet and Name List, can be used in feature-based relation extraction to further improve the performance.
#10090 We show how to build a joint model of argument frames, incorporating novel features that model these interactions into discriminative log-linear models.
#10187 Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot.
#10228 We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account.
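The pivot-based paraphrase probability described in #10187 and #10228 can be sketched as below: p(e2 | e1) is obtained by summing p(f | e1) * p(e2 | f) over foreign pivot phrases f. The phrase-table entries here are tiny made-up examples, not probabilities from a real parallel corpus.

```python
from collections import defaultdict

# Illustrative phrase-translation probabilities (made-up numbers):
# p(f | e): English phrase -> {foreign pivot phrase: probability}
p_f_given_e = {
    "under control": {"unter kontrolle": 0.8, "in ordnung": 0.2},
    "in check":      {"unter kontrolle": 0.9, "im griff": 0.1},
}
# p(e | f): foreign pivot phrase -> {English phrase: probability}
p_e_given_f = {
    "unter kontrolle": {"under control": 0.5, "in check": 0.4, "curbed": 0.1},
    "in ordnung":      {"fine": 0.7, "under control": 0.3},
    "im griff":        {"in check": 0.6, "under control": 0.4},
}

def paraphrase_probs(e1):
    """p(e2 | e1) = sum over pivot phrases f of p(f | e1) * p(e2 | f)."""
    probs = defaultdict(float)
    for f, pf in p_f_given_e.get(e1, {}).items():
        for e2, pe in p_e_given_f.get(f, {}).items():
            if e2 != e1:  # a phrase is not its own paraphrase
                probs[e2] += pf * pe
    return dict(probs)
```

Calling `paraphrase_probs("under control")` ranks "in check" highest, since both phrases translate to the same pivot "unter kontrolle" with high probability; refining with contextual information, as #10228 describes, would further rescore these candidates.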
#11194 First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features.