|
</term>
for this purpose . In this paper we
|
show
|
how two standard outputs from
<term>
information
|
#278
In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser. |
|
context of
<term>
dialog systems
</term>
. We
|
show
|
how research in
<term>
generation
</term>
can
|
#996
We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques. |
|
provide experimental results that clearly
|
show
|
the need for a
<term>
dynamic language model
|
#1137
We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. |
|
learned from
<term>
training data
</term>
. We
|
show
|
that the trained
<term>
SPR
</term>
learns
|
#1434
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
|
on
<term>
queries
</term>
containing them . I
|
show
|
that the
<term>
performance
</term>
of a
<term>
|
#1872
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics. |
|
<term>
baseline sentence planners
</term>
. We
|
show
|
that the
<term>
trainable sentence planner
|
#2100
We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system. |
|
<term>
answer resolution algorithm
</term>
|
show
|
a 35.0 % relative improvement over our
<term>
|
#2405
Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0% relative improvement over our baseline system in the number of questions correctly answered, and a 32.8% improvement according to the average precision metric. |
|
speech
</term>
are limited . In this paper , we
|
show
|
how
<term>
training data
</term>
can be supplemented
|
#3033
In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams. |
|
different
<term>
algorithms
</term>
. The results
|
show
|
that it can provide a significant improvement
|
#3273
The results show that it can provide a significant improvement in alignment quality. |
|
underlying
<term>
word alignment
</term>
. We
|
show
|
experimental results on
<term>
block selection
|
#3462
We show experimental results on block selection criteria based on unigram counts and phrase length. |
|
twenty
<term>
Switchboard dialogues
</term>
and
|
show
|
that it compares well to
<term>
Byron 's
|
#4027
We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron's (2002) manually tuned system. |
|
rates
</term>
of approx 90 % . The results
|
show
|
that the
<term>
features
</term>
in terms of
|
#5242
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
|
probabilities
</term>
is unstable . Finally , we
|
show
|
how this new
<term>
tagger
</term>
achieves
|
#5598
Finally, we show how this new tagger achieves state-of-the-art results in a supervised, non-training intensive framework. |
|
task of
<term>
email summarization
</term>
. We
|
show
|
that various
<term>
features
</term>
based
|
#6283
We show that various features based on the structure of email-threads can be used to improve upon lexical similarity of discourse segments for question-answer pairing. |
|
Chinese-to-English translation task
</term>
. Our results
|
show
|
that
<term>
MBR decoding
</term>
can be used
|
#6628
Our results show that MBR decoding can be used to tune statistical MT performance for specific loss functions. |
|
form a highly accurate one . Experiments
|
show
|
that this approach is superior to a single
|
#7057
Experiments show that this approach is superior to a single decision-tree classifier. |
|
in the
<term>
sentence
</term>
. Our results
|
show
|
that
<term>
MT evaluation techniques
</term>
|
#8399
Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and to a lesser extent entailment. |
|
the
<term>
parsing data
</term>
. Experiments
|
show
|
significant efficiency gains for the new
|
#8888
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
|
machine translation system
</term>
. We also
|
show
|
that a good-quality
<term>
MT system
</term>
|
#9070
We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. |
|
suffix array-based data structure
</term>
. We
|
show
|
how
<term>
sampling
</term>
can be used to
|
#9179
We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality. |