#1872
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics.
#11186
We evaluate the algorithm on a corpus, and show that it reduces the degree of ambiguity significantly while taking negligible runtime.
#5242
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features.
#7057
Experiments show that this approach is superior to a single decision-tree classifier.
#10314
The classifiers show little gain from information about meeting context.
#16696
The results of the experiment show that in most of the cases the cooccurrence statistics indeed reflect the semantic constraints and thus provide a basis for a useful disambiguation tool.
#8888
Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach.
#17122
Second, we show a significant improvement for speaker adaptation (SA) using the new SI corpus and a small amount of speech from the new (target) speaker.
#10428
The experimental results will show that it significantly outperforms state-of-the-art approaches in sentence-level correlation.
#15139
After introducing this approach to MT system design, and the basics of monolingual UCG, we will show how the two can be integrated, and present an example from an implemented bi-directional English-Spanish fragment.
#15871
Second, we show in this paper how a lexical hierarchy is used in predicting new linguistic concepts.
#19144
New experimental results on all four applications are provided to show the effectiveness of the MAP estimation approach.
#9179
We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality.
#1137
We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further.
#278
In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser.
#20119
We then proceed to repeat results which show that standard statistical models are not particularly suitable for exploiting linguistically sophisticated representations, and show that a statistically fitted rule-based model provides significantly improved performance for sophisticated representations.
#10389
Furthermore, we will show how some evaluation measures can be improved by the introduction of word-dependent substitution costs.
#9070
We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus.
#15201
We examine a broad range of texts to show how the distribution of demonstrative forms and functions is genre dependent.
#20103
We then proceed to repeat results which show that standard statistical models are not particularly suitable for exploiting linguistically sophisticated representations, and show that a statistically fitted rule-based model provides significantly improved performance for sophisticated representations.