component performance </term> . We describe | our | use of this approach in numerous fielded
#1225
We describe our use of this approach in numerous fielded user studies conducted with the U.S. military. |
|
much faster . We also provide evidence that | our | findings are scalable . The theoretical
#1592
We also provide evidence that our findings are scalable. |
|
the <term> predictive performance </term> of | our | <term> models </term> , including the influence
#2180
We report on different aspects of the predictive performance of our models, including the influence of various training and testing factors on predictive performance, and examine the relationships among the target variables. |
|
requiring <term> manual transcription </term> . In | our | method , <term> unsupervised training </term>
#2256
In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. |
|
statistical techniques </term> . We present | our | <term> multi-level answer resolution algorithm
#2374
We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels. |
|
Experiments evaluating the effectiveness of | our | <term> answer resolution algorithm </term>
#2401
Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0% relative improvement over our baseline system in the number of questions correctly answered, and a 32.8% improvement according to the average precision metric. |
|
</term> show a 35.0 % relative improvement over | our | <term> baseline system </term> in the number
#2412
Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0% relative improvement over our baseline system in the number of questions correctly answered, and a 32.8% improvement according to the average precision metric. |
|
basis of an <term> ontology </term> . We apply | our | <term> system </term> to the task of <term> scoring
#2458
We apply our system to the task of scoring alternative speech recognition hypotheses (SRH) in terms of their semantic coherence. |
|
recognition hypotheses </term> . An evaluation of | our | <term> system </term> against the <term> annotated
#2504
An evaluation of our system against the annotated data shows that it successfully classifies 73.2% of a German corpus of 2,284 SRHs as either coherent or incoherent (given a baseline of 54.55%). |
|
phrase-based translation models </term> . Within | our | framework , we carry out a large number
#2565
Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models outperform word-based models. |
|
phrases </term> degrades the performance of | our | <term> systems </term> . In this paper , we
#2664
Learning only syntactically motivated phrases degrades the performance of our systems. |
|
hubs </term> in an <term> automaton </term> . For | our | purposes , a <term> hub </term> is a <term> node
#3170
For our purposes, a hub is a node in a graph with in-degree greater than one and out-degree greater than one. |
|
argument structures </term> , which is central to | our | <term> IE paradigm </term> . It is based on
#3746
We also introduce a new way of automatically identifying predicate argument structures, which is central to our IE paradigm. |
|
learning </term> . The experimental results prove | our | claim that accurate <term> predicate-argument
#3777
The experimental results prove our claim that accurate predicate-argument structures enable high quality IE results. |
|
lists </term> . Experimental results validate | our | hypothesis . This paper concerns the <term>
#4123
Experimental results validate our hypothesis. |
|
system </term> that has been developed at | our | laboratory . Experimental evaluation shows
#4400
Dialogue strategies based on user modeling are implemented in the Kyoto city bus information system that has been developed at our laboratory. |
|
features </term> in terms of which we formulate | our | <term> heuristic principles </term> have significant
#5252
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
|
that <term> rules </term> that closely resemble | our | <term> Horn clauses </term> can be learnt automatically
#5266
The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. |
|
retrieval </term> . Our evaluation shows that | our | <term> filtering mechanism </term> has a significant
#5484
Our evaluation shows that our filtering mechanism has a significant positive effect on both tasks. |
|
for one <term> meaning </term> . According to | our | assumption , most of the words with similar
#6160
According to our assumption, most of the words with similar context features in each author's corpus tend not to be synonymous expressions. |