We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. We suggest a method that mimics the behavior of the oracle using a neural network or a decision tree. The method amounts to tagging LMs with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence.
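The two interpolation schemes named above can be sketched as follows; this is a minimal illustration, and the probabilities and the weight `lam` are fabricated toy values, not figures from the paper.

```python
import math

def linear_interp(p1, p2, lam=0.5):
    """Linear interpolation: weighted arithmetic mean of two LM probabilities."""
    return lam * p1 + (1 - lam) * p2

def log_linear_interp(p1, p2, lam=0.5):
    """Log-linear interpolation (unnormalized): weighted geometric mean."""
    return math.exp(lam * math.log(p1) + (1 - lam) * math.log(p2))

# Toy probabilities from two hypothetical LMs for the same word in context.
p_domain, p_general = 0.20, 0.02
print(linear_interp(p_domain, p_general))      # 0.11
print(log_linear_interp(p_domain, p_general))  # ~0.063 (sqrt of 0.2 * 0.02)
```

Note that the log-linear combination penalizes words that either model considers unlikely far more sharply than the linear one, which is one reason the two schemes behave differently in practice.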
We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams). Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster.
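To make the bag-of-words versus order-sensitive distinction concrete, here is a minimal sketch over character-segmented data: an order-insensitive cosine over character N-gram counts next to an order-sensitive measure derived from edit distance. Both functions are illustrative stand-ins, not the paper's actual implementations.

```python
from collections import Counter
import math

def char_ngrams(s, n=2):
    """Contiguous character N-grams of a string."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def bow_cosine(a, b, n=2):
    """Bag-of-words: order-insensitive cosine over character N-gram counts."""
    ca, cb = Counter(char_ngrams(a, n)), Counter(char_ngrams(b, n))
    dot = sum(ca[g] * cb[g] for g in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def order_sensitive_sim(a, b):
    """Order-sensitive: normalized similarity from character edit distance."""
    m, n = len(a), len(b)
    d = list(range(n + 1))  # single-row Levenshtein DP
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
    return 1 - d[n] / max(m, n)

print(bow_cosine("abab", "baba"))           # 0.8  (same bigram bag, different order)
print(order_sensitive_sim("abab", "baba"))  # 0.5  (order mismatch is penalized)
```

The anagram-like pair shows why the two families can disagree: the bag-of-words score is high because the N-gram multisets nearly coincide, while the order-sensitive score drops because the segments appear in a different sequence.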
The results of a practical evaluation of this method on a wide coverage English grammar are given.
While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases.
This paper describes a method for utterance classification that does not require manual transcription of training data. The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. The classification accuracy of the method is evaluated on three different spoken language system domains.
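The second stage of the pipeline, a classifier over recognized phone strings, can be sketched as below. The phone sequences and domain labels are fabricated toy data, and the nearest-centroid scorer is a simple stand-in for the off-the-shelf classifiers the abstract mentions.

```python
from collections import Counter

def phone_ngrams(phones, n=2):
    """N-grams over a recognized phone sequence, e.g. ['b', 'ih', 'l']."""
    return [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]

def train_centroids(examples):
    """examples: list of (phone_list, label); returns label -> averaged n-gram counts."""
    sums, sizes = {}, Counter()
    for phones, label in examples:
        sums.setdefault(label, Counter()).update(phone_ngrams(phones))
        sizes[label] += 1
    return {lab: {g: c / sizes[lab] for g, c in vec.items()} for lab, vec in sums.items()}

def classify(phones, centroids):
    """Pick the label whose centroid shares the most n-gram mass with the utterance."""
    vec = Counter(phone_ngrams(phones))
    return max(centroids, key=lambda lab: sum(vec[g] * w for g, w in centroids[lab].items()))

# Fabricated 'recognized phone strings' with hypothetical call-routing labels.
train = [
    (["b", "ih", "l", "ih", "ng"], "billing"),
    (["p", "ey", "b", "ih", "l"], "billing"),
    (["ae", "jh", "ah", "n", "t"], "operator"),
    (["ow", "p", "er", "ey", "t"], "operator"),
]
centroids = train_centroids(train)
print(classify(["b", "ih", "l"], centroids))  # billing
```

The key property the sketch preserves is that no word-level transcription is needed anywhere: both training and classification operate on phone strings alone.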
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora.
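One simple way to combine results from several answering agents is weighted confidence voting over normalized answer strings; the sketch below is an illustrative assumption, not the paper's combination scheme, and the agent names and scores are invented.

```python
from collections import defaultdict

def combine_answers(agent_results, agent_weights=None):
    """agent_results: {agent: [(answer, confidence), ...]}.
    Sums weighted confidences per normalized answer; returns answers ranked best-first."""
    agent_weights = agent_weights or {}
    scores = defaultdict(float)
    for agent, candidates in agent_results.items():
        w = agent_weights.get(agent, 1.0)  # default: trust all agents equally
        for answer, conf in candidates:
            scores[answer.strip().lower()] += w * conf
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical agents searching different corpora for the same question.
results = {
    "web_agent":    [("Paris", 0.9), ("Lyon", 0.3)],
    "kb_agent":     [("Paris", 0.8)],
    "corpus_agent": [("Marseille", 0.5)],
}
print(combine_answers(results)[0][0])  # 'paris' (0.9 + 0.8 outweighs the others)
```

Agreement across independent agents is what the combination rewards: an answer found by two weaker sources can outrank a single confident outlier.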
Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems.
We applied the proposed method to question classification and sentence alignment tasks to evaluate its performance as a similarity measure and a kernel function. The results of the experiments demonstrate that the HDAG Kernel is superior to other kernel functions and baseline methods.
We describe a new approach which involves clustering subcategorization frame (SCF) distributions using the Information Bottleneck and nearest neighbour methods.
This paper proposes a method for resolving this ambiguity based on statistical information obtained from dialogue corpora. Unlike conventional methods that use hand-crafted rules, the proposed method enables easy design of the discourse understanding process.
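Resolving an ambiguity from corpus statistics rather than hand-crafted rules can be sketched as estimating, per dialogue context, how often each interpretation was the correct one, then picking the most frequent. This is a minimal illustration under assumed data; the context labels and interpretations are invented, not taken from the paper.

```python
from collections import Counter

def train_preference(corpus):
    """corpus: list of (context, chosen_interpretation) pairs mined from dialogues.
    Returns relative frequency of each interpretation given its context."""
    counts = Counter(corpus)
    context_totals = Counter(ctx for ctx, _ in corpus)
    return {(ctx, interp): c / context_totals[ctx] for (ctx, interp), c in counts.items()}

def resolve(context, candidates, prefs, fallback=None):
    """Choose the candidate interpretation seen most often in this context."""
    scored = [(prefs.get((context, c), 0.0), c) for c in candidates]
    best = max(scored)
    return best[1] if best[0] > 0 else fallback

# Toy dialogue statistics: which interpretation annotators chose in each context.
corpus = [("after_greeting", "request"), ("after_greeting", "request"),
          ("after_greeting", "chitchat"), ("after_question", "answer")]
prefs = train_preference(corpus)
print(resolve("after_greeting", ["request", "chitchat"], prefs))  # request
```

Because the preferences are estimated from data, retargeting the system to a new domain means re-counting a new corpus rather than rewriting rules, which is the ease-of-design point the abstract makes.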