tech,4-2-H01-1058,bq | . We find that simple <term> interpolation | methods | </term> , like <term> log-linear and linear | #1049 We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle. |
tech,6-2-P01-1004,bq | segment order-sensitive string comparison | methods | </term> , and run each over both <term> character | #1500 We take a selection of both bag-of-words and segment order-sensitive string comparison methods, and run each over both character- and word-segmented data, in combination with a range of local segment contiguity models (in the form of N-grams). |
tech,7-4-P01-1004,bq | <term> configuration </term> , <term> bag-of-words | methods | </term> are shown to be equivalent to <term> | #1567 Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster. |
tech,15-4-P01-1004,bq | equivalent to <term> segment order-sensitive | methods | </term> in terms of <term> retrieval accuracy | #1576 Further, in their optimum configuration, bag-of-words methods are shown to be equivalent to segment order-sensitive methods in terms of retrieval accuracy, but much faster. |
| current systems use manual or semi-automatic | methods | to collect <term> paraphrases </term> . We | #1772 While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. |
tech,5-1-N03-1004,bq | Motivated by the success of <term> ensemble | methods | </term> in <term> machine learning </term> and | #2312 Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora. |
tech,8-3-N03-1026,bq | the use of standard <term> parser evaluation | methods | </term> for automatically evaluating the <term> | #2848 Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the summarization quality of sentence condensation systems. |
other,17-4-P03-1005,bq | kernel functions </term> and <term> baseline | methods | </term> . Previous research has demonstrated | #3882 The results of the experiments demonstrate that the HDAG Kernel is superior to other kernel functions and baseline methods. |
| </term> and <term> nearest neighbour </term> | methods | . In contrast to previous work , we particularly | #3923 We describe a new approach which involves clustering subcategorization frame (SCF) distributions using the Information Bottleneck and nearest neighbour methods. |
| dialogue corpora </term> . Unlike conventional | methods | that use <term> hand-crafted rules </term> | #4236 Unlike conventional methods that use hand-crafted rules, the proposed method enables easy design of the discourse understanding process. |
tech,11-2-C04-1080,bq | comprehensive comparison of <term> unsupervised | methods | for part-of-speech tagging </term> , noting | #5538 Along the way, we present the first comprehensive comparison of unsupervised methods for part-of-speech tagging, noting that published results to date have not been comparable across corpora or lexicons. |
| process </term> . We evaluate the proposed | methods | through several <term> transliteration/back | #5806 We evaluate the proposed methods through several transliteration/back transliteration experiments for English/Chinese and English/Japanese language pairs. |
tech,20-2-H05-1012,bq | mixture of <term> supervised and unsupervised | methods | </term> yields superior <term> performance </term> | #7289 We demonstrate that it is feasible to create training material for problems in machine translation and that a mixture of supervised and unsupervised methods yields superior performance. |
tech,4-3-H05-1117,bq | 's response . The lack of automatic <term> | methods | </term> for <term> scoring system output </term> | #7574 The lack of automatic methods for scoring system output is an impediment to progress in the field, which we address with this work. |
measure(ment),8-2-I05-5003,bq | of applying standard <term> MT evaluation | methods | ( BLEU , NIST , WER and PER ) </term> to | #8347 This paper investigates the utility of applying standard MT evaluation methods (BLEU, NIST, WER and PER) to building classifiers to predict semantic equivalence and entailment. |
tech,21-11-J05-1003,bq | efficiency — to work on <term> feature selection | methods | </term> within <term> log-linear ( maximum-entropy | #8928 We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. |
tech,3-5-P05-1074,bq | our <term> paraphrase extraction and ranking | methods | </term> using a set of <term> manual word alignments | #9758 We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments. |
tech,0-1-P06-1013,bq | Methods course </term> . <term> Combination | methods | </term> are an effective way of improving | #10969 Combination methods are an effective way of improving system performance. |
tech,1-4-P06-1013,bq | WSD systems </term> . Our <term> combination | methods | </term> rely on <term> predominant senses </term> | #11011 Our combination methods rely on predominant senses which are derived automatically from raw text. |
tech,15-3-P06-2012,bq | </term> outperforms the other <term> clustering | methods | </term> . This paper proposes a novel method | #11390 Experiment results on ACE corpora show that this spectral clustering based approach outperforms the other clustering methods. |