tech,4-2-H01-1070,bq | paper also proposes <term> rule-reduction | algorithm | </term> applying <term> mutual information </term> | #1267 The paper also proposes rule-reduction algorithm applying mutual information to reduce the error-correction rules. |
tech,1-3-H01-1070,bq | error-correction rules </term> . Our <term> | algorithm | </term> reported more than 99 % <term> accuracy | #1278 Our algorithm reported more than 99% accuracy in both language identification and key prediction. |
tech,3-2-P01-1008,bq | We present an <term> unsupervised learning | algorithm | </term> for <term> identification of paraphrases | #1782 We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. |
tech,31-2-P01-1047,bq | sensitive logic </term> , and a <term> learning | algorithm | </term> from <term> structured data </term> ( | #1978 Our logical definition leads to a neat relation to categorial grammar, (yielding a treatment of Montague semantics), a parsing-as-deduction in a resource sensitive logic, and a learning algorithm from structured data (based on a typing-algorithm and type-unification). |
tech,3-3-N03-1004,bq | present our <term> multi-level answer resolution | algorithm | </term> that combines results from the <term> | #2378 We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels. |
tech,6-4-N03-1004,bq | effectiveness of our <term> answer resolution | algorithm | </term> show a 35.0 % relative improvement | #2404 Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0% relative improvement over our baseline system in the number of questions correctly answered, and a 32.8% improvement according to the average precision metric. |
tech,8-1-N03-1017,bq | translation model </term> and <term> decoding | algorithm | </term> that enables us to evaluate and compare | #2548 We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. |
tech,17-2-P03-1051,bq | uses it to bootstrap an <term> unsupervised | algorithm | </term> to build the <term> Arabic word segmenter | #4656 Our method is seeded by a small manually segmented Arabic corpus and uses it to bootstrap an unsupervised algorithm to build the Arabic word segmenter from a large unsegmented Arabic corpus. |
tech,1-3-P03-1051,bq | unsegmented Arabic corpus </term> . The <term> | algorithm | </term> uses a <term> trigram language model | #4671 The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. |
tech,9-5-P03-1051,bq | accuracy </term> , we use an <term> unsupervised | algorithm | </term> for automatically acquiring new <term> | #4716 To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. |
tech,9-7-P03-1051,bq | state-of-the-art performance and the <term> | algorithm | </term> can be used for many <term> highly | #4774 We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest. |
tech,33-3-C04-1035,bq | SLIPPER </term> , a <term> rule-based learning | algorithm | </term> , and <term> TiMBL </term> , a <term> memory-based | #5217 We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. |
tech,18-5-C04-1096,bq | situations , and built a <term> generation | algorithm | </term> based on the results . The evaluation | #5705 We conducted psychological experiments with 42 subjects to collect referring expressions in such situations, and built a generation algorithm based on the results. |
| training material </term> available to the | algorithm | . Testing the <term> lemma-based model </term> | #6053 The advantage of this novel method is that it clusters all inflected forms of an ambiguous word in one classifier, therefore augmenting the training material available to the algorithm. |
| </term> . The framework is composed of a novel | algorithm | to efficiently compute the <term> co-occurrence | #6328 The framework is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms, an independence model, and a parametric affinity model. |
tech,14-5-P04-2005,bq | a <term> second-order vector co-occurrence | algorithm | </term> on standard <term> WSD datasets </term> | #7011 We evaluated the topic signatures on a WSD task, where we trained a second-order vector co-occurrence algorithm on standard WSD datasets, with promising results. |
tech,4-1-H05-1012,bq | presents a <term> maximum entropy word alignment | algorithm | </term> for <term> Arabic-English </term> based | #7257 This paper presents a maximum entropy word alignment algorithm for Arabic-English based on supervised training data. |
tech,3-5-H05-1012,bq | </term> . <term> Performance </term> of the <term> | algorithm | </term> is contrasted with <term> human annotation | #7329 Performance of the algorithm is contrasted with human annotation performance. |
tech,6-9-J05-1003,bq | The article also introduces a new <term> | algorithm | </term> for the <term> boosting approach </term> | #8867 The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. |
tech,8-10-J05-1003,bq | significant efficiency gains for the new <term> | algorithm | </term> over the obvious <term> implementation | #8895 Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |