</term>
applying
<term>
mutual information
</term>
#1267The paper also proposes a rule-reduction algorithm applying mutual information to reduce the error-correction rules.
tech,1-3-H01-1070,ak
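The record above cites a rule-reduction step that ranks error-correction rules by mutual information. A minimal illustrative sketch (names and data are hypothetical, not from the paper): compute the mutual information between a rule's trigger context and the correction event from joint counts, so low-information rules can be pruned.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information I(X;Y) in bits, estimated from (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p(x,y) / (p(x) * p(y)) rewritten with raw counts
        mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# Toy ranking signal: a context that perfectly predicts the correction
# yields 1 bit of mutual information here.
samples = [("ctx_a", "fix"), ("ctx_a", "fix"), ("ctx_b", "keep"), ("ctx_b", "keep")]
print(round(mutual_information(samples), 3))  # prints 1.0
```

Rules whose trigger context carries little information about the correction would be the ones a reduction pass discards first.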
error-correction rules
</term>
. Our
<term>
algorithm
</term>
reported more than 99 %
<term>
accuracy
#1278Our algorithm reported more than 99% accuracy in both language identification and key prediction.
tech,3-2-P01-1008,ak
We present an
<term>
unsupervised learning
algorithm
</term>
for
<term>
identification of paraphrases
#1783We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text.
tech,31-2-P01-1047,ak
sensitive logic
</term>
, and a
<term>
learning
algorithm
</term>
from
<term>
structured data
</term>
(
#1979Our logical definition leads to a neat relation to categorial grammar, (yielding a treatment of Montague semantics), a parsing-as-deduction in a resource sensitive logic, and a learning algorithm from structured data (based on a typing-algorithm and type-unification).
tech,3-3-N03-1004,ak
present our
<term>
multi-level answer resolution
algorithm
</term>
that combines results from the
<term>
#2379We present our multi-level answer resolution algorithm that combines results from the answering agents at the question, passage, and/or answer levels.
tech,6-4-N03-1004,ak
effectiveness of our
<term>
answer resolution
algorithm
</term>
show a 35.0 %
<term>
relative improvement
#2405Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0% relative improvement over our baseline system in the number of questions correctly answered, and a 32.8% improvement according to the average precision metric.
tech,8-1-N03-1017,ak
translation model
</term>
and
<term>
decoding
algorithm
</term>
that enables us to evaluate and compare
#2549We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models.
tech,10-3-N03-2017,ak
<term>
constraint
</term>
in two different
<term>
algorithms
</term>
. The results show that it can provide
#3270We evaluate the utility of this constraint in two different algorithms.
tech,17-2-P03-1051,ak
uses it to bootstrap an
<term>
unsupervised
algorithm
</term>
to build the
<term>
Arabic word segmenter
#4658Our method is seeded by a small manually segmented Arabic corpus and uses it to bootstrap an unsupervised algorithm to build the Arabic word segmenter from a large unsegmented Arabic corpus.
tech,1-3-P03-1051,ak
unsegmented Arabic corpus
</term>
. The
<term>
algorithm
</term>
uses a
<term>
trigram language model
#4673The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input.
tech,9-5-P03-1051,ak
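Sentence #4673 describes scoring candidate morpheme sequences with a trigram language model. A toy sketch of that idea (the morpheme inventory and log-probabilities below are made up for illustration, not the paper's model): enumerate segmentations of a word into known morphemes and keep the sequence the trigram model scores highest.

```python
import math

# Hypothetical morpheme inventory and trigram log-probabilities (made-up numbers).
MORPHEMES = {"wa", "l", "kitab", "walkitab"}
TRIGRAM_LOGP = {
    ("<s>", "<s>", "wa"): -0.5,
    ("<s>", "wa", "l"): -0.7,
    ("wa", "l", "kitab"): -0.9,
    ("<s>", "<s>", "walkitab"): -5.0,
}

def trigram_logprob(w1, w2, w3):
    # Heavy flat penalty for unseen trigrams stands in for real smoothing.
    return TRIGRAM_LOGP.get((w1, w2, w3), -10.0)

def segmentations(word):
    """All ways to split `word` into morphemes from the inventory."""
    if not word:
        yield []
        return
    for i in range(1, len(word) + 1):
        if word[:i] in MORPHEMES:
            for rest in segmentations(word[i:]):
                yield [word[:i]] + rest

def best_segmentation(word):
    def score(seq):
        padded = ["<s>", "<s>"] + seq
        return sum(trigram_logprob(*padded[i:i + 3]) for i in range(len(seq)))
    return max(segmentations(word), key=score)

print(best_segmentation("walkitab"))  # prints ['wa', 'l', 'kitab']
```

A real segmenter would use dynamic programming over trigram states rather than enumerating all segmentations, but the scoring criterion is the same.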
accuracy
</term>
, we use an
<term>
unsupervised
algorithm
</term>
for automatically acquiring new
<term>
#4718To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus.
tech,9-7-P03-1051,ak
state-of-the-art performance and the
<term>
algorithm
</term>
can be used for many
<term>
highly
#4776We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest.
tech,4-1-H05-1012,ak
presents a
<term>
maximum entropy word alignment
algorithm
</term>
for
<term>
Arabic-English
</term>
based
#5283This paper presents a maximum entropy word alignment algorithm for Arabic-English based on supervised training data.
tech,3-5-H05-1012,ak
translation tests
</term>
. Performance of the
<term>
algorithm
</term>
is contrasted with
<term>
human annotation
#5355Performance of the algorithm is contrasted with human annotation performance.
tech,20-3-H05-1101,ak
lower-bound
</term>
for certain classes of
<term>
algorithms
</term>
that are currently used in the literature
#5740Two hardness results for the class NP are reported, along with an exponential time lower-bound for certain classes of algorithms that are currently used in the literature.
tech,6-9-J05-1003,ak
The article also introduces a new
<term>
algorithm
</term>
for the
<term>
boosting approach
</term>
#8232The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data.
tech,8-10-J05-1003,ak
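Sentence #8232 mentions a boosting algorithm that exploits feature-space sparsity. A hedged sketch of the core trick (illustrative only, not the article's actual algorithm): with abstaining feature-test weak learners and an inverted index from feature to examples, each round reweights only the examples that contain the chosen feature instead of scanning the whole data set.

```python
import math
from collections import defaultdict

def boosted_scorer(examples, rounds=2):
    """examples: list of (set_of_features, label) with label in {-1, +1}.
    Weak learners are abstaining feature tests, so a round only touches
    the examples that actually contain the chosen feature."""
    index = defaultdict(list)               # feature -> indices of examples with it
    for i, (feats, _) in enumerate(examples):
        for f in feats:
            index[f].append(i)
    w = [1.0 / len(examples)] * len(examples)
    alpha = defaultdict(float)
    eps = 1e-6
    for _ in range(rounds):
        def split(f):                       # weighted +/- mass among examples with f
            wp = sum(w[i] for i in index[f] if examples[i][1] > 0)
            wn = sum(w[i] for i in index[f] if examples[i][1] < 0)
            return wp + eps, wn + eps
        f = max(index, key=lambda g: abs(math.log(split(g)[0] / split(g)[1])))
        wp, wn = split(f)
        a = 0.5 * math.log(wp / wn)         # confidence-rated vote for feature f
        alpha[f] += a
        for i in index[f]:                  # sparse update: only examples with f
            w[i] *= math.exp(-a * examples[i][1])
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda feats: sum(alpha[f] for f in feats)

# Toy data: feature "a" marks the +1 class, "b" the -1 class; "x"/"y" are noise.
data = [({"a", "x"}, +1), ({"a", "y"}, +1), ({"b", "x"}, -1), ({"b", "y"}, -1)]
score = boosted_scorer(data)
print(all(score(feats) * label > 0 for feats, label in data))  # prints True
```

The efficiency gain comes from the sparse weight update: its cost is proportional to how many examples contain the selected feature, not to the size of the training set.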
significant efficiency gains for the new
<term>
algorithm
</term>
over the obvious implementation of
#8260Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach.
tech,3-6-P05-1067,ak
introduce a
<term>
polynomial time decoding
algorithm
</term>
for the
<term>
model
</term>
. We evaluate
#9884We introduce a polynomial time decoding algorithm for the model.
tech,1-4-P05-1069,ak
bigram features
</term>
. Our
<term>
training
algorithm
</term>
can easily handle millions of
<term>
#10008Our training algorithm can easily handle millions of features.
tech,8-2-E06-1004,ak
the last decade , a variety of
<term>
SMT
algorithms
</term>
have been built and empirically tested
#10902Over the last decade, a variety of SMT algorithms have been built and empirically tested whereas little is known about the computational complexity of some of the fundamental problems of SMT.