#595We believe that these evaluation techniques will provide information about the human language learning process, the translation process, and the development of machine translation systems.
other,1-4-H01-1042,ak
of
<term>
MT output
</term>
. A
<term>
language
learning
experiment
</term>
showed that
<term>
assessors
#630A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words.
tech,28-4-H01-1055,ak
can be overcome by employing
<term>
machine
learning
techniques
</term>
. In this paper , we address
#1024We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques.
tech,3-2-P01-1008,ak
</term>
. We present an
<term>
unsupervised
learning
algorithm
</term>
for
<term>
identification
#1782We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text.
tech,31-2-P01-1047,ak
resource sensitive logic
</term>
, and a
<term>
learning
algorithm
</term>
from
<term>
structured data
#1978Our logical definition leads to a neat relation to categorial grammar, (yielding a treatment of Montague semantics), a parsing-as-deduction in a resource sensitive logic, and a learning algorithm from structured data (based on a typing-algorithm and type-unification).
tech,5-1-P01-1070,ak
describe a set of
<term>
supervised machine
learning
experiments
</term>
centering on the construction
#2132We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions.
tech,8-1-N03-1004,ak
<term>
ensemble methods
</term>
in
<term>
machine
learning
</term>
and other areas of
<term>
natural language
#2316Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora.
tech,27-3-N03-1017,ak
relatively simple means :
<term>
heuristic
learning
</term>
of
<term>
phrase translations
</term>
#2617Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations.
phrase translations
</term>
. Surprisingly ,
learning
<term>
phrases
</term>
longer than three
<term>
#2633Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.
</term>
longer than three
<term>
words
</term>
and
learning
<term>
phrases
</term>
from
<term>
high-accuracy
#2640Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance.
simple
<term>
unsupervised technique
</term>
for
learning
<term>
morphology
</term>
by identifying
<term>
#3161We describe a simple unsupervised technique for learning morphology by identifying hubs in an automaton.
tech,18-3-P03-1002,ak
; and ( 2 )
<term>
inductive decision tree
learning
</term>
. The experimental results prove
#3772It is based on: (1) an extended set of features; and (2) inductive decision tree learning.
tech,8-4-P03-1033,ak
automatically derived by
<term>
decision tree
learning
</term>
using real
<term>
dialogue data
</term>
#4363Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system.
tech,4-1-P03-1050,ak
This paper presents an
<term>
unsupervised
learning
approach
</term>
to building a
<term>
non-English
#4437This paper presents an unsupervised learning approach to building a non-English (Arabic) stemmer.
tech,19-1-P03-1058,ak
data
</term>
required for
<term>
supervised
learning
</term>
. In this paper , we evaluate an
#4821A central problem of word sense disambiguation (WSD) is the lack of manually sense-tagged data required for supervised learning.
tech,10-4-I05-2044,ak
dependency parser
</term>
based on
<term>
SVM
learning
</term>
. The
<term>
left-side dependents
</term>
#6686This paper proposes a two-phase shift-reduce dependency parser based on SVM learning.
tech,6-3-P05-1018,ak
coherence assessment
</term>
as a
<term>
ranking
learning
problem
</term>
and show that the proposed
#8634We view coherence assessment as a ranking learning problem and show that the proposed discourse representation supports the effective learning of a ranking function.
representation
</term>
supports the effective
learning
of a
<term>
ranking function
</term>
. Our
#8646We view coherence assessment as a ranking learning problem and show that the proposed discourse representation supports the effective learning of a ranking function.
tech,18-3-P05-1046,ak
text
</term>
, general
<term>
unsupervised HMM
learning
</term>
fails to learn useful structure in
#9091Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains.
tech,1-2-P05-2008,ak
positive or negative . Traditional
<term>
machine
learning
techniques
</term>
have been applied to this
#10420Traditional machine learning techniques have been applied to this problem with reasonable success, but they have been shown to work well only when there is a good match between the training and test data with respect to topic.