other,12-2-H01-1042,bq |
information about both the
<term>
human language
|
learning
|
process
</term>
, the
<term>
translation process
|
#595
We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems. |
other,1-4-H01-1042,bq |
of
<term>
MT output
</term>
. A
<term>
language
|
learning
|
experiment
</term>
showed that
<term>
assessors
|
#630
A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words. |
tech,28-4-H01-1055,bq |
can be overcome by employing
<term>
machine
|
learning
|
techniques
</term>
. In this paper , we address
|
#1024
We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques. |
tech,3-2-P01-1008,bq |
</term>
. We present an
<term>
unsupervised
|
learning
|
algorithm
</term>
for
<term>
identification
|
#1781
We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. |
tech,31-2-P01-1047,bq |
resource sensitive logic
</term>
, and a
<term>
|
learning
|
algorithm
</term>
from
<term>
structured data
|
#1977
Our logical definition leads to a neat relation to categorial grammar, (yielding a treatment of Montague semantics), a parsing-as-deduction in a resource sensitive logic, and a learning algorithm from structured data (based on a typing-algorithm and type-unification). |
tech,5-1-P01-1070,bq |
describe a set of
<term>
supervised machine
|
learning
|
</term>
experiments centering on the construction
|
#2131
We describe a set of supervised machine learning experiments centering on the construction of statistical models of WH-questions. |
tech,8-1-N03-1004,bq |
<term>
ensemble methods
</term>
in
<term>
machine
|
learning
|
</term>
and other areas of
<term>
natural language
|
#2315
Motivated by the success of ensemble methods in machine learning and other areas of natural language processing, we developed a multi-strategy and multi-source approach to question answering which is based on combining the results from different answering agents searching for answers in multiple corpora. |
tech,27-3-N03-1017,bq |
relatively simple means :
<term>
heuristic
|
learning
|
</term>
of
<term>
phrase translations
</term>
|
#2616
Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. |
|
phrase translations
</term>
. Surprisingly ,
|
learning
|
<term>
phrases
</term>
longer than three
<term>
|
#2632
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. |
|
</term>
longer than three
<term>
words
</term>
and
|
learning
|
<term>
phrases
</term>
from
<term>
high-accuracy
|
#2639
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. |
|
simple
<term>
unsupervised technique
</term>
for
|
learning
|
<term>
morphology
</term>
by identifying
<term>
|
#3160
We describe a simple unsupervised technique for learning morphology by identifying hubs in an automaton. |
tech,18-3-P03-1002,bq |
; and ( 2 )
<term>
inductive decision tree
|
learning
|
</term>
. The experimental results prove
|
#3771
It is based on: (1) an extended set of features; and (2) inductive decision tree learning. |
tech,8-4-P03-1033,bq |
automatically derived by
<term>
decision tree
|
learning
|
</term>
using real
<term>
dialogue data
</term>
|
#4361
Moreover, the models are automatically derived by decision tree learning using real dialogue data collected by the system. |
tech,4-1-P03-1050,bq |
This paper presents an
<term>
unsupervised
|
learning
|
approach
</term>
to building a
<term>
non-English
|
#4435
This paper presents an unsupervised learning approach to building a non-English (Arabic) stemmer. |
tech,19-1-P03-1058,bq |
data
</term>
required for
<term>
supervised
|
learning
|
</term>
. In this paper , we evaluate an
|
#4819
A central problem of word sense disambiguation (WSD) is the lack of manually sense-tagged data required for supervised learning. |
tech,4-1-C04-1035,bq |
difference . This paper presents a
<term>
machine
|
learning
|
</term>
approach to bare
<term>
sluice disambiguation
|
#5154
This paper presents a machine learning approach to bare sluice disambiguation in dialogue. |
tech,26-3-C04-1035,bq |
</term>
, and run two different
<term>
machine
|
learning
|
algorithms
</term>
:
<term>
SLIPPER
</term>
,
|
#5209
We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. |
tech,33-3-C04-1035,bq |
:
<term>
SLIPPER
</term>
, a
<term>
rule-based
|
learning
|
algorithm
</term>
, and
<term>
TiMBL
</term>
|
#5216
We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. |
tech,5-1-P04-2010,bq |
This paper presents a novel
<term>
ensemble
|
learning
|
approach
</term>
to
<term>
resolving German
|
#7027
This paper presents a novel ensemble learning approach to resolving German pronouns. |
tech,46-5-E06-1035,bq |
top-level boundaries
</term>
, the
<term>
machine
|
learning
|
approach
</term>
that combines
<term>
lexical-cohesion
|
#10573
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. |