#280
In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser.

#418
The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language-neutral meaning representation called a semantic frame.

#1370
We reconceptualize the task into two distinct phases.

#1532
Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models.

#2088
In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component, two rule-based sentence planners, and two baseline sentence planners.

#2094
In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component, two rule-based sentence planners, and two baseline sentence planners.

#3124
The two evaluation measures of the BLEU score and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model.

#3267
We evaluate the utility of this constraint in two different algorithms.

#3341
The bootstrapping procedure is implemented as training two successive learners.

#3518
FSM provides two strategies for language understanding and has high accuracy but little robustness and flexibility.

#4884
On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage.

#5206
We then use the predicates of such clauses to create a set of domain-independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system.

#5470
We tested the clustering and filtering processes on electronic newsgroup discussions, and evaluated their performance by means of two experiments: coarse-level clustering and simple information retrieval.

#5767
In this paper, a novel framework for machine transliteration/back-transliteration that allows us to carry out direct orthographical mapping (DOM) between two different languages is presented.

#5932
We give two estimates, a lower one and a higher one.

#10419
The correlation of the new measure with human judgment has been investigated systematically on two different language pairs.

#10485
We extend prior work in two ways.

#10541
Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task.

#10643
We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.

#10649
This paper discusses two problems that arise in the Generation of Referring Expressions: (a) numeric-valued attributes, such as size or location; (b) perspective-taking in reference.