#280 In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser.

#418 The CCLINC Korean-to-English translation system consists of two core modules, language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame.

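The abstract describes the architecture only; as a rough sketch of an interlingua-mediated pipeline, the toy code below routes a source sentence through an understanding stub to a frame and on to a generation stub. The SemanticFrame fields and both function names are hypothetical illustrations, not CCLINC's actual interfaces.

```python
from dataclasses import dataclass, field

# Hypothetical interlingua record; CCLINC's real semantic frame is richer.
@dataclass
class SemanticFrame:
    predicate: str                             # language-neutral predicate
    args: dict = field(default_factory=dict)   # role -> filler

def understand_korean(sentence: str) -> SemanticFrame:
    """Understanding module (stub): Korean text -> semantic frame.
    A real module would parse the input; this returns a canned frame."""
    return SemanticFrame(predicate="MEET", args={"agent": "kim", "theme": "lee"})

def generate_english(frame: SemanticFrame) -> str:
    """Generation module (stub): semantic frame -> English text."""
    return f'{frame.args["agent"].title()} meets {frame.args["theme"].title()}.'

print(generate_english(understand_korean("...")))  # Kim meets Lee.
```
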
#1370 We reconceptualize the task into two distinct phases.

#1532 Over two distinct datasets, we find that indexing according to simple character bigrams produces a retrieval accuracy superior to any of the tested word N-gram models.

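The entry names the indexing scheme but no implementation; here is a minimal sketch of character-bigram indexing with a plain inverted index, under the assumption that documents are ranked by the count of matching query bigrams (the paper's ranking function may differ).

```python
from collections import defaultdict

def char_bigrams(text: str):
    """Split a string into overlapping character bigrams."""
    s = text.replace(" ", "")
    return [s[i:i + 2] for i in range(len(s) - 1)]

def build_index(docs):
    """Build an inverted index: bigram -> set of document ids."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for bg in char_bigrams(text):
            index[bg].add(doc_id)
    return index

def search(index, query):
    """Rank documents by how many query bigrams they contain."""
    scores = defaultdict(int)
    for bg in char_bigrams(query):
        for doc_id in index.get(bg, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

docs = ["character bigram indexing", "word n-gram models"]
idx = build_index(docs)
print(search(idx, "bigram index"))  # -> [0, 1]: doc 0 matches far more bigrams
```
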
#2089 In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component, two rule-based sentence planners, and two baseline sentence planners.

#2095 In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component, two rule-based sentence planners, and two baseline sentence planners.

#3125 The two evaluation measures of the BLEU score and the NIST score demonstrated the effect of using an out-of-domain bilingual corpus and the possibility of using the language model.

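Both measures are standard; for readers who want to reproduce such scores, a small sketch using NLTK's implementations follows. The reference and hypothesis sentences are invented stand-ins, not data from the experiments.

```python
# Toy BLEU/NIST scoring with NLTK (pip install nltk).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.nist_score import sentence_nist

reference = ["the", "cat", "sat", "on", "the", "mat"]
hypothesis = ["the", "cat", "sat", "on", "a", "mat"]

# Smoothing guards against zero n-gram counts on short sentences.
smooth = SmoothingFunction().method1
print("BLEU:", sentence_bleu([reference], hypothesis, smoothing_function=smooth))
print("NIST:", sentence_nist([reference], hypothesis, n=4))
```
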
#3268 We evaluate the utility of this constraint in two different algorithms.

#3342 The bootstrapping procedure is implemented as training two successive learners.

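The entry only names the scheme, so the sketch below shows one common reading of training "two successive learners": a first learner fit on seed data labels unlabeled examples, and a second learner retrains on the confidently labeled ones. The classifiers, the 0.9 confidence threshold, and the synthetic data are all illustrative assumptions, not the paper's setup.

```python
# Generic two-stage bootstrapping sketch with scikit-learn.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_seed = rng.normal(size=(20, 5))
y_seed = (X_seed[:, 0] > 0).astype(int)        # synthetic seed labels
X_unlabeled = rng.normal(size=(200, 5))

# Learner 1: trained on the small seed set.
first = GaussianNB().fit(X_seed, y_seed)

# Keep only unlabeled examples the first learner labels confidently.
proba = first.predict_proba(X_unlabeled)
confident = proba.max(axis=1) > 0.9
X_auto, y_auto = X_unlabeled[confident], proba[confident].argmax(axis=1)

# Learner 2: trained on seed data plus the automatically labeled examples.
second = LogisticRegression().fit(
    np.vstack([X_seed, X_auto]), np.concatenate([y_seed, y_auto]))
print(f"{confident.sum()} auto-labeled examples fed to the second learner")
```
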
#3519 FSM provides two strategies for language understanding and has high accuracy but little robustness and flexibility.

#4886 On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage.

#5720 These models can be viewed as pairs of probabilistic context-free grammars working in a 'synchronous' way. Two hardness results for the class NP are reported, along with an exponential time lower-bound for certain classes of algorithms that are currently used in the literature.

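To make the model class concrete, here is a toy, non-probabilistic sketch of synchronous expansion: each rule rewrites a nonterminal into a pair of right-hand sides whose nonterminals are linked by shared indices, so both sides expand with the same rule choices. The grammar is invented, and the sketch illustrates only the model class, not the hardness results.

```python
import random

# Each rule maps a nonterminal to a pair of right-hand sides; "#n" marks
# linked nonterminals that must expand identically on both sides.
RULES = {
    "S":  [(["NP#1", "VP#2"], ["VP#2", "NP#1"])],   # linked and reordered
    "NP": [(["john"], ["john"]), (["mary"], ["mary"])],
    "VP": [(["sleeps"], ["dort"])],
}

def generate(symbol="S"):
    """Synchronously expand `symbol`, returning a (left, right) token pair."""
    if symbol not in RULES:                          # terminal symbol
        return [symbol], [symbol]
    lhs_rhs, rhs_rhs = random.choice(RULES[symbol])
    # Expand each linked nonterminal once, then splice the pair consistently.
    expansions = {tok: generate(tok.split("#")[0]) for tok in lhs_rhs if "#" in tok}
    left = [w for tok in lhs_rhs for w in expansions.get(tok, ([tok], []))[0]]
    right = [w for tok in rhs_rhs for w in expansions.get(tok, ([], [tok]))[1]]
    return left, right

random.seed(1)
l, r = generate()
print(" ".join(l), "|||", " ".join(r))   # e.g. mary sleeps ||| dort mary
```
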
#6645 The ambiguity resolution of right-side dependencies is essential for dependency parsing of sentences with two or more verbs.

#7771 Experimental results showed that the proposed method achieves almost 60% accuracy and that there is not a large performance difference between the two models.

#9513 We evaluate across two corpora (conversational telephone speech and broadcast news speech) on both human transcriptions and speech recognition output.

#9724 In this paper, we first train two statistical word alignment models with the large-scale out-of-domain corpus and the small-scale in-domain corpus respectively, and then interpolate these two models to improve the domain-specific word alignment.

#9745 In this paper, we first train two statistical word alignment models with the large-scale out-of-domain corpus and the small-scale in-domain corpus respectively, and then interpolate these two models to improve the domain-specific word alignment.

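As a concrete reading of the interpolation step, the sketch below linearly mixes two lexical translation tables. The probabilities and the weight `lam` are invented, and the paper interpolates full alignment models rather than this toy table.

```python
# Linear interpolation of an out-of-domain and an in-domain lexical
# translation table: p(e|f) = lam * p_out(e|f) + (1 - lam) * p_in(e|f).
out_of_domain = {("maison", "house"): 0.7, ("maison", "home"): 0.3}
in_domain     = {("maison", "house"): 0.4, ("maison", "home"): 0.6}

def interpolate(p_out, p_in, lam=0.5):
    """Mix two probability tables entry-wise with weight lam on p_out."""
    keys = set(p_out) | set(p_in)
    return {k: lam * p_out.get(k, 0.0) + (1 - lam) * p_in.get(k, 0.0)
            for k in keys}

print(interpolate(out_of_domain, in_domain, lam=0.3))
# -> house: 0.49, home: 0.51 (dict order may vary)
```
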
#11356 The correlation of the new measure with human judgment has been investigated systematically on two different language pairs.

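Such correlation studies typically report Pearson and/or Spearman coefficients; a minimal sketch with SciPy follows, using invented metric and human scores.

```python
# Correlating an automatic metric with human judgments (scipy).
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.31, 0.45, 0.52, 0.60, 0.74]   # invented metric outputs
human_scores  = [2.0, 2.5, 3.1, 3.0, 4.2]        # invented human ratings

r, p = pearsonr(metric_scores, human_scores)
rho, _ = spearmanr(metric_scores, human_scores)
print(f"Pearson r = {r:.3f} (p = {p:.3f}), Spearman rho = {rho:.3f}")
```
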
#11422 We extend prior work in two ways.

#11478 Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task.

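Finding (1) is easy to make concrete: a TextTiling-style lexical-cohesion detector scores each gap between sentences by the similarity of the vocabulary on either side and proposes boundaries at the dips. The sketch below uses cosine similarity over raw word counts and an invented four-sentence text; real systems use windows of several sentences and smoothing.

```python
# TextTiling-style lexical-cohesion sketch: low-similarity gaps between
# adjacent sentences are subtopic boundary candidates.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

sentences = [
    "the budget meeting starts at noon",
    "the budget figures look solid",
    "lunch will be pizza and salad",
    "the salad has extra dressing",
]
bags = [Counter(s.split()) for s in sentences]
gaps = [cosine(bags[i], bags[i + 1]) for i in range(len(bags) - 1)]
boundary = gaps.index(min(gaps)) + 1
print(f"gap scores: {gaps}; candidate boundary before sentence {boundary}")
# The topic shift from the budget to lunch gives the lowest gap score.
```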