this purpose . In this paper we show how two standard outputs from <term> information
Korean-to-English translation system </term> consists of two <term> core modules </term> , <term> language
</term> . We reconceptualize the task into two distinct phases . First , a very simple
the form of <term> N-grams </term> ) . Over two distinct <term> datasets </term> , we find
template-based generation component </term> , two <term> rule-based sentence planners </term>
rule-based sentence planners </term> , and two <term> baseline sentence planners </term> .
experiments with an <term> EBMT system </term> . The two <term> evaluation measures </term> of the <term>
utility of this <term> constraint </term> in two different <term> algorithms </term> . The results
procedure </term> is implemented as training two <term> successive learners </term> . First
( SLM ) </term> . <term> FSM </term> provides two strategies for <term> language understanding
<term> accuracy </term> difference between the two approaches is only 14.0 % , and the difference
annotate an input <term> dataset </term> , and run two different <term> machine learning algorithms
evaluated their performance by means of two experiments : coarse-level <term> clustering
orthographical mapping ( DOM ) </term> between two different <term> languages </term> is presented
sentences </term> that it contains . We give two estimates , a lower one and a higher one
</term> working in a ' synchronous ' way . Two <term> hardness </term> results for the class
</term> has been investigated systematically on two different <term> language pairs </term> . The
dialogue </term> . We extend prior work in two ways . We first apply approaches that have
predicting subtopic boundaries </term> are two distinct tasks : ( 1 ) for predicting <term>
the general preference of approach for the two tasks . This paper discusses two problems