this purpose. In this paper we show how two standard outputs from <term> information
Korean-to-English translation system </term> consists of two <term> core modules </term>, <term> language
</term>. We reconceptualize the task into two distinct phases. First, a very simple
the form of <term> N-grams </term>). Over two distinct datasets, we find that <term> indexing
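The indexing mentioned in the snippet above can be illustrated with a toy inverted index keyed on N-grams. A minimal sketch, assuming whitespace tokenization and word trigrams; the names and the choice n = 3 are illustrative assumptions, not details from the paper:

```python
# Toy inverted index keyed on word N-grams. Assumptions: whitespace
# tokenization and n = 3; all names here are illustrative.
from collections import defaultdict

def ngrams(tokens, n=3):
    """All contiguous n-token windows of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_index(docs, n=3):
    """Map each N-gram to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for gram in ngrams(text.split(), n):
            index[gram].add(doc_id)
    return index

docs = ["the cat sat on the mat", "the cat ran to the mat"]
index = build_index(docs)
print(index[("the", "cat", "sat")])  # {0} -- only the first document
```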
template-based generation component </term>, two <term> rule-based sentence planners </term>, and two <term> baseline sentence planners </term>.
experiments with an <term> EBMT system </term>. The two <term> evaluation measures </term> of the <term>
utility of this <term> constraint </term> in two different <term> algorithms </term>. The results
procedure </term> is implemented as training two <term> successive learners </term>. First
(SLM) </term>. <term> FSM </term> provides two strategies for <term> language understanding
<term> accuracy difference </term> between the two approaches is only 14.0%, and the difference
</term> working in a 'synchronous' way. Two <term> hardness results </term> for the <term>
parsing </term> of <term> sentences </term> with two or more <term> verbs </term>. Previous work
large performance difference between the two <term> models </term>. The results also revealed
</term> to our prior work. We evaluate across two <term> corpora </term> (conversational telephone
results </term>. In this paper, we first train two <term> statistical word alignment models </term>
respectively, and then interpolate these two <term> models </term> to improve the <term> domain-specific
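The interpolation described in the two snippets above is, in the usual setting, a linear mixture of an in-domain and an out-of-domain model. A minimal sketch, assuming each alignment model has been reduced to a table of lexical translation probabilities p(t|s); the dict layout and the 0.7 weight are illustrative assumptions, not the paper's actual models or weights:

```python
# Linear interpolation of two word alignment models, sketched as a mixture
# of lexical translation tables. Assumption: each model is a dict mapping a
# source word to a dict of target-word probabilities; names and the weight
# 0.7 are illustrative, not taken from the paper.

def interpolate(p_in, p_out, lam):
    """Return p(t|s) = lam * p_in(t|s) + (1 - lam) * p_out(t|s)."""
    merged = {}
    for s in set(p_in) | set(p_out):
        targets = set(p_in.get(s, {})) | set(p_out.get(s, {}))
        merged[s] = {
            t: lam * p_in.get(s, {}).get(t, 0.0)
               + (1.0 - lam) * p_out.get(s, {}).get(t, 0.0)
            for t in targets
        }
    return merged

# Weight the in-domain model more heavily (lam = 0.7, an arbitrary choice).
p_in = {"bank": {"riverbank": 0.8, "bank": 0.2}}
p_out = {"bank": {"bank": 0.9, "riverbank": 0.1}}
print(interpolate(p_in, p_out, 0.7))
# ~ {'bank': {'riverbank': 0.59, 'bank': 0.41}} (probabilities still sum to 1)
```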
</term> has been investigated systematically on two different <term> language pairs </term>. The
dialogue </term>. We extend prior work in two ways. We first apply approaches that have
predicting <term> subtopic boundaries </term> are two distinct tasks: (1) for predicting <term>