markers </term> , which includes <term> other ( than ) </term> , <term> such ( as ) </term> , and <term>
document descriptors ( keywords ) </term> than single <term> words </term> are . This leads
trainable sentence planner </term> performs better than the <term> rule-based systems </term> and the
has been considered to be more complicated than <term> analysis </term> and <term> generation
theoretical accounts actually have worse coverage than accounts based on processing . Finally
the proposed approach is more describable than other approaches such as those employing
a more effective <term> CFG filter </term> than that of <term> LTAG </term> . We also investigate
</term> with <term> in-degree </term> greater than one and <term> out-degree </term> greater than
<term> kanji-kana characters </term> is greater than that of <term> erroneous chains </term> . From
than one and <term> out-degree </term> greater than one . We create a <term> word-trie </term>
non-native language essays </term> in less than 100 <term> words </term> . Even more illuminating
Surprisingly , learning <term> phrases </term> longer than three <term> words </term> and learning <term>
SMT models </term> to be significantly lower than that of all the dedicated <term> WSD models
simultaneously using less <term> memory </term> than is required by current <term> decoder </term>
. Our <term> algorithm </term> reported more than 99 % <term> accuracy </term> in both <term> language
simpler set of <term> model parameters </term> than similar <term> phrase-based models </term>
that the system yields higher performance than a <term> baseline </term> on all three aspects
significantly better <term> translation quality </term> than the <term> statistical machine translation
<term> edges </term> adjacent to it , rather than all such <term> edges </term> as in conventional
has recently become a crucial issue . Rather than using <term> length-based or translation-based