markers </term> , which includes <term> other ( than ) </term> , <term> such ( as ) </term> , and <term>
non-native language essays </term> in less than 100 <term> words </term> . Even more illuminating
. Our <term> algorithm </term> reported more than 99 % <term> accuracy </term> in both <term> language
that the system yields higher performance than a <term> baseline </term> on all three aspects
theoretical accounts actually have worse coverage than accounts based on processing . Finally
<term> edges </term> adjacent to it , rather than all such <term> edges </term> as in conventional
has been considered to be more complicated than <term> analysis </term> and <term> generation
illustrate a framework less restrictive than earlier ones by allowing a <term> speaker
from individual <term> phrases </term> rather than from the <term> weighted sum </term> of a <term>
simultaneously using less <term> memory </term> than is required by current <term> decoder </term>
than one and <term> out-degree </term> greater than one . We create a <term> word-trie </term>
</term> with <term> in-degree </term> greater than one and <term> out-degree </term> greater than
the proposed approach is more describable than other approaches such as those employing
simpler set of <term> model parameters </term> than similar <term> phrase-based models </term>
document descriptors ( keywords ) </term> than single <term> words </term> are . This leads
achieved by using <term> semantic </term> rather than <term> syntactic categories </term> on the <term>
SMT models </term> to be significantly lower than that of all the dedicated <term> WSD models
<term> kanji-kana characters </term> is greater than that of <term> erroneous chains </term> . From
a more effective <term> CFG filter </term> than that of <term> LTAG </term> . We also investigate
trainable sentence planner </term> performs better than the <term> rule-based systems </term> and the