trainable sentence planner </term> performs better than the <term> rule-based systems </term> and the
that the system yields higher performance than a <term> baseline </term> on all three aspects
SMT models </term> to be significantly lower than that of all the dedicated <term> WSD models
the proposed approach is more describable than other approaches such as those employing
theoretical accounts actually have worse coverage than accounts based on processing . Finally
independently trained models </term> rather than the usual pooling of all the <term> speech
<term> edges </term> adjacent to it , rather than all such <term> edges </term> as in conventional
. Our <term> algorithm </term> reported more than 99 % <term> accuracy </term> in both <term> language
</term> with <term> in-degree </term> greater than one and <term> out-degree </term> greater than
a more effective <term> CFG filter </term> than that of <term> LTAG </term> . We also investigate
other markers </term> , which includes <term> other ( than ) </term> , <term> such ( as ) </term> , and <term>
becomes a crucial issue recently . Rather than using <term> length-based or translation-based
has been considered to be more complicated than <term> analysis </term> and <term> generation
rating </term> on average is only 5 % worse than the <term> top human-ranked sentence plan
non-native language essays </term> in less than 100 <term> words </term> . Even more illuminating
<term> kanji-kana characters </term> is greater than that of <term> erroneous chains </term> . From
document descriptors ( keywords ) </term> than single <term> words </term> are . This leads
achieved by using <term> semantic </term> rather than <term> syntactic categories </term> on the <term>
significantly better <term> translation quality </term> than the <term> statistical machine translation
simpler set of <term> model parameters </term> than similar <term> phrase-based models </term>