adaptive to <term> individual users </term> serve as good guidance for <term> novice users </term>
sentences ) <term> parallel corpus </term> as its sole <term> training resource </term>
semantically annotated corpus </term> resource as a reliable basis for the large-scale <term>
a corpus-based sample and formulate them as <term> probabilistic Horn clauses </term> .
by no one ( e.g. , I walked is to to walk as I laughed is to to laugh , noted I walked
be valid on the level of <term> form </term> as well as on the level of <term> meaning </term> , we
aggregation system </term> using each author 's text as a coherent <term> corpus </term> . Our approach
. While <term> sentence extraction </term> as an approach to <term> summarization </term>
<term> multilingually aligned wordnets </term> as <term> BalkaNet </term> and <term> EuroWordNet
<term> unstructured data sources </term> , such as the <term> Web </term> or <term> newswire documents
Processing ( NLP ) </term> applications , such as <term> Word Sense Disambiguation ( WSD ) </term>
word alignment techniques </term> is shown as well as improvement on several <term> machine translation
presented that deals with such <term> phrases </term> , as well as a <term> training method </term> based on the
maximization of <term> translation accuracy </term> , as measured with the <term> NIST evaluation
</term> . These <term> models </term> can be viewed as pairs of <term> probabilistic context-free
Translation ( MT ) systems </term> , such as <term> BLEU </term> or <term> NIST </term> , are
a number of years and is currently used as the basis of <term> CMU 's SMT system </term>