statistical machine translation ( MT ) </term> , to understand the <term> model </term> 's strengths
Phrasal Lexicon ( DHPL ) </term> [ Zernik88 ] , to facilitate <term> language acquisition </term>
confines of <term> syntax </term> , for instance , to the task of <term> semantic interpretation
<term> machine translation </term> , that is , to make decisions on the basis of <term> translation
</term> of <term> human language learners </term> , to the <term> output </term> of <term> machine translation
be able , after attending this workshop , to set out building an <term> SMT system </term>
laughed is to to laugh , noted I walked : to walk : : I laughed : to laugh ) . But <term> computational linguists
Lexical-Functional Grammars ( LFG ) </term> to the domain of <term> sentence condensation
methods ( BLEU , NIST , WER and PER ) </term> to building <term> classifiers </term> to predict
machine-readable dictionaries ( MRD 's ) </term> to create a <term> broad coverage lexicon </term>
Unification Categorial Grammar ( UCG ) </term> to the framework of <term> Isomorphic Grammars
disambiguation </term> is raised from 46.0 % to 60.62 % by using this novel approach . <term>
<term> parsing accuracy </term> rate from 60 % to 75 % , a 37 % reduction in error . We discuss
terms of both simplicity and efficiency — to work on <term> feature selection methods </term>
extraction tasks </term> because of its ability to capture arbitrary , overlapping <term> features
copying </term> combined with its ability to handle <term> cyclic structures </term> without
demonstrate the <term> model </term> 's ability to significantly reduce <term> character and
quasi-destructive scheme 's ability </term> to avoid <term> over copying </term> and <term>
</term> , were compared in terms of the ability to represent two kinds of <term> similarity </term>