tech,4-2-H01-1070,ak paper also proposes <term> rule-reduction algorithm </term> applying <term> mutual information </term>
tech,1-3-H01-1070,ak error-correction rules </term> . Our <term> algorithm </term> reported more than 99 % <term> accuracy
tech,3-2-P01-1008,ak We present an <term> unsupervised learning algorithm </term> for <term> identification of paraphrases
tech,31-2-P01-1047,ak sensitive logic </term> , and a <term> learning algorithm </term> from <term> structured data </term> (
tech,3-3-N03-1004,ak present our <term> multi-level answer resolution algorithm </term> that combines results from the <term>
tech,6-4-N03-1004,ak effectiveness of our <term> answer resolution algorithm </term> show a 35.0 % <term> relative improvement
tech,8-1-N03-1017,ak translation model </term> and <term> decoding algorithm </term> that enables us to evaluate and compare
tech,10-3-N03-2017,ak <term> constraint </term> in two different <term> algorithms </term> . The results show that it can provide
tech,17-2-P03-1051,ak uses it to bootstrap an <term> unsupervised algorithm </term> to build the <term> Arabic word segmenter
tech,1-3-P03-1051,ak unsegmented Arabic corpus </term> . The <term> algorithm </term> uses a <term> trigram language model
tech,9-5-P03-1051,ak accuracy </term> , we use an <term> unsupervised algorithm </term> for automatically acquiring new <term>
tech,9-7-P03-1051,ak state-of-the-art performance and the <term> algorithm </term> can be used for many <term> highly
tech,4-1-H05-1012,ak presents a <term> maximum entropy word alignment algorithm </term> for <term> Arabic-English </term> based
tech,3-5-H05-1012,ak translation tests </term> . Performance of the <term> algorithm </term> is contrasted with <term> human annotation
tech,20-3-H05-1101,ak lower-bound </term> for certain classes of <term> algorithms </term> that are currently used in the literature
tech,6-9-J05-1003,ak The article also introduces a new <term> algorithm </term> for the <term> boosting approach </term>
tech,8-10-J05-1003,ak significant efficiency gains for the new <term> algorithm </term> over the obvious implementation of
tech,3-6-P05-1067,ak introduce a <term> polynomial time decoding algorithm </term> for the <term> model </term> . We evaluate
tech,1-4-P05-1069,ak bigram features </term> . Our <term> training algorithm </term> can easily handle millions of <term>
tech,8-2-E06-1004,ak the last decade , a variety of <term> SMT algorithms </term> have been built and empirically tested