<term> information retrieval techniques </term> use a <term> histogram </term> of <term> keywords
component performance </term> . We describe our use of this approach in numerous fielded user
natural language </term> , current systems use <term> manual or semi-automatic methods </term>
</term> . The <term> model </term> is designed for use in <term> error correction </term> , with a
selection </term> . Furthermore , we propose the use of standard <term> parser evaluation methods
the <term> system output </term> due to the use of a <term> constraint-based parser/generator
demonstrates the following ideas : ( i ) explicit use of both preceding and following <term> tag
network representation </term> , ( ii ) broad use of <term> lexical features </term> , including
multiple consecutive words , ( iii ) effective use of <term> priors </term> in <term> conditional
<term> small-sized bilingual corpus </term> , we use an <term> out-of-domain bilingual corpus </term>
</term> . During <term> decoding </term> , we use a <term> block unigram model </term> and a <term>
</term> . Unlike conventional methods that use <term> hand-crafted rules </term> , the proposed
the <term> segmentation accuracy </term> , we use an <term> unsupervised algorithm </term> for
for that difference . In this paper , we use the <term> information redundancy </term> in
</term> in the input documents . Further , the use of multiple <term> machine translation systems
<term> reranking approaches </term> . We make use of a <term> conditional log-linear model </term>
</term> are described briefly , as well as the use of <term> ILIMP </term> in a modular <term> syntactic
establishes the equivalence between the standard use of <term> BLEU </term> in <term> word n-grams
at the <term> character level </term> . The use of <term> BLEU </term> at the <term> character
high <term> accuracy </term> of the model , the use of <term> smoothing </term> in an <term> unlexicalized