this basic idea are being discussed and/or evaluated : Similar to activities one can define <term>
</term> . In this paper we experimentally evaluate a <term> trainable sentence planner </term>
perform an exhaustive comparison , we also evaluate a <term> hand-crafted template-based generation
classification accuracy </term> of the method is evaluated on three different <term> spoken language
and/or answer levels </term> . Experiments evaluating the effectiveness of our <term> answer resolution
decoding algorithm </term> that enables us to evaluate and compare several previously proposed
evaluation methods </term> for automatically evaluating the <term> summarization quality </term> of
intervals in the <term> French sentence </term> . We evaluate the utility of this <term> constraint </term>
and <term> sentence alignment tasks </term> to evaluate its performance as a <term> similarity measure
most promising <term> features </term> . We evaluate the system on twenty <term> Switchboard dialogues
supervised learning </term> . In this paper , we evaluate an approach to automatically acquire <term>
issue of <term> domain dependence </term> in evaluating <term> WSD programs </term> . We describe the
paradigm </term> . We report experiences and evaluate the <term> annotated data </term> from the
called <term> POURPRE </term> , for automatically evaluating answers to <term> definition questions </term>
increasingly common speculative claim , by evaluating a representative <term> Chinese-to-English
Much effort has been put in designing and evaluating dedicated <term> word sense disambiguation
</term> of <term> SMT models </term> has never been evaluated and compared with that of the dedicated
producing promising results , which we have evaluated according to <term> cost-efficiency </term>
be inferred . A <term> paraphrase </term> is evaluated for whether its <term> sentences </term> are
non-parallel newspaper corpora </term> . We evaluate the quality of the extracted data by showing