| Left context | Keyword | Right context | ID and full sentence |
|---|---|---|---|
| </term> for this purpose . In this paper we | show | how two standard outputs from <term> information | #278 In this paper we show how two standard outputs from information extraction (IE) systems - named entity annotations and scenario templates - can be used to enhance access to text collections via a standard text browser. |
| context of <term> dialog systems </term> . We | show | how research in <term> generation </term> can | #996 We show how research in generation can be adapted to dialog systems, and how the high cost of hand-crafting knowledge-based generation systems can be overcome by employing machine learning techniques. |
| provide experimental results that clearly | show | the need for a <term> dynamic language model | #1137 We provide experimental results that clearly show the need for a dynamic language model combination to improve the performance further. |
| learned from <term> training data </term> . We | show | that the trained <term> SPR </term> learns | #1434 We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
| on <term> queries </term> containing them . I | show | that the <term> performance </term> of a <term> | #1873 I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine's operational semantics. |
| <term> baseline sentence planners </term> . We | show | that the <term> trainable sentence planner | #2101 We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system. |
| <term> answer resolution algorithm </term> | show | a 35.0 % <term> relative improvement </term> | #2406 Experiments evaluating the effectiveness of our answer resolution algorithm show a 35.0% relative improvement over our baseline system in the number of questions correctly answered, and a 32.8% improvement according to the average precision metric. |
| speech </term> are limited . In this paper , we | show | how <term> training data </term> can be supplemented | #3034 In this paper, we show how training data can be supplemented with text from the web filtered to match the style and/or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams. |
| different <term> algorithms </term> . The results | show | that it can provide a significant improvement | #3274 The results show that it can provide a significant improvement in alignment quality. |
| underlying <term> word alignment </term> . We | show | experimental results on <term> block selection | #3463 We show experimental results on block selection criteria based on unigram counts and phrase length. |
| twenty <term> Switchboard dialogues </term> and | show | that it compares well to Byron 's ( 2002 | #4028 We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron's (2002) manually tuned system. |
| English-Chinese translation relations </term> . We | show | that this model of <term> parallel wordnet | #7153 We show that this model of parallel wordnet building is effective and achieves higher precision in LSR prediction. |
| in the <term> sentence </term> . Our results | show | that <term> MT evaluation techniques </term> | #7449 Our results show that MT evaluation techniques are able to produce useful features for paraphrase classification and to a lesser extent entailment. |
| the <term> parsing data </term> . Experiments | show | significant efficiency gains for the new | #8253 Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. |
| statistical machine translation system </term> . We also | show | that a good-quality <term> MT system </term> | #8431 We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. |
| <term> ranking learning problem </term> and | show | that the proposed <term> discourse representation | #8637 We view coherence assessment as a ranking learning problem and show that the proposed discourse representation supports the effective learning of a ranking function. |
| suffix array-based data structure </term> . We | show | how sampling can be used to reduce the <term> | #8817 We show how sampling can be used to reduce the retrieval time by orders of magnitude with no loss in translation quality. |
| describe an efficient <term> decoder </term> and | show | that using these <term> tree-based models | #8917 We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser. |
| as <term> features </term> . Our experiments | show | that <term> log-linear models </term> significantly | #9673 Our experiments show that log-linear models significantly outperform IBM translation models. |
| word alignment </term> . Experimental results | show | that our approach improves <term> domain-specific | #9756 Experimental results show that our approach improves domain-specific word alignment in terms of both precision and recall, achieving a relative error rate reduction of 6.56% as compared with the state-of-the-art technologies. |