tech,0-1-N01-1003,ak </term> and <term> key prediction </term> . <term> Sentence planning </term> is a set of inter-related
tech,15-1-N01-1003,ak but distinct tasks , one of which is <term> sentence scoping </term> , i.e. the choice of <term>
other,40-1-N01-1003,ak how to combine them into one or more <term> sentences </term> . In this paper , we present <term>
tech,9-2-N01-1003,ak paper , we present <term> SPoT </term> , a <term> sentence planner </term> , and a new methodology for
other,18-4-N01-1003,ak potentially large list of possible <term> sentence plans </term> for a given <term> text-plan
other,11-5-N01-1003,ak SPR ) </term> ranks the list of <term> output sentence plans </term> , and then selects the <term>
other,10-7-N01-1003,ak <term> SPR </term> learns to select a <term> sentence plan </term> whose <term> rating </term> on average
other,23-7-N01-1003,ak 5 % worse than the <term> top human-ranked sentence plan </term> . In this paper , we compare
tech,7-2-P01-1056,ak experimentally evaluate a <term> trainable sentence planner </term> for a <term> spoken dialogue
tech,18-3-P01-1056,ak generation component </term> , two <term> rule-based sentence planners </term> , and two <term> baseline
tech,24-3-P01-1056,ak sentence planners </term> , and two <term> baseline sentence planners </term> . We show that the <term>
tech,4-4-P01-1056,ak </term> . We show that the <term> trainable sentence planner </term> performs better than the <term>
tech,21-1-N03-1026,ak Grammars ( LFG ) </term> to the domain of <term> sentence condensation </term> . Our system incorporates
tech,18-3-N03-1026,ak <term> summarization quality </term> of <term> sentence condensation systems </term> . An <term> experimental
other,13-2-N03-2017,ak non-overlapping intervals in the <term> French sentence </term> . We evaluate the utility of this
tech,9-3-P03-1005,ak <term> question classification </term> and <term> sentence alignment tasks </term> to evaluate its performance
lr,17-2-P03-1050,ak English stemmer </term> and a <term> small ( 10K sentences ) parallel corpus </term> as its sole <term>
tech,5-1-H05-1115,ak consider the problem of <term> question-focused sentence retrieval </term> from complex news articles
other,2-4-H05-1115,ak victims have been found ? ) Judges found <term> sentences </term> providing an <term> answer </term> to
other,3-5-H05-1115,ak <term> question </term> . To address the <term> sentence retrieval problem </term> , we apply a <term>