Annotated snippets grouped by source document ID (overlapping windows merged; elisions marked with …):

N01-1003:
- … <term> … </term> and <term> key prediction </term> . <term> Sentence planning </term> is a set of inter-related but distinct tasks , one of which is <term> sentence scoping </term> , i.e. the choice of <term> … </term> … how to combine them into one or more <term> sentences </term> . In this paper , we present <term> SPoT </term> , a <term> sentence planner </term> , and a new methodology for …
- … potentially large list of possible <term> sentence plans </term> for a given <term> text-plan … </term>
- … <term> … ( SPR ) </term> ranks the list of output <term> sentence plans </term> , and then selects the top-ranked …
- … <term> SPR </term> learns to select a <term> sentence plan </term> whose <term> rating </term> on average … 5 % worse than the <term> top human-ranked sentence plan </term> . In this paper , we compare …

P01-1056:
- … experimentally evaluate a <term> trainable sentence planner </term> for a <term> spoken dialogue … </term>
- … <term> … generation component </term> , two <term> rule-based sentence planners </term> , and two <term> baseline sentence planners </term> . We show that the <term> trainable sentence planner </term> performs better than the <term> … </term>

N03-1026:
- … <term> … Grammars ( LFG ) </term> to the domain of <term> sentence condensation </term> . Our <term> system </term> …
- … <term> summarization </term> quality of <term> sentence condensation systems </term> . An <term> experimental … </term>

N03-2017:
- … non-overlapping intervals in the <term> French sentence </term> . We evaluate the utility of this …

P03-1005:
- … <term> question classification </term> and <term> sentence alignment tasks </term> to evaluate its performance …

(document ID missing):
- … <term> English stemmer </term> and a small ( 10K sentences ) <term> parallel corpus </term> as its sole …

C04-1106:
- … quite dubious about <term> analogies between sentences </term> : they would not be enough numerous …
- … of <term> analogies </term> among the <term> sentences </term> that it contains . We give two estimates …

C04-1128:
- … our approach is successful . While <term> sentence extraction </term> as an approach to <term> … </term>
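The snippets above mark annotated terms inline with `<term> … </term>`. As a minimal sketch (assuming only this inline convention; the function name and regex are illustrative, not part of the original data), the complete term spans in a snippet can be pulled out like this:

```python
import re

# Match <term> ... </term> spans in a whitespace-tokenized snippet.
# Tags truncated at a window edge (an unmatched <term> or </term>)
# simply never match, so partial spans are skipped.
TERM_RE = re.compile(r"<term>\s*(.*?)\s*</term>")

def extract_terms(snippet: str) -> list[str]:
    """Return the text of every complete <term>...</term> span."""
    return TERM_RE.findall(snippet)

line = ("experimentally evaluate a <term> trainable sentence planner </term> "
        "for a <term> spoken dialogue")
print(extract_terms(line))  # only the complete span is returned
```

Note the non-greedy `.*?`, which keeps each match inside a single term rather than spanning from the first `<term>` to the last `</term>` on the line.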