tech,15-1-N01-1003,bq but distinct tasks , one of which is <term> sentence scoping </term> , i.e. the choice of <term>
tech,9-2-N01-1003,bq paper , we present <term> SPoT </term> , a <term> sentence planner </term> , and a new methodology for
other,18-4-N01-1003,bq potentially large list of possible <term> sentence plans </term> for a given <term> text-plan
other,12-5-N01-1003,bq SPR ) </term> ranks the list of output <term> sentence plans </term> , and then selects the top-ranked
other,10-7-N01-1003,bq <term> SPR </term> learns to select a <term> sentence plan </term> whose <term> rating </term> on average
other,23-7-N01-1003,bq 5 % worse than the <term> top human-ranked sentence plan </term> . In this paper , we compare
tech,7-2-P01-1056,bq experimentally evaluate a <term> trainable sentence planner </term> for a <term> spoken dialogue
tech,18-3-P01-1056,bq generation component </term> , two <term> rule-based sentence planners </term> , and two <term> baseline
tech,24-3-P01-1056,bq sentence planners </term> , and two <term> baseline sentence planners </term> . We show that the <term>
tech,4-4-P01-1056,bq </term> . We show that the <term> trainable sentence planner </term> performs better than the <term>
other,21-1-N03-1026,bq Grammars ( LFG ) </term> to the domain of <term> sentence condensation </term> . Our <term> system </term>
tech,18-3-N03-1026,bq <term> summarization </term> quality of <term> sentence condensation systems </term> . An <term> experimental
other,13-2-N03-2017,bq non-overlapping intervals in the <term> French sentence </term> . We evaluate the utility of this
tech,9-3-P03-1005,bq <term> question classification </term> and <term> sentence alignment tasks </term> to evaluate its performance
tech,1-1-C04-1128,bq our approach is successful . While <term> sentence extraction </term> as an approach to <term>
tech,38-1-C04-1128,bq relation to one made previously , <term> sentence extraction </term> may not capture the necessary
other,28-3-I05-5003,bq matches and non-matches </term> in the <term> sentence </term> . Our results show that <term> MT evaluation
other,12-2-J05-1003,bq candidate parses </term> for each input <term> sentence </term> , with associated <term> probabilities
other,14-3-P05-1034,bq dependency parse </term> onto the target <term> sentence </term> , extract <term> dependency treelet
tech,13-1-P05-3025,bq the process </term> of <term> translating a sentence </term> . The <term> method </term> allows a <term>