We consider the problem of question-focused sentence retrieval from complex news articles describing multi-event stories published over time. Annotators generated a list of questions central to understanding each story in our corpus. Because of the dynamic nature of the stories, many questions are time-sensitive (e.g., "How many victims have been found?"). Judges found sentences providing an answer to each question. To address the sentence retrieval problem, we apply a stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization. Currently, we present a topic-sensitive version of our method and hypothesize
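The stochastic, graph-based method referred to above is in the LexRank family: sentences form a graph weighted by pairwise similarity, and a sentence's importance is its mass in the stationary distribution of a random walk over that graph. A minimal sketch under that assumption (the function names, the bag-of-words cosine similarity, and the threshold and damping values are illustrative choices, not the paper's exact configuration):

```python
import math
from collections import Counter

def cosine(a, b):
    # a, b: Counter term-frequency vectors
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def lexrank(sentences, threshold=0.1, damping=0.15, iters=50):
    """Rank sentences by the stationary distribution of a random walk
    on their similarity graph (LexRank-style power iteration)."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    # Thresholded similarity graph; self-similarity keeps every row nonzero.
    adj = [[1.0 if cosine(vecs[i], vecs[j]) > threshold else 0.0
            for j in range(n)] for i in range(n)]
    rows = [sum(r) or 1.0 for r in adj]
    p = [1.0 / n] * n
    for _ in range(iters):
        # Standard damped power iteration: uniform teleport + graph step.
        p = [damping / n
             + (1 - damping) * sum(p[i] * adj[i][j] / rows[i] for i in range(n))
             for j in range(n)]
    return p
```

The topic-sensitive variant the abstract announces would additionally bias the walk toward question-relevant sentences (e.g., through the teleport distribution); the sketch above shows only the generic, query-independent version.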
that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap. In our experiments, the method achieves a TRDR score that is significantly higher than that of the baseline.
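The baseline's IDF-weighted word overlap admits a simple formulation: sum the IDF weights of the words a sentence shares with the question, so that rare shared words count for more than common ones. A sketch, assuming this formulation (the paper's exact weighting or normalization may differ, and all names here are mine):

```python
import math
from collections import Counter

def idf_table(docs):
    """Build an IDF table from a list of token lists: idf(t) = log(N / df(t))."""
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(d))  # document frequency: one count per document
    return {t: math.log(n / df[t]) for t in df}

def idf_overlap(question, sentence, idf):
    """Score a sentence against a question by summing the IDF weights
    of the word types they share."""
    shared = set(question) & set(sentence)
    return sum(idf.get(t, 0.0) for t in shared)
```

Under this scheme a sentence sharing only stopword-like, high-frequency terms with the question scores near zero, while one sharing rare content words scores high, which is what makes the baseline competitive for question-focused retrieval.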