We consider the problem of question-focused sentence retrieval from complex news articles describing multi-event stories published over time. Annotators generated a list of questions central to understanding each story in our corpus. Because of the dynamic nature of the stories, many questions are time-sensitive (e.g., "How many victims have been found?"). Judges found sentences providing an answer to each question.
To address the sentence retrieval problem, we apply a stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization. Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap.
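To make the two approaches being compared concrete, the sketch below pairs an IDF-weighted word-overlap baseline with a topic-sensitive, LexRank-style random walk over a sentence-similarity graph. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the damping parameter, the use of the sentence collection itself for IDF, and the cosine edge weights are all choices made here for illustration.

# Hypothetical sketch: IDF-weighted word-overlap baseline and a
# topic-sensitive (question-biased) random walk over a sentence graph.
import math
from collections import Counter

def tokenize(text):
    return [w.lower() for w in text.split()]

def idf_table(sentences):
    # IDF computed over the sentence collection itself (an assumption).
    n = len(sentences)
    df = Counter()
    for s in sentences:
        df.update(set(tokenize(s)))
    return {w: math.log(n / df[w]) for w in df}

def idf_overlap(question, sentence, idf):
    # Baseline relevance: sum of IDF weights of words shared with the question.
    q, s = set(tokenize(question)), set(tokenize(sentence))
    return sum(idf.get(w, 0.0) for w in q & s)

def cosine_idf(a, b, idf):
    # IDF-weighted cosine similarity between two sentences (assumed edge weight).
    ca, cb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(ca[w] * cb[w] * idf.get(w, 0.0) ** 2 for w in ca.keys() & cb.keys())
    na = math.sqrt(sum((ca[w] * idf.get(w, 0.0)) ** 2 for w in ca))
    nb = math.sqrt(sum((cb[w] * idf.get(w, 0.0)) ** 2 for w in cb))
    return dot / (na * nb) if na and nb else 0.0

def topic_sensitive_rank(question, sentences, d=0.8, iters=50):
    # Biased random walk: with probability (1 - d) jump to sentences in
    # proportion to their relevance to the question; with probability d,
    # follow inter-sentence similarity edges. Parameter values are illustrative.
    idf = idf_table(sentences)
    rel = [idf_overlap(question, s, idf) for s in sentences]
    z = sum(rel) or 1.0
    bias = [r / z for r in rel]
    sim = [[cosine_idf(a, b, idf) for b in sentences] for a in sentences]
    row_sums = [sum(row) or 1.0 for row in sim]
    scores = [1.0 / len(sentences)] * len(sentences)
    for _ in range(iters):
        scores = [
            (1 - d) * bias[j]
            + d * sum(scores[i] * sim[i][j] / row_sums[i] for i in range(len(sentences)))
            for j in range(len(sentences))
        ]
    return scores

Under this sketch, sentences are ranked by the stationary scores of the biased walk, while the baseline ranks them by idf_overlap alone.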
In our experiments, the method achieves a TRDR score that is significantly higher than that of the baseline.
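One common reading of TRDR (Total Reciprocal Document Rank) is the sum, for a given question, of the reciprocal ranks of the units judged relevant. A minimal sketch under that assumption, using 1-based ranks and a set of relevant sentence ids (both assumptions for illustration):

def trdr(ranked_sentences, relevant):
    # Sum of reciprocal ranks (1-based) of sentences judged to answer the
    # question; higher is better. E.g., relevant hits at ranks 1 and 4
    # give 1/1 + 1/4 = 1.25.
    return sum(1.0 / (rank + 1)
               for rank, sid in enumerate(ranked_sentences)
               if sid in relevant)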