We consider the problem of
<term>
question-focused sentence retrieval
</term>
from complex news articles describing multi-event stories published over time .
<term>
Annotators
</term>
generated a list of
<term>
questions
</term>
central to understanding each story in our
<term>
corpus
</term>
.
Because of the dynamic nature of the stories , many
<term>
questions
</term>
are time-sensitive ( e.g. How many victims have been found ? ) .
Judges found
<term>
sentences
</term>
providing an
<term>
answer
</term>
to each
<term>
question
</term>
.
To address the
<term>
sentence retrieval problem
</term>
, we apply a
<term>
stochastic , graph-based method
</term>
for comparing the
<term>
relative importance
</term>
of the
<term>
textual units
</term>
, which was previously used successfully for
<term>
generic summarization
</term>
.
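The stochastic, graph-based method described here belongs to the LexRank family: sentences are nodes in a similarity graph, and a sentence's importance is its weight in the stationary distribution of a random walk over that graph; the topic-sensitive variant biases the walk toward sentences relevant to the question. The sketch below is a minimal hypothetical version of that idea, not the authors' implementation; the similarity matrix, bias vector, and damping value are all assumed inputs.

```python
import numpy as np

def topic_sensitive_rank(sim, bias, d=0.15, iters=50):
    """Power iteration for a biased ('topic-sensitive') random walk.

    sim  : (n, n) non-negative sentence-similarity matrix
    bias : (n,) non-negative relevance of each sentence to the question
    d    : probability of jumping to a question-relevant sentence
    """
    n = len(bias)
    # Row-normalize similarities into transition probabilities.
    M = sim / sim.sum(axis=1, keepdims=True)
    b = bias / bias.sum()
    p = np.full(n, 1.0 / n)  # start from the uniform distribution
    for _ in range(iters):
        # With prob. d jump according to the question bias,
        # otherwise follow a similarity edge.
        p = d * b + (1 - d) * M.T @ p
    return p
```

With `d = 0` this reduces to the generic (question-independent) centrality used for summarization; the bias term is what makes the ranking question-focused.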
Currently , we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive
<term>
baseline
</term>
, which compares the
<term>
similarity
</term>
of each
<term>
sentence
</term>
to the
<term>
input question
</term>
via
<term>
IDF-weighted word overlap
</term>
.
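One natural reading of this baseline is that a sentence's score is the sum of the IDF weights of the words it shares with the question, so rare shared words count for more than common ones. A minimal sketch under that reading (the add-one smoothing and the treatment of documents as token lists are assumptions, not details taken from the paper):

```python
import math
from collections import Counter

def idf_weights(docs):
    """IDF from a list of tokenized documents (add-one smoothed)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency counts each doc once
    return {w: math.log((n + 1) / (df[w] + 1)) for w in df}

def overlap_score(question, sentence, idf):
    """Sum the IDF weights of the words shared by question and sentence."""
    shared = set(question) & set(sentence)
    return sum(idf.get(w, 0.0) for w in shared)
```

Because the score is driven by exact word overlap, this baseline is strong when answers reuse the question's wording but blind to paraphrase, which is part of what the graph-based method is hypothesized to improve on.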
#5868Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitivebaseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap.
measure(ment),23-6-H05-1115,ak
Currently , we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive
<term>
baseline
</term>
, which compares the
<term>
similarity
</term>
of each
<term>
sentence
</term>
to the
<term>
input question
</term>
via
<term>
IDF-weighted word overlap
</term>
.
#5873Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares thesimilarity of each sentence to the input question via IDF-weighted word overlap.
other,26-6-H05-1115,ak
Currently , we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive
<term>
baseline
</term>
, which compares the
<term>
similarity
</term>
of each
<term>
sentence
</term>
to the
<term>
input question
</term>
via
<term>
IDF-weighted word overlap
</term>
.
#5876Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of eachsentence to the input question via IDF-weighted word overlap.
other,29-6-H05-1115,ak
Currently , we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive
<term>
baseline
</term>
, which compares the
<term>
similarity
</term>
of each
<term>
sentence
</term>
to the
<term>
input question
</term>
via
<term>
IDF-weighted word overlap
</term>
.
#5879Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to theinput question via IDF-weighted word overlap.
measure(ment),32-6-H05-1115,ak
Currently , we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive
<term>
baseline
</term>
, which compares the
<term>
similarity
</term>
of each
<term>
sentence
</term>
to the
<term>
input question
</term>
via
<term>
IDF-weighted word overlap
</term>
.
#5882Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question viaIDF-weighted word overlap.
measure(ment),8-7-H05-1115,ak
In our experiments , the method achieves a
<term>
TRDR score
</term>
that is significantly higher than that of the
<term>
baseline
</term>
.
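TRDR (Total Reciprocal Document Rank) sums the reciprocal rank of every relevant item in a ranked retrieval list, so a system is rewarded for placing each answer-bearing sentence high. A sketch of that metric, assuming 1-indexed ranks:

```python
def trdr(ranked, relevant):
    """Total Reciprocal Document Rank: sum of 1/rank over every
    relevant item in the ranked list (ranks are 1-indexed)."""
    return sum(1.0 / (i + 1) for i, item in enumerate(ranked)
               if item in relevant)
```

Unlike plain reciprocal rank, which stops at the first hit, TRDR credits all answer-bearing sentences, which suits this task since a time-sensitive question may be answered by several sentences across the story.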