</term> and <term>key prediction</term>.

<term>Sentence planning</term> is a set of inter-related but distinct tasks, one of which is <term>sentence scoping</term>, i.e. the choice of <term>syntactic structure</term> for elementary speech acts and the decision of how to combine them into one or more <term>sentences</term>. In this paper, we present <term>SPoT</term>, a <term>sentence planner</term>, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible <term>sentence plans</term> for a given <term>text-plan</term> input. Second, the <term>sentence-plan-ranker (SPR)</term> ranks the list of <term>output sentence plans</term>, and then selects the <term>top-ranked plan</term>. We show that the trained <term>SPR</term> learns to select a <term>sentence plan</term> whose <term>rating</term> on average is only 5% worse than the <term>top human-ranked sentence plan</term>.
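The generate-and-rank design described above, in which an over-generating sentence-plan-generator (SPG) proposes candidates and a sentence-plan-ranker (SPR) selects the top-ranked one, can be sketched as follows. The plan representation, the random scoping, and the scoring function are illustrative assumptions, not the paper's actual features or trained model:

```python
import random

random.seed(0)  # make the example deterministic

def sentence_plan_generator(speech_acts, n=10):
    """Hypothetical SPG: randomly scope elementary speech acts
    into one or more sentences, over-generating candidates."""
    plans = []
    for _ in range(n):
        acts = list(speech_acts)
        random.shuffle(acts)
        k = random.randint(1, len(acts))  # random sentence boundary
        plans.append([acts[:k], acts[k:]] if k < len(acts) else [acts])
    return plans

def sentence_plan_ranker(plans, score):
    """Hypothetical SPR: rank the candidate list by a score and
    select the top-ranked plan."""
    return max(plans, key=score)

def score(plan):
    # toy stand-in for the trained ranker: prefer fewer sentences
    return -len(plan)

best = sentence_plan_ranker(
    sentence_plan_generator(["inform", "request", "confirm"]), score)
```

The point of the split is that the generator can stay very simple (even random) because all of the linguistic judgment is pushed into the ranker.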

In this paper, we experimentally evaluate a <term>trainable sentence planner</term> for a <term>spoken dialogue system</term> by eliciting subjective human judgments. In order to perform an exhaustive comparison, we also evaluate a hand-crafted <term>template-based generation component</term>, two <term>rule-based sentence planners</term>, and two <term>baseline sentence planners</term>. We show that the <term>trainable sentence planner</term> performs better than the <term>rule-based systems</term> and the baselines, and as well as the hand-crafted system.

We present an application of ambiguity packing and stochastic disambiguation techniques for <term>Lexical-Functional Grammars (LFG)</term>
to the domain of <term>sentence condensation</term>. Our system incorporates … Furthermore, we propose the use of standard parser evaluation methods for automatically evaluating the <term>summarization quality</term> of <term>sentence condensation systems</term>. An <term>experimental evaluation</term> …

It requires disjoint English phrases to be mapped to non-overlapping intervals in the <term>French sentence</term>. We evaluate the utility of this …
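The mapping constraint in the sentence above (disjoint English phrases must land on non-overlapping intervals of the French sentence) can be checked directly; the half-open [start, end) interval encoding below is an illustrative assumption:

```python
def intervals_non_overlapping(intervals):
    """Return True if the [start, end) intervals over French word
    positions are pairwise non-overlapping, i.e. the alignment
    constraint that disjoint English phrases map to disjoint
    French spans."""
    ordered = sorted(intervals)
    return all(prev_end <= start
               for (_, prev_end), (start, _) in zip(ordered, ordered[1:]))

# two English phrases mapped to French spans [0, 2) and [2, 5): allowed
ok = intervals_non_overlapping([(0, 2), (2, 5)])
# spans [0, 3) and [2, 5) overlap at position 2: ruled out
bad = intervals_non_overlapping([(0, 3), (2, 5)])
```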

We applied the proposed method to <term>question classification</term> and <term>sentence alignment tasks</term> to evaluate its performance as a similarity measure and a kernel function.
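For a similarity measure to double as a kernel function, as in the evaluation above, it must be an inner product in some feature space. A minimal illustrative stand-in (not the paper's proposed method) is binary bag-of-words overlap:

```python
def bow_kernel(s1, s2):
    """Overlap of word sets: the inner product of binary
    bag-of-words vectors, hence a valid kernel as well as a
    similarity measure. (Illustrative stand-in only.)"""
    return len(set(s1.lower().split()) & set(s2.lower().split()))

# usable as a similarity score between two sentences...
sim = bow_kernel("where was the victim found", "The victim was found today")
# ...or as an entry of the Gram matrix for a kernel method
# (e.g. an SVM for question classification)
```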

The stemming model is based on statistical machine translation and it uses an <term>English stemmer</term> and a <term>small (10K sentences) parallel corpus</term> as its sole <term>training resources</term>.

We consider the problem of <term>question-focused sentence retrieval</term> from complex news articles describing multi-event stories published over time.
… victims have been found?) Judges found <term>sentences</term> providing an <term>answer</term> to each <term>question</term>
. To address the <term>sentence retrieval problem</term>, we apply a <term>stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization.
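A stochastic, graph-based comparison of textual-unit importance, in the spirit described above, can be sketched as a power iteration over a row-normalized sentence-similarity graph. The similarity values and the damping factor below are illustrative assumptions, not the paper's actual formulation:

```python
def graph_rank(sim, damping=0.85, iters=50):
    """Score units by power iteration on a similarity graph:
    each unit's importance is recursively defined by the
    importance of the units it is similar to."""
    n = len(sim)
    trans = []
    for row in sim:
        total = sum(row)
        # row-normalize similarities into transition probabilities
        trans.append([v / total if total else 1.0 / n for v in row])
    r = [1.0 / n] * n  # start from uniform scores
    for _ in range(iters):
        r = [(1 - damping) / n +
             damping * sum(r[j] * trans[j][i] for j in range(n))
             for i in range(n)]
    return r

# toy similarity graph over four sentences; sentence 2 is similar
# to all the others, so it should come out most central
sim = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
scores = graph_rank(sim)
```

The scores form a probability distribution, so the ranking can be read off directly from the vector.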