lr,8-6-N01-1003,bq |
rules
</term>
automatically learned from
<term>
|
training data
|
</term>
. We show that the trained
<term>
SPR
|
#1430
The SPR uses ranking rules automatically learned from training data. |
measure(ment),13-7-N01-1003,bq |
select a
<term>
sentence plan
</term>
whose
<term>
|
rating
|
</term>
on average is only 5 % worse than
|
#1446
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
model,3-6-N01-1003,bq |
plan
</term>
. The
<term>
SPR
</term>
uses
<term>
|
ranking rules
|
</term>
automatically learned from
<term>
training
|
#1425
The SPR uses ranking rules automatically learned from training data. |
other,10-7-N01-1003,bq |
<term>
SPR
</term>
learns to select a
<term>
|
sentence plan
|
</term>
whose
<term>
rating
</term>
on average
|
#1443
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
other,12-5-N01-1003,bq |
SPR )
</term>
ranks the list of output
<term>
|
sentence plans
|
</term>
, and then selects the top-ranked
|
#1412
Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
other,18-4-N01-1003,bq |
potentially large list of possible
<term>
|
sentence plans
|
</term>
for a given
<term>
text-plan input
</term>
|
#1392
First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. |
other,20-5-N01-1003,bq |
</term>
, and then selects the top-ranked
<term>
|
plan
|
</term>
. The
<term>
SPR
</term>
uses
<term>
ranking
|
#1420
Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
other,22-1-N01-1003,bq |
scoping
</term>
, i.e. the choice of
<term>
|
syntactic structure
|
</term>
for elementary
<term>
speech acts
</term>
|
#1315
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
other,23-4-N01-1003,bq |
<term>
sentence plans
</term>
for a given
<term>
|
text-plan input
|
</term>
. Second , the
<term>
sentence-plan-ranker
|
#1397
First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. |
other,23-7-N01-1003,bq |
average is only 5 % worse than the
<term>
|
top human-ranked sentence plan
|
</term>
. In this paper , we compare the
|
#1456
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
other,24-2-N01-1003,bq |
training
<term>
SPoT
</term>
on the basis of
<term>
|
feedback
|
</term>
provided by
<term>
human judges
</term>
|
#1359
In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
other,26-1-N01-1003,bq |
syntactic structure
</term>
for elementary
<term>
|
speech acts
|
</term>
and the decision of how to combine
|
#1319
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
other,27-2-N01-1003,bq |
of
<term>
feedback
</term>
provided by
<term>
|
human judges
|
</term>
. We reconceptualize the task into
|
#1362
In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
other,40-1-N01-1003,bq |
how to combine them into one or more
<term>
|
sentences
|
</term>
. In this paper , we present
<term>
|
#1333
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
tech,0-1-N01-1003,bq |
</term>
and
<term>
key prediction
</term>
.
<term>
|
Sentence planning
|
</term>
is a set of inter-related but distinct
|
#1293
Our algorithm reported more than 99% accuracy in both language identification and key prediction. Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
tech,1-6-N01-1003,bq |
the top-ranked
<term>
plan
</term>
. The
<term>
|
SPR
|
</term>
uses
<term>
ranking rules
</term>
automatically
|
#1423
The SPR uses ranking rules automatically learned from training data. |
tech,15-1-N01-1003,bq |
but distinct tasks , one of which is
<term>
|
sentence scoping
|
</term>
, i.e. the choice of
<term>
syntactic
|
#1308
Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
tech,3-5-N01-1003,bq |
text-plan input
</term>
. Second , the
<term>
|
sentence-plan-ranker ( SPR )
|
</term>
ranks the list of output
<term>
sentence
|
#1403
Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
tech,5-7-N01-1003,bq |
data
</term>
. We show that the trained
<term>
|
SPR
|
</term>
learns to select a
<term>
sentence
|
#1438
We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
tech,6-4-N01-1003,bq |
distinct phases . First , a very simple ,
<term>
|
randomized sentence-plan-generator ( SPG )
|
</term>
generates a potentially large list
|
#1380
First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. |