other,24-2-N01-1003,bq | training <term> SPoT </term> on the basis of <term> | feedback | </term> provided by <term> human judges </term> | #1359 In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
other,27-2-N01-1003,bq | of <term> feedback </term> provided by <term> | human judges | </term> . We reconceptualize the task into | #1362 In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
other,20-5-N01-1003,bq | </term> , and then selects the top-ranked <term> | plan | </term> . The <term> SPR </term> uses <term> ranking | #1420 Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
tech,6-4-N01-1003,bq | distinct phases . First , a very simple , <term> | randomized sentence-plan-generator ( SPG ) | </term> generates a potentially large list | #1380 First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. |
model,3-6-N01-1003,bq | plan </term> . The <term> SPR </term> uses <term> | ranking rules | </term> automatically learned from <term> training | #1425 The SPR uses ranking rules automatically learned from training data. |
measure(ment),13-7-N01-1003,bq | select a <term> sentence plan </term> whose <term> | rating | </term> on average is only 5 % worse than | #1446 We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
other,10-7-N01-1003,bq | <term> SPR </term> learns to select a <term> | sentence plan | </term> whose <term> rating </term> on average | #1443 We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
tech,9-2-N01-1003,bq | paper , we present <term> SPoT </term> , a <term> | sentence planner | </term> , and a new methodology for automatically | #1344 In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
tech,0-1-N01-1003,bq | </term> and <term> key prediction </term> . <term> | Sentence planning | </term> is a set of inter-related but distinct | #1293 Our algorithm reported more than 99% accuracy in both language identification and key prediction. Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
other,18-4-N01-1003,bq | potentially large list of possible <term> | sentence plans | </term> for a given <term> text-plan input </term> | #1392 First, a very simple, randomized sentence-plan-generator (SPG) generates a potentially large list of possible sentence plans for a given text-plan input. |
other,12-5-N01-1003,bq | SPR ) </term> ranks the list of output <term> | sentence plans | </term> , and then selects the top-ranked | #1412 Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
tech,15-1-N01-1003,bq | but distinct tasks , one of which is <term> | sentence scoping | </term> , i.e. the choice of <term> syntactic | #1308 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
tech,3-5-N01-1003,bq | text-plan input </term> . Second , the <term> | sentence-plan-ranker ( SPR ) | </term> ranks the list of output <term> sentence | #1403 Second, the sentence-plan-ranker (SPR) ranks the list of output sentence plans, and then selects the top-ranked plan. |
other,40-1-N01-1003,bq | how to combine them into one or more <term> | sentences | </term> . In this paper , we present <term> | #1333 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
other,26-1-N01-1003,bq | syntactic structure </term> for elementary <term> | speech acts | </term> and the decision of how to combine | #1319 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
tool,6-2-N01-1003,bq | </term> . In this paper , we present <term> | SPoT | </term> , a <term> sentence planner </term> , | #1341 In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
tool,19-2-N01-1003,bq | methodology for automatically training <term> | SPoT | </term> on the basis of <term> feedback </term> | #1354 In this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. |
tech,1-6-N01-1003,bq | the top-ranked <term> plan </term> . The <term> | SPR | </term> uses <term> ranking rules </term> automatically | #1423 The SPR uses ranking rules automatically learned from training data. |
tech,5-7-N01-1003,bq | data </term> . We show that the trained <term> | SPR | </term> learns to select a <term> sentence | #1438 We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. |
other,22-1-N01-1003,bq | scoping </term> , i.e. the choice of <term> | syntactic structure | </term> for elementary <term> speech acts </term> | #1315 Sentence planning is a set of inter-related but distinct tasks, one of which is sentence scoping, i.e. the choice of syntactic structure for elementary speech acts and the decision of how to combine them into one or more sentences. |
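Each record above follows the same layout: an annotation label, a header triple (which appears to be token offset, sentence number, document ID), the context window with the focal term set off by inner pipes inside its <term> markup, and the numbered source sentence. Below is a minimal parsing sketch under that inferred layout; the field names and the reading of the header triple are assumptions drawn from the records, not documented anywhere in the data.

import re

# Sketch of a parser for one single-line record. The header fields
# (label, token offset, sentence number, document ID) are inferred
# from the records above -- an assumption, not a documented schema.
RECORD = re.compile(
    r"^(?P<label>[\w()-]+),"                         # e.g. tech, tool, measure(ment)
    r"(?P<offset>\d+)-(?P<sent>\d+)-(?P<doc>[\w-]+),bq \| "
    r"(?P<context>.*) \| "                           # context window with <term> markup
    r"#(?P<id>\d+) (?P<sentence>.*) \|$"             # numbered source sentence
)

def parse_record(line):
    """Split one record into header fields, context, and source sentence."""
    m = RECORD.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized record: {line[:40]!r}")
    rec = m.groupdict()
    # The focal term is the span between the inner pipes in the context.
    focal = re.search(r"\| (.*?) \|", rec["context"])
    rec["focal_term"] = focal.group(1) if focal else None
    return rec

For example, applied to record #1425 this would yield label "model", doc "N01-1003", and focal_term "ranking rules".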