P01-1056

Here we emphasize the connection to Montague semantics, which can be viewed as a formal computation of the logical form.

Techniques for automatically training modules of a natural language generator have recently been proposed, but a fundamental concern is whether the quality of utterances produced with trainable components can compete with hand-crafted template-based or rule-based approaches.
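The idea of the logical form as a formal computation can be made concrete with a minimal Montague-style sketch (all names here are hypothetical, not from the paper): each word denotes a function, and the logical form of a sentence is obtained by function application that mirrors the syntactic structure.

```python
# Minimal Montague-style composition sketch (hypothetical toy example):
# the logical form of "John walks" is computed by applying the
# (type-raised) subject to the verb-phrase denotation.

def john(pred):
    # Proper noun, type-raised: takes a predicate and applies it to "john".
    return pred("john")

def walks(entity):
    # Intransitive verb: maps an entity to a formula string.
    return f"walks({entity})"

# Function application mirrors syntax: [John [walks]] -> walks(john)
logical_form = john(walks)
print(logical_form)  # -> walks(john)
```

The point of the sketch is only that composition is mechanical: given denotations for the words, the logical form falls out of the parse with no further stipulation.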
In this paper we experimentally evaluate a trainable sentence planner for a spoken dialogue system by eliciting subjective human judgments. In order to perform an exhaustive comparison, we also evaluate a hand-crafted template-based generation component, two rule-based sentence planners, and two baseline sentence planners. We show that the trainable sentence planner performs better than the rule-based systems and the baselines, and as well as the hand-crafted system. We describe a set of supervised
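The comparison described above reduces, at its simplest, to aggregating subjective ratings per system and ranking the systems by their means. A minimal sketch of that aggregation (the systems, judges, and ratings below are invented for illustration, not the paper's data):

```python
# Hypothetical illustration: ranking generation systems by mean
# subjective human rating (1-5 scale), as in an elicitation study.
from statistics import mean

# Invented ratings from four judges per system.
ratings = {
    "trainable":  [4, 5, 4, 4],
    "template":   [4, 4, 5, 4],
    "rule-based": [3, 3, 4, 2],
    "baseline":   [2, 2, 3, 2],
}

# Mean score per system, then systems ranked best-first.
scores = {system: mean(rs) for system, rs in ratings.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(scores)
print(ranked)
```

With these invented numbers the trainable and template systems tie at the top, echoing the paper's finding that the trainable planner matches the hand-crafted system while beating the rule-based systems and baselines; a real study would of course also test whether such differences are statistically significant.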