other,19-2-E06-1022,bq |
predicted based on
<term>
gaze
</term>
,
<term>
|
utterance
|
</term>
and
<term>
conversational context features
|
#10272
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
other,13-3-E06-1022,bq |
</term>
can aid
<term>
classifiers
</term>
'
<term>
|
performances
|
</term>
. Both
<term>
classifiers
</term>
perform
|
#10291
Then, we explore whether information about meeting context can aid classifiers' performances. |
other,10-2-E06-1022,bq |
well the
<term>
addressee
</term>
of a
<term>
|
dialogue act
|
</term>
can be predicted based on
<term>
gaze
|
#10263
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
other,8-5-E06-1022,bq |
<term>
gain
</term>
from information about
<term>
|
meeting context
|
</term>
. Most state-of-the-art
<term>
evaluation
|
#10320
The classifiers show little gain from information about meeting context. |
other,7-3-E06-1022,bq |
explore whether information about
<term>
|
meeting context
|
</term>
can aid
<term>
classifiers
</term>
'
<term>
|
#10285
Then, we explore whether information about meeting context can aid classifiers' performances. |
tech,11-3-E06-1022,bq |
<term>
meeting context
</term>
can aid
<term>
|
classifiers
|
</term>
'
<term>
performances
</term>
. Both
<term>
|
#10289
Then, we explore whether information about meeting context can aid classifiers' performances. |
other,9-4-E06-1022,bq |
<term>
conversational context
</term>
and
<term>
|
utterance features
|
</term>
are combined with
<term>
speaker 's
|
#10302
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
tech,14-1-E06-1022,bq |
using
<term>
Bayesian Network
</term>
and
<term>
|
Naive Bayes classifiers
|
</term>
. First , we investigate how well
|
#10249
We present results on addressee identification in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers. |
other,21-2-E06-1022,bq |
gaze
</term>
,
<term>
utterance
</term>
and
<term>
|
conversational context features
|
</term>
. Then , we explore whether information
|
#10274
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
tech,1-4-E06-1022,bq |
</term>
'
<term>
performances
</term>
. Both
<term>
|
classifiers
|
</term>
perform the best when
<term>
conversational
|
#10294
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
other,7-1-E06-1022,bq |
<term>
addressee identification
</term>
in
<term>
|
four-participants face-to-face meetings
|
</term>
using
<term>
Bayesian Network
</term>
|
#10242
We present results on addressee identification in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers. |
other,4-5-E06-1022,bq |
<term>
classifiers
</term>
show little
<term>
|
gain
|
</term>
from information about
<term>
meeting
|
#10316
The classifiers show little gain from information about meeting context. |
other,17-2-E06-1022,bq |
act
</term>
can be predicted based on
<term>
|
gaze
|
</term>
,
<term>
utterance
</term>
and
<term>
conversational
|
#10270
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
tech,4-1-E06-1022,bq |
algorithm
</term>
. We present results on
<term>
|
addressee identification
|
</term>
in
<term>
four-participants face-to-face
|
#10239
We present results on addressee identification in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers. |
tech,1-5-E06-1022,bq |
speaker 's gaze information
</term>
. The
<term>
|
classifiers
|
</term>
show little
<term>
gain
</term>
from
|
#10313
The classifiers show little gain from information about meeting context. |
other,7-2-E06-1022,bq |
First , we investigate how well the
<term>
|
addressee
|
</term>
of a
<term>
dialogue act
</term>
can
|
#10260
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
tech,11-1-E06-1022,bq |
face-to-face meetings
</term>
using
<term>
|
Bayesian Network
|
</term>
and
<term>
Naive Bayes classifiers
</term>
|
#10246
We present results on addressee identification in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers. |
other,6-4-E06-1022,bq |
classifiers
</term>
perform the best when
<term>
|
conversational context
|
</term>
and
<term>
utterance features
</term>
|
#10299
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
other,14-4-E06-1022,bq |
utterance features
</term>
are combined with
<term>
|
speaker 's gaze information
|
</term>
. The
<term>
classifiers
</term>
show
|
#10307
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |