other,11-3-E06-1022,ak |
<term>
meeting context
</term>
can aid
<term>
|
classifiers ' performances
|
</term>
. Both
<term>
classifiers
</term>
perform
|
#11226
Then, we explore whether information about meeting context can aid classifiers' performances. |
|
tech,11-1-E06-1022,ak |
four-participants face-to-face meetings using
<term>
|
Bayesian Network and Naive Bayes classifiers
|
</term>
. First , we investigate how well
|
#11183
We present results on addressee identification in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers. |
|
other,8-5-E06-1022,ak |
little gain from information about
<term>
|
meeting context
|
</term>
. Most state-of-the-art
<term>
evaluation
|
#11257
The classifiers show little gain from information about meeting context. |
|
other,14-4-E06-1022,ak |
utterance features
</term>
are combined with
<term>
|
speaker 's gaze information
|
</term>
. The
<term>
classifiers
</term>
show
|
#11244
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
|
other,17-2-E06-1022,ak |
act
</term>
can be predicted based on
<term>
|
gaze , utterance and conversational context features
|
</term>
. Then , we explore whether information
|
#11207
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
|
other,6-4-E06-1022,ak |
classifiers
</term>
perform the best when
<term>
|
conversational context
|
</term>
and
<term>
utterance features
</term>
|
#11236
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
|
other,9-4-E06-1022,ak |
<term>
conversational context
</term>
and
<term>
|
utterance features
|
</term>
are combined with
<term>
speaker 's
|
#11239
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
|
other,7-3-E06-1022,ak |
explore whether information about
<term>
|
meeting context
|
</term>
can aid
<term>
classifiers ' performances
|
#11222
Then, we explore whether information about meeting context can aid classifiers' performances. |
|
other,10-2-E06-1022,ak |
well the
<term>
addressee
</term>
of a
<term>
|
dialogue act
|
</term>
can be predicted based on
<term>
gaze
|
#11200
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
|
tech,4-1-E06-1022,ak |
algorithm
</term>
. We present results on
<term>
|
addressee identification
|
</term>
in four-participants face-to-face
|
#11176
We present results on addressee identification in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers. |
|
other,7-2-E06-1022,ak |
First , we investigate how well the
<term>
|
addressee
|
</term>
of a
<term>
dialogue act
</term>
can
|
#11197
First, we investigate how well the addressee of a dialogue act can be predicted based on gaze, utterance and conversational context features. |
|
tech,1-4-E06-1022,ak |
classifiers ' performances
</term>
. Both
<term>
|
classifiers
|
</term>
perform the best when
<term>
conversational
|
#11231
Both classifiers perform the best when conversational context and utterance features are combined with speaker's gaze information. |
|
tech,1-5-E06-1022,ak |
speaker 's gaze information
</term>
. The
<term>
|
classifiers
|
</term>
show little gain from information
|
#11250
The classifiers show little gain from information about meeting context. |
|