Paper ID | Left context | Term | Right context
P04-3029 | actual visualization status to the | multimodal fusion | module. This report is used
P07-2001 | Fusion Strategies MIMUS approach to | multimodal fusion | involves combining inputs coming
H01-1071 | , handwriting recognition and | multimodal fusion | . A vision module is trained
C02-1035 | consists of two sub-processes: | multimodal fusion | and context-based inference.
P04-3029 | word lattice semantically. The | multimodal fusion | module maintains the dialogue
C02-1035 | only supports unification-based | multimodal fusion | , but also enables context-based
D15-1303 | unimodal models' votes. For the | multimodal fusion | using this baseline method only
W07-1801 | speech recognition, parsing and | multimodal fusion | (Ericsson et al., 2006).
C02-1035 | and context-based inference. | Multimodal fusion | fuses intention and attention
C02-1035 | those inputs. For example, after | multimodal fusion | , the conversation unit for U3
P13-1096 | we plan to explore alternative | multimodal fusion | methods, such as decision-level
W10-4207 | are processed and combined by a | multimodal fusion | component based on (Giuliani
W08-1117 | a pipelined architecture with | multimodal fusion | and fission modules encapsulating
P11-2058 | model originally introduced for | multimodal fusion | (Ozkan et al., 2010). In
W08-1117 | semantically interpreted input to the | multimodal fusion | module (interpretation manager
N04-1004 | In many cases, speech/gesture | multimodal fusion | works in a very similar way,
W14-5404 | the investigation of alternative | multimodal fusion | approaches, such as the one
W04-2804 | consists of two main blocks, the | multimodal FUSION | (see section 3.2) which is
P06-4015 | semantically interpreted input to the | multimodal fusion | module that interprets them in
P04-3029 | its genre (see figure 4). The | multimodal fusion | module receives this representation