MAIN: Multimodal Attention-based Fusion Networks for Diagnosis Prediction

Author(s): Ying An, Haojia Zhang, Yu Sheng, Jianxin Wang, Xianlai Chen

2014, Vol. 27 (2), pp. 91-110

Author(s): Tómas Kristjánsson, Tómas Páll Thorvaldsson, Árni Kristjánsson

Previous research, spanning both unimodal and multimodal studies, suggests that single-response change detection is a capacity-free process, whereas discriminatory up/down identification is capacity-limited. The trace/context model assumes that this difference reflects distinct memory strategies rather than inherent differences between identification and detection. To perform such tasks, observers use one of two strategies, a sensory trace strategy or a context coding strategy, and if one is blocked, people automatically switch to the other. A drawback of most preceding studies is that stimuli were presented at separate locations, creating the possibility of a spatial confound and inviting alternative interpretations of the results. We describe a series of experiments investigating divided multimodal attention without this spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which, according to the trace/context model, blocks the sensory trace strategy, combined simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.
