Neurally-constrained modeling of human gaze strategies in a change blindness task

2021 ◽ Vol 17 (8) ◽ pp. e1009322
Author(s): Akshay Jagatap, Simran Purokayastha, Hritik Jain, Devarajan Sridharan

Despite possessing the capacity for selective attention, we often fail to notice the obvious. We investigated participants’ (n = 39) failures to detect salient changes in a change blindness experiment. Surprisingly, change detection success varied by over two-fold across participants. These variations could not be readily explained by differences in scan paths or fixated visual features. Yet, two simple gaze metrics – mean duration of fixations and the variance of saccade amplitudes – systematically predicted change detection success. We explored the mechanistic underpinnings of these results with a neurally-constrained model based on the Bayesian framework of sequential probability ratio testing, with a posterior odds-ratio rule for shifting gaze. The model’s gaze strategies and success rates closely mimicked human data. Moreover, the model outperformed a state-of-the-art deep neural network (DeepGaze II) at predicting human gaze patterns in this change blindness task. Our mechanistic model reveals putative rational observer search strategies for change detection during change blindness, with critical real-world implications.
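The core mechanism described here lends itself to a compact illustration. Below is a minimal, hypothetical sketch of sequential probability ratio testing with a posterior odds-ratio rule for shifting gaze, in the spirit of the model the abstract describes; all parameter names and values (signal, noise_sd, decision_odds, shift_odds) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sprt_change_detection(
    n_locations=8,
    change_loc=3,
    signal=0.4,          # assumed evidence mean at the changed location
    noise_sd=1.0,        # assumed sensory noise
    decision_odds=20.0,  # posterior odds needed to report a change
    shift_odds=3.0,      # odds against fixation that trigger a saccade
    max_samples=500,
):
    """Minimal SPRT sketch: accumulate evidence at the fixated location,
    shift gaze when the posterior odds favor another location, and report
    a change when the odds at fixation cross the decision threshold."""
    log_odds = np.zeros(n_locations)  # log posterior odds of 'change here'
    fixated = rng.integers(n_locations)
    fixations = [int(fixated)]
    for _ in range(max_samples):
        # Noisy sample from the fixated location; mean > 0 only if changed
        mu = signal if fixated == change_loc else 0.0
        x = rng.normal(mu, noise_sd)
        # Gaussian log-likelihood ratio of 'change' vs 'no change'
        log_odds[fixated] += (x * signal - 0.5 * signal**2) / noise_sd**2
        if log_odds[fixated] >= np.log(decision_odds):
            return int(fixated), fixations       # change detected here
        # Posterior odds-ratio rule: saccade to the best other location
        masked = np.where(np.arange(n_locations) == fixated, -np.inf, log_odds)
        best_other = int(np.argmax(masked))
        if log_odds[best_other] - log_odds[fixated] >= np.log(shift_odds):
            fixated = best_other
            fixations.append(fixated)
    return None, fixations                       # change missed (blindness)

loc, scanpath = sprt_change_detection()
print("reported change at:", loc, "| fixations:", len(scanpath))
```

In a sketch like this, raising shift_odds makes the observer dwell longer before saccading, trading more reliable local evidence against fewer locations visited, which is one way such a model can link fixation duration and saccade statistics to detection success.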

2019
Author(s): Akshay Jagatap, Hritik Jain, Simran Purokayastha, Devarajan Sridharan

Abstract
Visual attention enables us to engage selectively with the most important events in the world around us. Yet, sometimes, we fail to notice salient events. “Change blindness” – the surprising inability to detect and identify salient changes that occur in flashing visual images – enables measuring such failures in a laboratory setting. We discovered that human participants (n=39) varied widely (by two-fold) in their ability to detect changes when tested on a laboratory change blindness task. To understand the reasons for these differences in change detection ability, we characterized eye-movement patterns and gaze strategies as participants scanned these images. Surprisingly, we found no systematic differences in scan paths, fixation maps, or saccade patterns between participants who were successful at detecting changes and those who were not. Yet, two low-level gaze metrics – the mean fixation duration and the variance of saccade amplitudes – systematically predicted change detection success. To explain the mechanism by which these gaze metrics could influence performance, we developed a neurally constrained model, based on the Bayesian framework of sequential probability ratio testing (SPRT), which simulated the gaze strategies of successful and unsuccessful observers. The model’s ability to detect changes varied systematically with mean fixation duration and saccade amplitude variance, closely mimicking observations in the human data. Moreover, the model’s success rates correlated robustly with human observers’ success rates across images. Our model explains putative human attention mechanisms during change blindness tasks and provides key insights into effective strategies for shifting gaze and attention for artificial agents navigating dynamic, crowded environments.

Author Summary
Our brain has the remarkable capacity to pay attention, selectively, to the most important events in the world around us. Yet, sometimes, we fail spectacularly to notice even the most salient events. We tested this phenomenon in the laboratory with a change-blindness experiment, by having participants freely scan and detect changes across discontinuous image pairs. Participants varied widely in their ability to detect these changes. Surprisingly, their success correlated with differences in low-level gaze metrics. A Bayesian model of eye movements, which incorporated neural constraints on stimulus encoding, could explain the reason for these differences and closely mimicked human performance in this change blindness task. The model’s gaze strategies provide relevant insights for artificial, neuromorphic agents navigating dynamic, crowded environments.
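The two predictive gaze metrics named here are straightforward to compute from fixation data. A minimal sketch, assuming fixations arrive as per-trial arrays of center coordinates and durations (the data layout is hypothetical):

```python
import numpy as np

def gaze_metrics(fix_x, fix_y, fix_dur_ms):
    """Compute the two predictive gaze metrics from one trial's fixation
    sequence: mean fixation duration and variance of saccade amplitudes.
    Saccade amplitude is taken as the Euclidean distance between
    consecutive fixation centers."""
    fix_x, fix_y = np.asarray(fix_x), np.asarray(fix_y)
    amplitudes = np.hypot(np.diff(fix_x), np.diff(fix_y))
    return {
        "mean_fixation_duration_ms": float(np.mean(fix_dur_ms)),
        "saccade_amplitude_variance": float(np.var(amplitudes)),
    }

# Toy example: five fixations (pixel coordinates, durations in ms)
print(gaze_metrics([100, 260, 250, 480, 470],
                   [120, 130, 300, 310, 150],
                   [210, 180, 330, 260, 400]))
```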


2021 ◽ Vol 11 (1)
Author(s): Jordan Navarro, Otto Lappi, François Osiurak, Emma Hernout, Catherine Gabaude, ...

Abstract
Active visual scanning of the scene is a key task element in all forms of human locomotion. In the field of driving, models of steering (lateral control) and speed adjustment (longitudinal control) are largely based on drivers’ visual inputs. Despite the knowledge gained on gaze behaviour behind the wheel, our understanding of the sequential aspects of the gaze strategies that actively sample that input remains restricted. Here, we apply scan path analysis to investigate sequences of visual scanning in manual and highly automated simulated driving. Five stereotypical visual sequences were identified under manual driving: forward polling (i.e. far-road exploration), guidance, backwards polling (i.e. near-road exploration), scenery, and speed monitoring scan paths. Previously undocumented backwards polling scan paths were the most frequent. Under highly automated driving, the relative frequency of backwards polling scan paths decreased, the relative frequency of guidance scan paths increased, and scan paths specific to automation supervision appeared. The results shed new light on the gaze patterns engaged while driving. Methodological and empirical questions for future studies are discussed.
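As an illustration of the kind of scan path analysis described, the sketch below encodes fixations as a sequence over areas of interest and computes a first-order transition matrix, a common first step for identifying recurring gaze sequences. The AOI labels are illustrative assumptions, not the authors' coding scheme.

```python
import numpy as np

# Hypothetical areas of interest (AOIs) for a driving scene
AOIS = ["far_road", "near_road", "mirror", "speedometer", "scenery"]

def transition_matrix(aoi_sequence):
    """First-order scan path analysis: count transitions between
    consecutive fixated AOIs and normalize each row to probabilities."""
    idx = {a: i for i, a in enumerate(AOIS)}
    counts = np.zeros((len(AOIS), len(AOIS)))
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        counts[idx[src], idx[dst]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Toy fixation sequence resembling a 'backwards polling' pattern
seq = ["far_road", "near_road", "near_road", "far_road",
       "near_road", "speedometer"]
print(transition_matrix(seq).round(2))
```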


2021 ◽ Vol 11
Author(s): Wang Xiang

To investigate whether implicit detection occurs uniformly during change blindness with single-feature or combination-feature stimuli, and whether implicit detection is affected by exposure duration and delay, two one-shot change detection experiments were designed. The implicit detection effect was measured by comparing the reaction times (RTs) of baseline trials, in which the stimulus exhibits no change and participants report “same,” with those of change blindness trials, in which the stimulus exhibits a change but participants still report “same.” If the RTs of change blindness trials are longer than those of baseline trials, implicit detection has occurred. The strength of the implicit detection effect was measured by the difference in RTs between the baseline and change blindness trials: the larger the difference, the stronger the implicit detection effect. In both Experiments 1 and 2, the RTs of change blindness trials were significantly longer than those of baseline trials, and this held at every set size (4, 6, or 8). In Experiment 1, the difference between the baseline and change blindness trials’ RTs was significantly larger for single features than for combination features; in Experiment 2, however, this difference was significantly smaller for single features than for combination features. In Experiment 1a, shorter exposure durations produced smaller differences between the baseline and change blindness trials’ RTs. In Experiment 2, longer delays produced larger differences between the two trial types’ RTs. These results suggest that implicit detection occurs uniformly during change blindness, regardless of whether the change involves a single feature or a combination of features and regardless of exposure duration or delay. Moreover, longer exposure durations and delays strengthen the implicit detection effect, whereas set size has no significant impact on it.
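The implicit detection measure described above is simple to operationalize. A minimal sketch on toy data, using an independent-samples t-test as a stand-in for the study's actual statistics (which are not specified here):

```python
import numpy as np
from scipy import stats

def implicit_detection_effect(rt_baseline_ms, rt_blindness_ms):
    """Quantify implicit detection as the RT difference between change
    blindness trials (change shown, 'same' reported) and baseline trials
    (no change, 'same' reported). A reliably positive difference
    indicates implicit detection."""
    diff = np.mean(rt_blindness_ms) - np.mean(rt_baseline_ms)
    t, p = stats.ttest_ind(rt_blindness_ms, rt_baseline_ms)
    return {"effect_ms": float(diff), "t": float(t), "p": float(p)}

# Toy per-trial RTs (ms); a real analysis would aggregate within
# participants and test across set sizes
rng = np.random.default_rng(1)
baseline = rng.normal(650, 60, 40)
blindness = rng.normal(700, 60, 40)
print(implicit_detection_effect(baseline, blindness))
```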


2006 ◽ Vol 16 (20) ◽ pp. 2066-2072
Author(s): Leila Reddy, Rodrigo Quian Quiroga, Patrick Wilken, Christof Koch, Itzhak Fried

Author(s): Yoshihide Koyama, Tetsuo Hattori, Yoshiro Imai, Yo Horikawa, Yusuke Kawakami, ...

2010 ◽ Vol 10 (7) ◽ pp. 169-169
Author(s): T. W. Boyer, C. Yu, T. Smith, B. I. Bertenthal

2000
Author(s): Brian J. Scholl, Daniel J. Simons, Daniel T. Levin
