Markov chain based computational visual attention model that learns from eye tracking data

2014 ◽  
Vol 49 ◽  
pp. 1-10 ◽  
Author(s):  
Ma Zhong ◽  
Xinbo Zhao ◽  
Xiao-chun Zou ◽  
James Z. Wang ◽  
Wenhu Wang

2021 ◽
Vol 10 (10) ◽  
pp. 664
Author(s):  
Bincheng Yang ◽  
Hongwei Li

Visual attention plays a crucial role in the map-reading process and is closely related to the map cognitive process. Eye-tracking data contains a wealth of visual information that can be used to identify cognitive behavior during map reading. Nevertheless, few researchers have applied these data to quantifying visual attention. This study proposes a method for quantitatively calculating visual attention based on eye-tracking data for 3D scene maps. First, eye-tracking technology was used to capture differences in the participants’ gaze behavior when browsing a street view map in a desktop environment and to establish a quantitative relationship between eye-movement indices and visual saliency. Then, experiments were carried out to determine the quantitative relationship between visual saliency and visual factors, using vector 3D scene maps as stimulus material. Finally, a visual attention model was obtained by fitting the data. The results showed that a combination of three visual factors (color, shape, and size) can represent the visual attention value of a 3D scene map, with a goodness of fit (R²) greater than 0.699. This research helps to determine and quantify the allocation of visual attention during map reading, laying the foundation for automated machine mapping.
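The fitted model is not given explicitly in the abstract above; the following is a minimal, hypothetical Python sketch of the kind of regression it describes, fitting a linear combination of color, shape, and size factors to stand-in attention scores and reporting R², the goodness-of-fit statistic cited. The synthetic data, weights, and linear functional form are assumptions for illustration, not the authors' model.

# Hypothetical sketch: regress a visual-attention value on three visual factors
# (color, shape, size). Synthetic data stands in for the real measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Each row is one map symbol: [color factor, shape factor, size factor].
visual_factors = rng.uniform(0.0, 1.0, size=(200, 3))

# Stand-in attention values, e.g. a saliency score derived from eye-movement
# indices (fixation count, fixation duration) in the real experiment.
attention = (0.5 * visual_factors[:, 0]      # color
             + 0.3 * visual_factors[:, 1]    # shape
             + 0.2 * visual_factors[:, 2]    # size
             + rng.normal(0.0, 0.05, 200))   # observation noise

model = LinearRegression().fit(visual_factors, attention)
r2 = r2_score(attention, model.predict(visual_factors))
print("coefficients (color, shape, size):", model.coef_)
print("goodness of fit R^2:", round(float(r2), 3))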


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jessica S. Oliveira ◽  
Felipe O. Franco ◽  
Mirian C. Revers ◽  
Andréia F. Silva ◽  
Joana Portolese ◽  
...  

An advantage of using eye tracking for diagnosis is that it is non-invasive and can be performed on individuals of different functional levels and ages. Computer-aided diagnosis using eye-tracking data is commonly based on eye fixation points within regions of interest (ROIs) in an image. However, besides the need to demarcate every ROI in each image or video frame used in the experiment, the diversity of visual features contained in each ROI may compromise the characterization of visual attention in each group (case or control) and, consequently, diagnostic accuracy. Although some approaches use eye-tracking signals to aid diagnosis, it remains a challenge to identify frames of interest when videos are used as stimuli and to select relevant characteristics extracted from the videos. This is mainly observed in applications for autism spectrum disorder (ASD) diagnosis. To address these issues, the present paper proposes: (1) a computational method, integrating concepts from visual attention models, image processing, and artificial intelligence, for learning a model for each group (case and control) from eye-tracking data, and (2) a supervised classifier that, using the learned models, performs the diagnosis. Although this approach is not disorder-specific, it was tested in the context of ASD diagnosis, obtaining average precision, recall, and specificity of 90%, 69%, and 93%, respectively.
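The per-group models and the classifier are not specified in the abstract above; the following is a minimal, hypothetical Python sketch of the general two-model idea rather than the authors' method: learn one fixation-density map per group (case and control) from eye-tracking data, then assign a new recording to the group whose map gives its fixations the higher log-likelihood. The grid size, similarity measure, and toy data are all assumptions.

# Hypothetical sketch: one learned fixation-density map per group, plus a
# classifier that compares log-likelihoods under the two maps.
import numpy as np

GRID = (24, 32)  # coarse screen grid (rows, cols); an assumed resolution

def density_map(fixations, grid=GRID):
    # Normalized 2D histogram of (x, y) fixation points in [0, 1) x [0, 1).
    hist, _, _ = np.histogram2d(fixations[:, 1], fixations[:, 0],
                                bins=grid, range=[[0, 1], [0, 1]])
    return hist / max(hist.sum(), 1.0)

def classify(fixations, case_map, control_map):
    # Pick the group whose learned map gives the new fixations the higher
    # log-likelihood.
    eps = 1e-9
    obs = density_map(fixations)
    ll_case = np.sum(obs * np.log(case_map + eps))
    ll_control = np.sum(obs * np.log(control_map + eps))
    return "case" if ll_case > ll_control else "control"

# Toy training data standing in for real case/control eye-tracking recordings.
rng = np.random.default_rng(1)
case_train = rng.normal([0.3, 0.5], 0.1, size=(500, 2)) % 1.0
control_train = rng.normal([0.7, 0.5], 0.1, size=(500, 2)) % 1.0
case_map, control_map = density_map(case_train), density_map(control_train)

new_participant = rng.normal([0.32, 0.5], 0.1, size=(60, 2)) % 1.0
print(classify(new_participant, case_map, control_map))  # expected: case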


2009 ◽  
Vol 20 (12) ◽  
pp. 3240-3253 ◽  
Author(s):  
Guo-Min ZHANG ◽  
Jian-Ping YIN ◽  
En ZHU ◽  
Ling MAO
