A Visual Attention Model Based on Eye Tracking in 3D Scene Maps

2021 ◽  
Vol 10 (10) ◽  
pp. 664
Author(s):  
Bincheng Yang ◽  
Hongwei Li

Visual attention plays a crucial role in the map-reading process and is closely related to map cognition. Eye-tracking data contain a wealth of visual information that can be used to identify cognitive behavior during map reading; nevertheless, few researchers have applied these data to quantifying visual attention. This study proposes a method for quantitatively calculating visual attention based on eye-tracking data for 3D scene maps. First, eye-tracking technology was used to capture differences in the participants’ gaze behavior when browsing a street view map in a desktop environment, and to establish a quantitative relationship between eye movement indices and visual saliency. Then, experiments were carried out to determine the quantitative relationship between visual saliency and visual factors, using vector 3D scene maps as stimulus material. Finally, a visual attention model was obtained by fitting the data. The results show that a combination of three visual factors (color, shape, and size) can represent the visual attention value of a 3D scene map, with a goodness of fit (R²) greater than 0.699. This research helps to determine and quantify the allocation of visual attention during map reading, laying a foundation for automated machine mapping.
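The abstract does not publish the fitted model's functional form, so the following is only a minimal sketch of the final fitting step, assuming a linear combination of the three factors; the feature values, attention scores, and data layout are all hypothetical.

    # Illustrative sketch (not the authors' published model): fit visual
    # attention values against color, shape, and size factors with a
    # linear model and report the goodness of fit (R^2).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    # Hypothetical per-object measurements from an eye-tracking experiment:
    # each row is (color_factor, shape_factor, size_factor).
    X = np.array([
        [0.9, 0.2, 0.5],
        [0.4, 0.7, 0.3],
        [0.6, 0.5, 0.8],
        [0.2, 0.9, 0.4],
        [0.8, 0.3, 0.6],
    ])
    # Hypothetical visual attention values derived from fixation metrics.
    y = np.array([0.72, 0.41, 0.68, 0.35, 0.66])

    model = LinearRegression().fit(X, y)
    r2 = r2_score(y, model.predict(X))
    print("coefficients:", model.coef_, "intercept:", model.intercept_)
    print("goodness of fit R^2: %.3f" % r2)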

2014 ◽  
Vol 49 ◽  
pp. 1-10 ◽  
Author(s):  
Ma Zhong ◽  
Xinbo Zhao ◽  
Xiao-chun Zou ◽  
James Z. Wang ◽  
Wenhu Wang

2012 ◽  
Vol 220-223 ◽  
pp. 1393-1397
Author(s):  
Li Bo Liu ◽  
Chun Jiang Zhao ◽  
Hua Rui Wu ◽  
Rong Hua Gao

Analyzing crop growth status through leaf disease images is currently one of the most active research topics in the agriculture and forestry fields. However, the images gathered by digital cameras are very large, so this research focuses on reducing image size while keeping the distortion of the main information carried by the image low. Building on the visual attention models proposed by Itti and Ma YF, this paper establishes visual attention and visual saliency maps for rice blast and brown spot disease images of 4272*2878 pixels. Finally, it determines the reduction scale of the corresponding effective target collection, providing a new way to downscale plant leaf images.
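As a rough illustration of the pipeline the abstract describes (saliency map, salient target region, reduction scale), the sketch below uses OpenCV's spectral-residual saliency (Hou and Zhang, available in opencv-contrib-python) as a stand-in for the Itti and Ma YF models; the file name, threshold, and minimum target size are assumptions.

    # Compute a saliency map for a leaf image and use the salient region
    # to bound how far the image can be reduced without shrinking the
    # diseased area below a chosen minimum size.
    import cv2
    import numpy as np

    image = cv2.imread("leaf_disease.jpg")          # hypothetical input image
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image)   # float32 map in [0, 1]

    # Keep the most salient pixels and find the bounding box of the lesion.
    mask = (sal_map > 0.5 * sal_map.max()).astype(np.uint8)
    ys, xs = np.nonzero(mask)
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()

    # The salient box suggests the smallest scale that still keeps the
    # diseased region at a minimum side length (here 64 px, an assumption).
    min_side = 64
    scale = min(1.0, min_side / max(1, min(x1 - x0, y1 - y0)))
    print("salient box:", (x0, y0, x1, y1), "minimum scale: %.3f" % scale)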


Author(s):  
Wei Xiong ◽  
Yongli Xu ◽  
Yafei Lv ◽  
Libo Yao

Target detection in synthetic aperture radar (SAR) remote sensing images is a fundamental but challenging problem in satellite image analysis; it plays an important role in a wide range of applications and has received significant attention in recent years. The human visual system, by contrast, detects visual saliency extraordinarily quickly and reliably, yet computationally modeling SAR image scenes remains a challenge. This paper analyzes the defects and shortcomings of traditional visual models when applied to SAR images and then proposes a visual attention model designed for SAR images. The model follows the basic framework of the classical ITTI model and selects and extracts texture features and other features that describe SAR images better. We propose a new algorithm for computing the local texture saliency of the input image, from which the model constructs the corresponding feature saliency maps. Next, a new feature fusion mechanism replaces the linear additive mechanism of classical models to obtain the overall saliency map. Finally, taking into account the gray-scale characteristics of the focus of attention (FOA) in the saliency maps of all features, the model chooses the best saliency representation. Through a multi-scale competition strategy, filtering and threshold segmentation of the saliency maps select the salient regions accurately, completing visual saliency detection in SAR images. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the visual models. The results show that our model outperforms classical visual models: it reduces the false alarms caused by speckle noise, and its detection speed is greatly improved, by 25% to 45%.
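The abstract does not specify the new fusion mechanism, so the sketch below only illustrates the general idea of replacing classical linear additive fusion with a non-linear rule: each feature map is weighted by the standard Itti N(.) normalization and the maps are then fused by a pixel-wise maximum. The maximum rule and the example feature maps are assumptions, not the authors' algorithm.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def normalize_map(fmap, local_size=32):
        """Itti-style N(.): scale to [0, 1], then weight maps whose global
        maximum stands out from the average local maximum."""
        f = (fmap - fmap.min()) / (np.ptp(fmap) + 1e-9)
        # Mean of the local-maximum-filtered map approximates the average
        # local maximum m_bar; with global max M = 1, weight is (M - m_bar)^2.
        m_bar = maximum_filter(f, size=local_size).mean()
        return f * (1.0 - m_bar) ** 2

    def fuse(feature_maps):
        """Non-linear fusion: pixel-wise maximum of normalized maps instead
        of a plain sum, so one strongly salient feature is not diluted."""
        return np.maximum.reduce([normalize_map(f) for f in feature_maps])

    # Hypothetical intensity and texture feature maps for a SAR scene.
    rng = np.random.default_rng(0)
    intensity, texture = rng.random((2, 128, 128))
    saliency = fuse([intensity, texture])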


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jessica S. Oliveira ◽  
Felipe O. Franco ◽  
Mirian C. Revers ◽  
Andréia F. Silva ◽  
Joana Portolese ◽  
...  

An advantage of using eye tracking for diagnosis is that it is non-invasive and can be performed on individuals with different functional levels and ages. Computer-aided diagnosis using eye-tracking data is commonly based on eye fixation points in certain regions of interest (ROIs) in an image. However, besides the need to demarcate every ROI in each image or video frame used in the experiment, the diversity of visual features contained in each ROI may compromise the characterization of visual attention in each group (case or control) and, consequently, diagnostic accuracy. Although some approaches use eye-tracking signals to aid diagnosis, it is still a challenge to identify frames of interest when videos are used as stimuli and to select relevant characteristics extracted from the videos. This is mainly observed in applications for autism spectrum disorder (ASD) diagnosis. To address these issues, the present paper proposes: (1) a computational method that integrates concepts from visual attention models, image processing, and artificial intelligence techniques to learn a model for each group (case and control) from eye-tracking data, and (2) a supervised classifier that performs the diagnosis using the learned models. Although this approach is not disorder-specific, it was tested in the context of ASD diagnosis, obtaining an average precision, recall, and specificity of 90%, 69%, and 93%, respectively.
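The learned per-group models and the video-frame selection are the paper's contribution and are not reproduced here; the sketch below only illustrates the final supervised-classification step on hypothetical per-participant eye-tracking features, reporting the same three metrics the abstract cites.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score, confusion_matrix

    rng = np.random.default_rng(42)
    # Hypothetical features per participant: mean fixation duration,
    # fixation count, saccade amplitude, saliency-map agreement score.
    X = rng.random((120, 4))
    y = rng.integers(0, 2, 120)          # 1 = case (ASD), 0 = control

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    # Specificity is not a built-in scorer; derive it from the confusion
    # matrix (true-negative rate).
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print("precision:  ", precision_score(y_te, pred))
    print("recall:     ", recall_score(y_te, pred))
    print("specificity:", tn / (tn + fp))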


2009 ◽  
Vol 20 (12) ◽  
pp. 3240-3253 ◽  
Author(s):  
Guo-Min ZHANG ◽  
Jian-Ping YIN ◽  
En ZHU ◽  
Ling MAO
