A probabilistic saliency model with memory-guided top-down cues for free-viewing

Author(s):  
Yan Hua ◽  
Zhicheng Zhao ◽  
Hu Tian ◽  
Xin Guo ◽  
Anni Cai
Author(s):  
Chih-Yang Chen ◽  
Denis Matrov ◽  
Richard Edmund Veale ◽  
Hirotaka Onoe ◽  
Masatoshi Yoshida ◽  
...  

Saccades are stereotypic behaviors whose investigation improves our understanding of how primate brains implement precise motor control. Furthermore, saccades offer an important window into the cognitive and attentional state of the brain. Historically, saccade studies have largely relied on the macaque. However, the cortical network giving rise to the saccadic command is difficult to study in the macaque because the relevant cortical areas lie in deep sulci and are hard to access. Recently, a New World monkey, the marmoset, has garnered attention as an alternative to the macaque because of advantages including its smooth cortical surface. However, adoption of the marmoset for oculomotor research has been limited by the lack of in-depth descriptions of marmoset saccade kinematics and of its ability to perform psychophysical tasks. Here, we directly compare the free-viewing and visually guided behavior of marmosets, macaques, and humans engaged in identical tasks under similar conditions. In the video free-viewing task, all species exhibited qualitatively similar saccade kinematics up to 25° in amplitude, although with different parameters. Furthermore, a conventional bottom-up saliency model predicted gaze targets at similar rates for all species. We further verified their visually guided behavior by training them on step and gap saccade tasks. In the step paradigm, marmosets did not show shorter saccade reaction times for upward saccades, whereas macaques and humans did. In the gap paradigm, all species showed a similar gap effect and express saccades. Our results suggest that the marmoset can serve as a model for oculomotor, attentional, and cognitive research, provided its differences from the macaque and human are kept in mind.
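Cross-species comparisons of saccade kinematics like this one typically rest on the classic "main sequence" relation between saccade amplitude and peak velocity, with species differing in the fitted parameters. As a minimal sketch (the saturating model form and the parameter values below are illustrative assumptions, not figures reported in the study), one can fit V_peak = V_max·(1 − e^(−A/C)) to amplitude–velocity pairs:

```python
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, c):
    """Saturating main-sequence model: peak velocity (deg/s) as a
    function of saccade amplitude (deg); v_max and c are free parameters."""
    return v_max * (1.0 - np.exp(-amplitude / c))

# Synthetic saccades up to 25 deg (illustrative values, not measured data)
rng = np.random.default_rng(0)
amp = rng.uniform(1.0, 25.0, 200)
true_vmax, true_c = 550.0, 6.0            # assumed, species-specific in reality
vel = main_sequence(amp, true_vmax, true_c) + rng.normal(0.0, 15.0, amp.size)

# Fit the model; the recovered parameters summarize a species' kinematics
(fit_vmax, fit_c), _ = curve_fit(main_sequence, amp, vel, p0=(400.0, 5.0))
print(f"fitted v_max = {fit_vmax:.0f} deg/s, c = {fit_c:.1f} deg")
```

Comparing the fitted (v_max, c) pairs across species is one way to make "qualitatively similar kinematics with different parameters" quantitative.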


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Makiese Mibulumukini

Human gaze is not directed to the same parts of an image when lighting conditions change, yet current saliency models do not analyze light levels during their bottom-up processing. In this paper, we introduce a new saliency model that better mimics the physiological and psychological processes of visual attention in the free-viewing case (a bottom-up process). The model analyzes lighting conditions in order to assign different weights to color wavelengths. The resulting saliency measure outperforms many popular cognitive approaches.
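The core idea — estimate the light level, reweight color channels accordingly, then compute bottom-up contrast — can be sketched as follows. This is a toy illustration, not the paper's model: the crude Purkinje-shift-like weighting (shifting weight from long to short wavelengths as light dims) and the difference-of-Gaussians contrast stage are both assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lighting_weighted_saliency(rgb):
    """Toy bottom-up saliency with light-level-dependent channel weights.
    rgb: float array of shape (H, W, 3) with values in [0, 1].
    The weighting scheme is an illustrative assumption, not the published model."""
    luminance = rgb.mean()                       # global light-level estimate
    # Under low light, shift weight from the R channel toward the B channel
    w = np.array([luminance, 1.0, 1.0 - luminance])
    w = w / w.sum()
    weighted = (rgb * w).sum(axis=2)             # weighted grayscale image
    # Center-surround contrast as a difference of Gaussians
    center = gaussian_filter(weighted, sigma=2)
    surround = gaussian_filter(weighted, sigma=8)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-12)             # normalize to [0, 1]

img = np.random.default_rng(1).random((64, 64, 3))
sal = lighting_weighted_saliency(img)
```

Under this scheme the same scene yields different saliency maps at different global light levels, which is the behavior the abstract attributes to human observers.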


2019 ◽  
Vol 19 (1) ◽  
pp. 11 ◽  
Author(s):  
James Tanner ◽  
Laurent Itti

Author(s):  
John M. Hicks ◽  
Ashley A. Cain ◽  
Jeremiah D. Still

Previous research has shown that a computational model of visual saliency can predict where people fixate in cluttered web pages (Masciocchi & Still, 2013). Over time, web site designers have been moving toward simpler, less cluttered web pages to improve aesthetics and make searches more efficient. Even with simpler interfaces, determining a saliency ranking among interface elements is a difficult task, and it is unclear whether the traditionally employed saliency model (Itti, Koch, & Niebur, 1998) applies to simpler interfaces. To examine the model's ability to predict fixations in simple web pages, we compared the distribution of observed fixations to a conservative measure of chance performance (a shuffled distribution). Simplicity was quantified using two visual clutter models (Rosenholtz, Li, & Nakano, 2007). We found that, under free-viewing conditions, the saliency model was able to predict fixations within less cluttered web pages.
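The shuffled distribution is a conservative chance baseline because it evaluates the same fixation coordinates on a *different* image's saliency map, so spatial biases shared across images (such as center bias) cannot inflate the score. A minimal sketch of this comparison — the exact metric and pairing scheme here are assumptions, not necessarily those of the study:

```python
import numpy as np

def shuffled_baseline_score(saliency_maps, fixations, rng):
    """AUC-like score: probability that saliency at an observed fixation
    exceeds saliency at the same coordinates on another image's map.
    saliency_maps: list of (H, W) arrays; fixations: list of (N_i, 2) int
    arrays of (row, col) coordinates, one per image. Illustrative only."""
    observed, shuffled = [], []
    n = len(saliency_maps)
    for i, fix in enumerate(fixations):
        j = (i + 1 + rng.integers(n - 1)) % n   # any map other than map i
        for (r, c) in fix:
            observed.append(saliency_maps[i][r, c])
            shuffled.append(saliency_maps[j][r, c])
    obs, shuf = np.array(observed), np.array(shuffled)
    # Fraction of observed/shuffled pairs where observed saliency is higher
    return (obs[:, None] > shuf[None, :]).mean()

# Tiny example: two maps whose peaks coincide with the observed fixations
rng = np.random.default_rng(0)
m1 = np.zeros((10, 10)); m1[2, 2] = 1.0
m2 = np.zeros((10, 10)); m2[7, 7] = 1.0
score = shuffled_baseline_score(
    [m1, m2], [np.array([[2, 2]]), np.array([[7, 7]])], rng)
```

A score near 0.5 indicates chance-level prediction; scores reliably above 0.5 indicate that the model predicts fixations beyond shared spatial biases.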


2009 ◽  
Vol 1 (3) ◽  
Author(s):  
Ozgur E. Akman ◽  
Richard A. Clement ◽  
David S. Broomhead ◽  
Sabira Mannan ◽  
Ian Moorhead ◽  
...  

The selection of fixation targets involves a combination of top-down and bottom-up processing. The role of bottom-up processing can be enhanced by using multistable stimuli, because their constantly changing appearance seems to depend predominantly on stimulus-driven factors. We used this approach to investigate whether visual processing models based on V1 need to be extended to incorporate specific computations attributed to V4. Eye movements of eight subjects were recorded during free viewing of the Marroquin pattern, in which illusory circles appear and disappear. Fixations were concentrated on features arranged in concentric rings within the pattern. Comparison with simulated fixation data demonstrated that the saliency of these features can be predicted with appropriate weighting of lateral connections in existing V1 models.


2015 ◽  
Vol 15 (12) ◽  
pp. 1245 ◽  
Author(s):  
Li Zhang ◽  
Rudiger von der Heydt
Keyword(s):  
Top Down ◽  

2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line length illusion was examined by means of an ambiguous room with dual visual verticals. In one test condition, subjects were cued to one of the two verticals and instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared and the subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the direction of the vertical currently perceived by the subject. This study demonstrates that top-down processing influences lower-level visual processing mechanisms. In another test condition, in which subjects had all perceptual cues available, the influence was even stronger.

