Salient objects dominate the central fixation bias when orienting toward images

2021 ◽  
Vol 21 (8) ◽  
pp. 23
Author(s):  
Christian Wolf ◽  
Markus Lappe

PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e2946 ◽  
Author(s):  
Jiawei Xu ◽  
Shigang Yue ◽  
Federica Menchinelli ◽  
Kun Guo

Recent research progress on human visual attention allocation in scene perception, and on its computational simulation, is based mainly on studies with static images. Natural vision, however, requires us to extract visual information that constantly changes due to egocentric movements or the dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the ‘ground truth’ to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The models' inferior predictive power was evident from gaze predictions that were indistinguishable across stimulus presentation sequences and from a weak central fixation bias. Our findings suggest that a realistic visual attention model for processing dynamic scenes should incorporate human visual sensitivity to spatio-temporal regularity and the central fixation bias.
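The evaluation pipeline described above can be illustrated with a short sketch: score a model's saliency map at the recorded fixation locations and compare it against a Gaussian centre-bias baseline. This is a minimal, hypothetical example, not the authors' code; the NSS metric, frame size, sigma, and fixation coordinates are all assumptions.

```python
# Minimal sketch (not from the paper): scoring a saliency map against human
# fixations and building a Gaussian centre-bias baseline. Frame size, sigma,
# and fixation coordinates are illustrative assumptions.
import numpy as np

def nss(saliency_map, fixations):
    """Normalised scanpath saliency: mean z-scored saliency at fixated pixels."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    rows, cols = zip(*fixations)
    return s[list(rows), list(cols)].mean()

def centre_bias_map(height, width, sigma_frac=0.25):
    """Isotropic Gaussian centred on the frame, a simple central-fixation-bias baseline."""
    y, x = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma = sigma_frac * min(height, width)
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

# Hypothetical 480x640 video frame with three recorded fixations (row, col).
fixations = [(240, 320), (200, 500), (300, 100)]
baseline = centre_bias_map(480, 640)
print("centre-bias NSS:", nss(baseline, fixations))
```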


2017 ◽  
Vol 17 (13) ◽  
pp. 3 ◽  
Author(s):  
Lars O. M. Rothkegel ◽  
Hans A. Trukenbrod ◽  
Heiko H. Schütt ◽  
Felix A. Wichmann ◽  
Ralf Engbert

2016 ◽  
Vol 16 (12) ◽  
pp. 331
Author(s):  
Lars Rothkegel ◽  
Hans Trukenbrod ◽  
Heiko Schütt ◽  
Felix Wichmann ◽  
Ralf Engbert

BMJ ◽  
1961 ◽  
Vol 2 (5267) ◽  
pp. 1610-1612 ◽  
Author(s):  
J. Scully

Author(s):  
Herbert Moskowitz ◽  
Satanand Sharma

Twelve males were tested under a control and two alcohol treatments in a perimeter apparatus used for testing peripheral vision. They were required to fixate either on a steady-state central fixation light and detect peripheral lights, or to count blinks produced by the cessations of the fixation light and to detect peripheral lights. Alcohol produced an impairment of peripheral vision only under conditions where the central fixation light blinked and thus required information processing. No performance decrement occurred when the central light did not blink. The results suggest that alcohol interferes with central information processing rather than peripheral sensory mechanisms.


1969 ◽  
Vol 51 (2) ◽  
pp. 471-493 ◽  
Author(s):  
M. F. LAND

1. Movements made by the principal eyes of jumping spiders (Phidippus and Metaphidippus spp.) have been investigated using an ophthalmoscopic technique which permits simultaneous observation and stimulation of the retinal surface.
2. The eye-movements are produced by six muscles. Four are attached to the carapace, and displace each retina latero-medially and dorso-ventrally. The remaining pair are thin bands of muscle which encircle the eye-tube. These twist the eye-tube, rotating the retina about the visual axis (torsion).
3. The nerve supplying these muscles contains only six axons. Each axon terminates in one of the six muscles.
4. Four types of eye-movements are observed: spontaneous activity, saccades, tracking and scanning. All movements are usually conjugate.
5. Spontaneous activity consists of a very variable, periodic side-to-side motion of the retinae. It is associated with states of high excitability, and occurs whether or not there is any structure in the field of view.
6. Saccades occur when a small stimulus (e.g. a dark dot) is presented to, or moved upon, the retinae of either the principal eyes or the antero-lateral eyes. In a saccade the retinae move towards the image of the target so that they come to rest with their central regions fixated on the target.
7. If the target moves, the retinae track it, maintaining central fixation.
8. Scanning normally follows a saccade. It consists of an oscillatory, side-to-side movement of the retinae across the stimulus, with a period of 1-2 sec., and a simultaneous torsional movement in which the retinae partially rotate about the visual axes, through an angle of approximately 50° and with a period of 5-15 sec.
9. Jumping spiders distinguish other jumping spiders from potential prey by the geometry of their legs. It is suggested that scanning is a pattern-recognition procedure in which the torsional movements are concerned with the spatial alignment of line or edge detectors, and the horizontal component with providing relative motion between these detectors and the stationary stimulus.
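The compound scanning motion in point 8 can be pictured with a tiny simulation. This is purely an illustrative sketch under assumed waveforms and amplitudes; the paper reports only the periods and the approximate torsional angle.

```python
# Minimal sketch (an assumption, not from the paper): the "scanning" movement
# modelled as a fast side-to-side oscillation of the retinae (period ~1.5 s)
# superimposed on a slow torsional rotation of roughly 50 deg total swing
# about the visual axis (period ~10 s). Amplitudes are arbitrary.
import numpy as np

t = np.linspace(0.0, 20.0, 2001)                     # time in seconds
lateral = np.sin(2.0 * np.pi * t / 1.5)              # normalised side-to-side sweep
torsion_deg = 25.0 * np.sin(2.0 * np.pi * t / 10.0)  # +/-25 deg, i.e. ~50 deg peak-to-peak

print(f"torsion swing: {torsion_deg.max() - torsion_deg.min():.0f} deg")
```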


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 195-195
Author(s):  
A M Johns ◽  
B J Rogers ◽  
R A Eagle

In order to investigate how cyclopean motion is coded by the visual system, the points of subjective equality (PSEs) were measured for (i) speed, (ii) spatial frequency (SF), and (iii) temporal frequency (TF) as a function of peak-to-trough disparity amplitude for cyclopean corrugations. Two panels (3.0 deg × 7.0 deg) of dynamic random-dot stereograms were located 0.5 deg on either side of a central fixation spot. Each panel contained a horizontally oriented sinusoidal cyclopean corrugation whose SF, TF, and disparity amplitude were under experimental control. On each trial, the cyclopean corrugations were displaced vertically in opposite directions. Subjects judged which panel contained the higher SF, TF, or speed, depending on the condition. The reference stimulus was a sinusoidal corrugation with SF=0.4 cycles deg−1, TF=0.8 Hz, speed of 2.0 deg s−1, and peak-to-trough disparity amplitude of 8 min arc around fixation. We found that, as the peak-to-trough disparity amplitude of the test stimulus increased from 2 min arc to 32 min arc, the PSE for speed decreased from 2.21 deg s−1 to 1.67 deg s−1, compared to a reference speed of 2.00 deg s−1. However, across the same levels of disparity amplitude, the PSE for SF remained constant and the PSE for TF varied but with no consistent pattern. Thus, perceived speed increases with increased disparity amplitude. As all levels of disparity amplitude were above threshold, cyclopean speed cannot be detected by a purely ‘feature-tracking’ mechanism. These metamers and the poor TF matching performance suggest that cyclopean speed is coded by a small number of temporal mechanisms.
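As a rough illustration of how a PSE can be extracted from such matching judgements, the sketch below fits a cumulative-Gaussian psychometric function to "test judged faster" proportions. None of the numbers come from the study; they are invented to show the fitting step only.

```python
# Minimal sketch (not the authors' analysis): estimating a point of subjective
# equality (PSE) by fitting a cumulative-Gaussian psychometric function to the
# proportion of "test judged faster" responses. All data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """P(test judged faster than reference) as a function of test speed."""
    return norm.cdf(x, loc=pse, scale=sigma)

test_speeds = np.array([1.0, 1.5, 2.0, 2.5, 3.0])         # deg/s (hypothetical)
p_test_faster = np.array([0.05, 0.20, 0.55, 0.85, 0.97])  # hypothetical proportions

(pse, sigma), _ = curve_fit(psychometric, test_speeds, p_test_faster, p0=[2.0, 0.5])
print(f"estimated PSE: {pse:.2f} deg/s (reference = 2.00 deg/s)")
```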


Vision ◽  
2019 ◽  
Vol 3 (2) ◽  
pp. 16
Author(s):  
Blair ◽  
Ristic

Attention is classically classified by mode of engagement into voluntary and reflexive, and by type of operation into covert and overt. The first distinguishes whether attention is elicited intentionally or by unexpected events; the second, whether attention is directed with or without eye movements. Recently, this taxonomy has been expanded to include automated orienting engaged by overlearned symbols and combined attention engaged by a combination of several modes of function. However, so far, combined effects have been demonstrated in covert conditions only, and thus here we examined whether attentional modes also combine in overt responses. To do so, we elicited automated, voluntary, and combined orienting in covert conditions, i.e., when participants responded manually while maintaining central fixation, and in overt conditions, i.e., when they responded by looking. The data indicated typical effects for the automated and voluntary conditions in both covert and overt measures, with the magnitude of the combined effect exceeding both the magnitude of each mode alone and their additive sum. No differences in the combined effects emerged across covert and overt conditions. As such, these results show that attentional systems combine similarly in covert and overt responses and highlight attention’s dynamic flexibility in facilitating human behavior.
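The super-additivity claim can be made concrete with a toy calculation: treat the attention effect in each condition as the reaction-time benefit of a cued over an uncued target and compare the combined effect with the sum of the single-mode effects. The numbers below are invented purely for illustration.

```python
# Minimal sketch (illustrative only, not the authors' data): cueing effects
# computed as RT(uncued) - RT(cued) for each orienting mode, plus a check of
# whether the combined effect exceeds the additive sum of the single modes.
mean_rt_ms = {                      # hypothetical mean reaction times (ms)
    "automated": {"cued": 410, "uncued": 430},
    "voluntary": {"cued": 405, "uncued": 435},
    "combined":  {"cued": 395, "uncued": 465},
}

effects = {mode: rt["uncued"] - rt["cued"] for mode, rt in mean_rt_ms.items()}
additive_sum = effects["automated"] + effects["voluntary"]
print("effects (ms):", effects, "| additive sum:", additive_sum)
print("combined exceeds additive sum:", effects["combined"] > additive_sum)
```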

