Crowding Effects across Depth Are Fixation-Centered for Defocused Flankers and Observer-Centered for Defocused Targets

2020 ◽  
Vol 10 (9) ◽  
pp. 596
Author(s):  
Lisa V. Eberhardt ◽  
Anke Huckauf

Depth needs to be considered to understand visual information processing in cluttered environments in the wild. Since differences in depth depend on current gaze position, eye movements were prevented by using short presentations in a real-depth setup, so that only peripheral vision was available. Under these conditions, crowding was tested, that is, the impairment of peripheral target recognition by the presence of nearby flankers. Real depth was presented by a half-transparent mirror that aligned the displays of two orthogonally arranged, distance-adjustable screens. Fixation depth was at a distance of 190 cm; defocused depth planes were presented either near or far, in front of or behind the fixation depth, all within the depth of field. In Experiments 1 and 2, flankers were presented defocused while the to-be-identified targets were on the fixation depth plane. In Experiments 3–5, targets were presented defocused while the flankers were kept on the fixation depth plane. Results for defocused flankers indicate increased crowding with increasing flanker distance in depth from the focused target (from near to far). However, for defocused targets, crowding was stronger for targets in front of the focus than for targets behind it. Thus, defocused targets produced less crowding the farther they were from the observer. To conclude, the effects of flanker depth appear to be centered on fixation, while the effects of target depth appear to be centered on the observer.

2005 ◽  
Vol 24 (4) ◽  
pp. 339-352
Author(s):  
Guillaume Giraudet ◽  
Christian Corbé ◽  
Corinne Roumes

Age-related macular degeneration (ARMD) is a frequent cause of vision loss among people over the age of 60. It is an aging process involving progressive degradation of the central retina. It does not cause total blindness, since it does not affect peripheral vision. Nonetheless, it makes it difficult to read, drive, and perform any daily activity requiring the perception of fine detail. Low-vision care consists of inducing eccentric fixation so that relevant visual targets fall on an unaffected retinal locus. This is necessary but not sufficient to enhance visual extraction. The present work aims to draw the attention of low-vision professionals to the need to develop new re-education tools. Beyond perceptual re-education linked to the optimization of visual information extraction, cognitive re-education should also be provided in order to enhance interpretation processes. Indeed, the spatial-frequency properties of the visual world no longer match the patient's perceptual habits. The visually impaired person has to relearn how to use these new sensory data in an optimal way. Contextual information can be a precious help in this learning process. An experimental study involving young people provides elements for another method of low-vision care, in terms of visual cognitive re-education.


2021 ◽  
Author(s):  
Parisa Abedi Khoozani ◽  
Vishal Bharmauria ◽  
Adrian Schuetz ◽  
Richard P. Wildes ◽  
John Douglas Crawford

Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest that these codes are initially segregated but are then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a Convolutional Neural Network (CNN) model of the visual system with a Multilayer Perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN; the CNN output, together with initial gaze position, was input to the MLP; and a decoder transformed the MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings, as well as actual monkey data where the landmark shift had a partial influence (R² = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric-egocentric integration, suggesting that it can provide a general framework for understanding these and other complex visuomotor behaviors.
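
For concreteness, the pipeline described above (a CNN encoding the visual input; its features, concatenated with the initial gaze position, feeding an MLP; a linear decoder reading out a saccade vector) can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed toy dimensions (a 64 x 64 single-channel retinal image, a 2-D gaze vector, illustrative layer sizes), not the authors' implementation:

    import torch
    import torch.nn as nn

    class VisualCNN(nn.Module):
        # Toy stand-in for the visual-system CNN: encodes a retinal image
        # (target plus shifted landmark) into a feature vector.
        def __init__(self, feat_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.fc = nn.Linear(16 * 4 * 4, feat_dim)

        def forward(self, img):
            return self.fc(self.conv(img).flatten(1))

    class SensorimotorMLP(nn.Module):
        # Toy MLP: combines the CNN features with the 2-D initial gaze
        # position and produces a hidden "motor" code (the units compared
        # with prefrontal response fields in the abstract).
        def __init__(self, feat_dim=64, hidden=128, out_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim), nn.ReLU(),
            )

        def forward(self, feats, gaze):
            return self.net(torch.cat([feats, gaze], dim=1))

    cnn, mlp = VisualCNN(), SensorimotorMLP()
    decoder = nn.Linear(32, 2)        # reads a 2-D saccade vector off the MLP output units

    img = torch.randn(1, 1, 64, 64)   # synthetic retinal image (batch of 1)
    gaze = torch.zeros(1, 2)          # initial gaze at the origin
    saccade = decoder(mlp(cnn(img), gaze))
    print(saccade.shape)              # torch.Size([1, 2])

Training such a stack end to end on targets with varying allocentric weightings, and then probing the MLP's hidden units, mirrors the logic of the comparison with the monkey recordings, though the real model's input encoding and layer sizes will differ.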


Author(s):  
Benjamin Wolfe ◽  
Ben D. Sawyer ◽  
Ruth Rosenholtz

Objective: The aim of this study is to describe information acquisition theory, explaining how drivers acquire and represent the information they need.
Background: While what drivers are aware of underlies many questions in driver behavior, existing theories do not directly address how drivers in particular, and observers in general, acquire visual information. Understanding the mechanisms of information acquisition is necessary to build predictive models of drivers' representation of the world and can be applied beyond driving to a wide variety of visual tasks.
Method: We describe our theory of information acquisition, looking to questions in driver behavior and to results from vision science research that speak to its constituent elements. We focus on the intersection of peripheral vision, visual attention, and eye movement planning, and we identify how an understanding of these visual mechanisms and processes in the context of information acquisition can inform more complete models of driver knowledge and state.
Results: We set forth our theory of information acquisition, describing the gap in understanding that it fills and how existing questions in this space can be better understood using it.
Conclusion: Information acquisition theory provides a new and powerful way to study, model, and predict what drivers know about the world, reflecting our current understanding of visual mechanisms and enabling new theories, models, and applications.
Application: Using information acquisition theory to understand how drivers acquire, lose, and update their representation of the environment will aid the development of driver assistance systems, semiautonomous vehicles, and road safety overall.


2018 ◽  
Vol 120 (5) ◽  
pp. 2522-2531 ◽  
Author(s):  
Anouk J. de Brouwer ◽  
Jason P. Gallivan ◽  
J. Randall Flanagan

During goal-directed reaching, people typically direct their gaze to the target before the start of the hand movement and maintain fixation until the hand arrives. This gaze strategy improves reach accuracy in two ways: it enables the use of central vision at the end of the movement, and it allows the use of extraretinal information in guiding the hand to the target. Here we tested whether fixating the reach target further facilitates reach accuracy by optimizing the use of peripheral vision in detecting, and rapidly responding to, reach errors during the ongoing movement. We examined automatic visuomotor corrections in response to displacements of the cursor representing the hand position as a function of gaze fixation location during unimanual goal-directed reaching. Eight fixation targets were positioned either in line with, or at different angles relative to, the straight-ahead movement direction (manipulation of fixation angle), and at different distances from the location of the visual perturbation (manipulation of fixation distance). We found that corrections were fastest and strongest when gaze was directed at the reach target compared with when gaze was directed to a different location in the workspace. The gain of the visuomotor response was strongly affected by fixation angle, and to a smaller extent by fixation distance, with lower gains as the angle or distance increased. We submit that fixating the reach target improves reach accuracy by facilitating rapid visuomotor responses to reach errors viewed in peripheral vision.

New & Noteworthy: It is well known that directing gaze to the reach target allows the use of foveal visual feedback and extraretinal information to improve the accuracy of reaching movements. Here we demonstrate that target fixation also optimizes rapid visuomotor corrections to reach errors viewed in peripheral vision, with the angle of gaze relative to the hand movement being a critical determinant of the gain of the visuomotor response.


2010 ◽  
Vol 9 (8) ◽  
pp. 1038-1038
Author(s):  
M. P. S. To ◽  
I. D. Gilchrist ◽  
T. Troscianko ◽  
P. G. Lovell ◽  
D. J. Tolhurst

2021 ◽  
Author(s):  
James Daniel Dunn ◽  
Victor Perrone de Lima Varela ◽  
Victoria Ida Nicholls ◽  
Michael Papinutto ◽  
David White ◽  
...  

People’s ability to recognize faces varies to a surprisingly large extent, and these differences are hereditary. But the cognitive and perceptual processes that give rise to these differences remain poorly understood. Here we compared the visual sampling of 10 super-recognizers, individuals who achieve the highest levels of accuracy in face recognition tasks, to that of typical viewers. Participants were asked to learn, and later recognize, a set of unfamiliar faces while their gaze position was recorded. They viewed faces through ‘spotlight’ apertures varying in size, where the face on the screen was modified in real time to constrict the visual information displayed to the participant around their gaze position. Higher recognition accuracy in super-recognizers was observed only when at least 36% of the face was visible. We also identified qualitative differences in their visual sampling that can explain their superior recognition accuracy: (1) less systematic focus on the eye region; (2) more fixations to the central region of faces; (3) greater visual exploration of faces in general. These differences were observed in both natural and spotlight viewing conditions but were most apparent when learning faces rather than during recognition. Critically, this suggests that superior recognition performance is founded on enhanced encoding of faces into memory rather than on memory retention. Together, our results point to a process whereby super-recognizers construct a more robust memory trace by accumulating samples of complex visual information across successive eye movements.
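
For readers unfamiliar with gaze-contingent displays, the ‘spotlight’ aperture manipulation amounts to masking each video frame around the eye tracker's latest gaze sample. The sketch below is a hypothetical NumPy rendering (the function name, the circular aperture shape, the neutral-grey fill, and the assumption that the face fills the frame are all ours), not the authors' stimulus code:

    import numpy as np

    def spotlight(frame, gaze_xy, visible_fraction):
        # Hypothetical gaze-contingent aperture: keep a circular window
        # centered on the gaze position whose area is `visible_fraction`
        # of the frame, and fill everything else with neutral grey.
        # `frame` is an (H, W) grayscale image; `gaze_xy` is (x, y) in pixels.
        h, w = frame.shape
        radius = np.sqrt(visible_fraction * h * w / np.pi)  # circle area = fraction of frame
        yy, xx = np.mgrid[0:h, 0:w]
        mask = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius ** 2
        out = np.full_like(frame, frame.mean())  # neutral background
        out[mask] = frame[mask]
        return out

    # Example: reveal ~36% of a synthetic 256 x 256 "face" around fixation,
    # the smallest aperture at which the super-recognizer advantage emerged.
    face = np.random.rand(256, 256)
    masked = spotlight(face, gaze_xy=(128, 128), visible_fraction=0.36)

In a real experiment this mask would be recomputed on every frame from the most recent gaze sample, so eye-tracker and display latency are critical to the manipulation.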


1995 ◽  
Vol 73 (1) ◽  
pp. 361-372 ◽  
Author(s):  
C. Ghez ◽  
J. Gordon ◽  
M. F. Ghilardi

1. The aim of this study was to determine how vision of a cursor indicating hand position on a computer screen, or vision of the limb itself, improves the accuracy of reaching movements in patients deprived of limb proprioception due to large-fiber sensory neuropathy. In particular, we wished to ascertain the contribution of such information to improved planning rather than to feedback corrections. We analyzed spatial errors and hand trajectories of reaching movements made by subjects moving a hand-held cursor on a digitizing tablet while viewing targets displayed on a computer screen. The errors made when movements were performed without vision of the arm or of a screen cursor were compared with errors made when this information was available concurrently or prior to movement.

2. Both monitoring the screen cursor and seeing the limb in peripheral vision during movement improved the accuracy of the patients' movements. Improvements produced by seeing the cursor during movement are attributable simply to feedback corrections. However, because the target was not present in the actual workspace, improvements associated with vision of the limb must involve more complex corrective mechanisms.

3. Significant improvements in performance also occurred in trials without vision that were performed after viewing the limb at rest or during movement. In particular, prior vision of the limb in motion improved the ability of patients to vary the duration of movements in different directions so as to compensate for the inertial anisotropy of the limb. In addition, there were significant reductions in directional errors, path curvature, and late secondary movements. Comparable improvements in extent, direction, and curvature were produced when subjects could see the screen cursor during alternate movements to targets in different directions.

4. The effects of viewing the limb were transient and decayed over a period of minutes once vision of the limb was no longer available.

5. It is proposed that the improvements in performance produced after vision of the limb were mediated by visual updating of internal models of the limb. Vision of the limb at rest may provide configuration information, while vision of the limb in motion provides additional dynamic information. Vision of the cursor, and the resulting ability to correct ongoing movements, is considered primarily to provide information about the dynamic properties of the limb and its response to neural commands.


2021 ◽  
pp. 095679762098446
Author(s):  
Suzette Fernandes ◽  
Monica S. Castelhano

When you walk into a large room, you perceive visual information that is both close to you in depth and farther in the background. Here, we investigated how initial scene representations are affected by information across depth. We examined the role of background and foreground information on scene gist by using chimera scenes (images with a foreground and background from different scene categories). Across three experiments, we found a foreground bias: Information in the foreground initially had a strong influence on the interpretation of the scene. This bias persisted when the initial fixation position was on the scene background and when the task was changed to emphasize scene information. We conclude that the foreground bias arises from initial processing of scenes for understanding, which suggests that scene information closer to the observer is initially prioritized. We discuss the implications for theories of scene and depth perception.


2011 ◽  
Vol 7 (4) ◽  
pp. 499-501 ◽  
Author(s):  
Emily Baird ◽  
Eva Kreiss ◽  
William Wcislo ◽  
Eric Warrant ◽  
Marie Dacke

To avoid collisions when navigating through cluttered environments, flying insects must control their flight so that their sensory systems have time to detect obstacles and avoid them. To do this, day-active insects rely primarily on the pattern of apparent motion generated on the retina during flight (optic flow). However, many flying insects are active at night, when obtaining reliable visual information for flight control presents much more of a challenge. To assess whether nocturnal flying insects also rely on optic flow cues to control flight in dim light, we recorded flights of the nocturnal neotropical sweat bee, Megalopta genalis, flying along an experimental tunnel when: (i) the visual texture on each wall generated strong horizontal (front-to-back) optic flow cues, (ii) the texture on only one wall generated these cues, and (iii) horizontal optic flow cues were removed from both walls. We find that Megalopta increase their groundspeed when horizontal motion cues in the tunnel are reduced (conditions (ii) and (iii)). However, differences in the amount of horizontal optic flow on each wall of the tunnel (condition (ii)) do not affect the centred position of the bee within the flight tunnel. To better understand the behavioural response of Megalopta, we repeated the experiments on day-active bumble-bees (Bombus terrestris). Overall, our findings demonstrate that, despite the limitations imposed by dim light, Megalopta, like their day-active relatives, rely heavily on vision to control flight, but that they use visual cues in a different manner from diurnal insects.

