The spotlight of attention: shifting, resizing and splitting receptive fields when processing visual motion

e-Neuroforum ◽  
2012 ◽  
Vol 18 (3) ◽  
Author(s):  
S. Treue ◽  
J.C. Martinez-Trujillo

Abstract: In the visual system, receptive fields represent the spatial selectivity of neurons for a given set of visual inputs. Their invariance is thought to be caused by a hardwired input configuration, which ensures a stable ‘labeled line’ code for the spatial position of visual stimuli. On the other hand, changeable receptive fields can provide the visual system with flexibility for allocating processing resources in space. The allocation of spatial attention, often referred to as the spotlight of attention, is a behavioral equivalent of visual receptive fields. It dynamically modulates the spatial sensitivity to visual information as a function of the current attentional focus of the organism. Here we focus on the brain system for encoding visual motion information and review recent findings documenting interactions between spatial attention and receptive fields in the visual cortex of primates. Such interactions strike a careful balance between the benefits of invariance and those derived from the attentional modulation of information processing according to the current behavioral goals.
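A minimal illustrative sketch, not a model taken from the review: one standard way to capture attentional shifting and shrinking of receptive fields is a multiplicative Gaussian gain applied to a Gaussian receptive-field profile, which moves the effective peak toward the attended location and narrows the profile. All parameters and the Gaussian forms below are our assumptions.

```python
# Sketch: a Gaussian receptive field multiplied by a Gaussian attentional
# gain yields an effective profile whose peak shifts toward the attended
# location and narrows -- the qualitative effect the review describes.
# All parameter values are illustrative.
import numpy as np

def gaussian(x, center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

x = np.linspace(-10, 10, 2001)                 # visual-field axis (deg)
rf = gaussian(x, center=0.0, sigma=3.0)        # baseline RF centered at 0 deg
gain = 1.0 + 2.0 * gaussian(x, center=4.0, sigma=3.0)  # attention at 4 deg

effective = rf * gain
shift = x[np.argmax(effective)]                # peak moves toward attention

# effective width: distance between the half-maximum crossings
half = effective >= effective.max() / 2
width = x[half][-1] - x[half][0]
print(f"peak shifted to {shift:+.2f} deg; half-width {width:.2f} deg "
      f"(baseline half-width 7.06 deg)")
```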

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 59-59 ◽
Author(s):  
J M Zanker ◽  
M P Davey

Visual information processing in primate cortex is based on a highly ordered representation of the surrounding world. In addition to the retinotopic mapping of the visual field, systematic variations of the orientation tuning of neurons have been described electrophysiologically for the first stages of the visual stream. To understand how position and orientation are jointly represented, and to give an adequate account of cortical architecture, an essential step is to define the minimum spatial requirements for the detection of orientation. We addressed this basic question by comparing computer simulations of simple orientation filters with psychophysical experiments in which the orientation of small lines had to be detected at various positions in the visual field. At sufficiently high contrast levels, the minimum physical length of a line whose orientation can just be resolved is not constant across eccentricities but covaries inversely with the cortical magnification factor. A line needs to span less than 0.2 mm on the cortical surface in order to be recognised as oriented, independently of the eccentricity at which the stimulus is presented. This seems to indicate that human performance for this task approaches the physical limits, requiring hardly more than three input elements to be activated in order to detect the orientation of a highly visible line segment. Combined with the estimates for receptive field sizes of orientation-selective filters derived from computer simulations, this experimental result may nourish speculation about how the rather local elementary process underlying orientation detection in the human visual system can be assembled to form the much larger receptive fields of the orientation-sensitive neurons known to exist in the primate visual system.
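The abstract's central claim, a roughly constant cortical span of ~0.2 mm, implies that the minimum line length in visual degrees grows as the inverse of the cortical magnification M(E). The back-of-envelope sketch below uses the Horton & Hoyt (1991) human V1 estimate for M(E); that choice is our assumption, not a value taken from this study.

```python
# Minimum detectable line length vs. eccentricity, assuming a fixed
# ~0.2 mm cortical span and the Horton & Hoyt (1991) magnification
# estimate M(E) = 17.3 / (E + 0.75) mm/deg (an assumption on our part).
CORTICAL_SPAN_MM = 0.2

def magnification_mm_per_deg(ecc_deg: float) -> float:
    """Linear cortical magnification M(E) = 17.3 / (E + 0.75) mm/deg."""
    return 17.3 / (ecc_deg + 0.75)

for ecc in (0.0, 2.5, 5.0, 10.0, 20.0):
    min_len_deg = CORTICAL_SPAN_MM / magnification_mm_per_deg(ecc)
    print(f"eccentricity {ecc:5.1f} deg -> minimum line length "
          f"{min_len_deg * 60:5.1f} arcmin")
```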


2016 ◽  
Vol 23 (5) ◽  
pp. 529-541 ◽  
Author(s):  
Sara Ajina ◽  
Holly Bridge

Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.


2021 ◽  
Author(s):  
Diane Rekow ◽  
Jean-Yves Baudouin ◽  
Karine Durand ◽  
Arnaud Leleu

Visual categorization is the brain's ability to respond rapidly and automatically to widely variable visual inputs in a category-selective manner (i.e., distinct responses between categories and similar responses within categories). Whether category-selective neural responses are purely visual or can be influenced by other sensory modalities remains unclear. Here, we test whether odors modulate visual categorization, expecting that odors facilitate the neural categorization of congruent visual objects, especially when the visual category is ambiguous. Scalp electroencephalogram (EEG) was recorded while natural images depicting various objects were displayed in rapid 12-Hz streams (i.e., 12 images/second) and variable exemplars of a target category (either human faces, cars, or facelike objects in dedicated sequences) were interleaved as every 9th stimulus to tag category-selective responses at 12/9 = 1.33 Hz in the EEG frequency spectrum. During visual stimulation, participants (N = 26) were implicitly exposed to odor contexts (either body, gasoline, or baseline odors) and performed an orthogonal cross-detection task. We identify clear category-selective responses to every category over the occipito-temporal cortex, with the largest response for human faces and the lowest for facelike objects. Critically, body odor boosts the response to the ambiguous facelike objects (i.e., either perceived as nonface objects or as faces) over the right hemisphere, especially for participants reporting the odor's presence post-stimulation. By contrast, odors do not significantly modulate the other category-selective responses, nor the general visual response recorded at 12 Hz, revealing a specific influence on the categorization of congruent ambiguous stimuli. Overall, these findings support the view that the brain actively uses cues from the different senses to readily categorize visual inputs, and that olfaction, which is generally considered poorly functional in humans, is well placed to disambiguate visual information.
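The frequency-tagging logic can be made concrete with a toy simulation (not the authors' analysis pipeline): stimuli appear at 12 Hz and every 9th one belongs to the target category, so any category-selective response must repeat at 12/9 = 1.33 Hz and its harmonics in the EEG spectrum. Response shapes, amplitudes, and the sampling rate below are invented for illustration.

```python
# Toy frequency-tagging demo: a generic response to every stimulus plus a
# stronger response to each 9th (target) stimulus produces spectral peaks
# at the base rate (12 Hz) and at the category rate (1.33 Hz + harmonics).
import numpy as np

fs = 512                        # sampling rate in Hz (assumed)
base_hz, target_every = 12.0, 9
dur = 27.0                      # seconds: an integer number of 0.75-s cycles
t = np.arange(int(fs * dur)) / fs

signal = np.zeros_like(t)
onsets = np.arange(0, dur, 1 / base_hz)
for i, onset in enumerate(onsets):
    amp = 3.0 if i % target_every == 0 else 1.0   # stronger to targets
    idx = (t >= onset) & (t < onset + 0.05)       # crude 50-ms response
    signal[idx] += amp

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (base_hz / target_every, 2 * base_hz / target_every, base_hz):
    k = np.argmin(np.abs(freqs - f))
    print(f"{freqs[k]:6.3f} Hz : amplitude {spectrum[k]:.4f}")
```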


Author(s):  
Farran Briggs

Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements that are reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to: (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.


2008 ◽  
Vol 364 (1515) ◽  
pp. 331-339 ◽  
Author(s):  
Andrew J King

The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.
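The claim that the more reliable visual cue "provides a reference" for auditory space is often formalized with textbook reliability-weighted cue combination; the sketch below is that standard model, not anything from this review, and the noise values are illustrative assumptions.

```python
# Reliability-weighted (maximum-likelihood) cue combination: each cue is
# weighted by its precision (1/variance), so the less variable visual
# estimate dominates the fused location -- one account of why vision
# calibrates auditory space. Values are illustrative.
def fuse(x_vis, var_vis, x_aud, var_aud):
    """Precision-weighted average of visual and auditory location cues."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    return w_vis * x_vis + (1 - w_vis) * x_aud

# Visual cue: accurate (low variance). Auditory cue: displaced, noisier.
# The fused estimate lands close to the visual one (~0.59 deg here).
print(fuse(x_vis=0.0, var_vis=1.0, x_aud=10.0, var_aud=16.0))
```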


Author(s):  
Brian Rogers

‘The physiology and anatomy of the visual system’ describes what we have learned from neurophysiology and anatomy over the past eighty years and what this tells us about the functional significance of the circuits involved in visual information processing. It explains how psychologists and physiologists use the terms ‘mechanism’ and ‘process’. For physiologists, a mechanism is linked to the actions of individual neurons, neural pathways, and the ways in which the neurons are connected. For psychologists, the term is typically used to describe the processes the neural circuits may carry out. The human retina is described, with explanations of lateral inhibition, receptive fields, and feature detectors, as well as the visual cortex and the different visual pathways.
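Lateral inhibition, one of the retinal mechanisms the chapter covers, lends itself to a short sketch: each unit is excited by its own input and inhibited by its neighbours, which enhances responses at luminance edges (the classic Mach-band effect). The kernel weights below are arbitrary stand-ins, not measured values.

```python
# Illustrative lateral inhibition in one dimension: centre excitation
# minus surround inhibition applied across a luminance step.
import numpy as np

luminance = np.array([1.0] * 8 + [3.0] * 8)        # a step edge
kernel = np.array([-0.2, -0.2, 1.0, -0.2, -0.2])   # centre minus surround

response = np.convolve(luminance, kernel, mode="same")
print(np.round(response, 2))
# Flat regions give small uniform responses; the units flanking the edge
# under- and over-shoot, signalling where the luminance changes.
```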


Author(s):  
Jason McCarthy ◽  
Patricia Castro ◽  
Rachael Cottier ◽  
Joseph Buttell ◽  
Qadeer Arshad ◽  
...  

Abstract: A coherent perception of spatial orientation is key to maintaining postural control. To achieve this, the brain must access sensory inputs encoding both body and head position and integrate them with incoming visual information. Here we isolated the contribution of proprioception to verticality perception and further investigated whether changing the body position without moving the head can modulate visual dependence, the extent to which an individual relies on visual cues for spatial orientation. Spatial orientation was measured in ten healthy individuals [6 female; 25–47 years (SD 7.8 years)] using a virtual-reality-based subjective visual vertical (SVV) task. Individuals aligned an arrow to their perceived gravitational vertical, initially against a static black background (10 trials), and then in further conditions with clockwise and counterclockwise background rotations (10 trials each). In all conditions, subjects were seated first in the upright position, then with the trunk tilted 20° to the right, followed by 20° to the left, while the head was always aligned vertically. The SVV error was modulated by trunk position: it was greater when the trunk was tilted to the left than with right-tilted or upright trunk positions (p < 0.001). Likewise, background rotation affected SVV errors, which were greater with counterclockwise visual rotation than with a static background or clockwise roll motion (p < 0.001). Our results show that the interaction between neck and trunk proprioception can modulate how visual inputs affect spatial orientation.
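For readers unfamiliar with the measure, an SVV error is simply the signed angle between the arrow the subject sets and true gravitational vertical, averaged over a condition's trials. The sketch below is our reading of that standard scoring, not the authors' code, and the trial values are invented.

```python
# Schematic SVV scoring: signed per-trial error relative to gravitational
# vertical (positive = clockwise), averaged within each condition.
import statistics

def svv_errors(set_angles_deg, true_vertical_deg=0.0):
    """Signed error per trial: positive = clockwise of vertical."""
    return [a - true_vertical_deg for a in set_angles_deg]

conditions = {                                   # hypothetical trial data
    "upright, static":     [0.4, -0.2, 0.9, 0.1, -0.5],
    "left tilt, CCW drum": [3.1, 2.4, 4.0, 2.8, 3.5],
}
for name, trials in conditions.items():
    errs = svv_errors(trials)
    print(f"{name:22s} mean SVV error {statistics.mean(errs):+.2f} deg")
```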


Neuroforum ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Klaudia P. Szatko ◽  
Katrin Franke

Abstract: To provide a compact and efficient input to the brain, sensory systems separate the incoming information into parallel feature channels. In the visual system, parallel processing starts in the retina. Here, the image is decomposed into multiple retinal output channels, each selective for a specific set of visual features like motion, contrast, or edges. In this article, we will summarize recent findings on the functional organization of the retinal output, the neural mechanisms underlying its diversity, and how single visual features, like color, are extracted by the retinal network. Unraveling how the retina – as the first stage of the visual system – filters the visual input is an important step toward understanding how visual information processing guides behavior.
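The parallel-channel idea can be illustrated with a toy decomposition: one input image, several filters applied side by side, each channel selective for a different feature. The filters below are textbook stand-ins, not models of real retinal ganglion cell types.

```python
# Toy parallel feature channels: contrast (centre-surround), edges
# (horizontal gradient), and motion (frame difference) computed from the
# same input.  All filters are crude illustrative approximations.
import numpy as np

rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
frame1 = np.roll(frame0, shift=1, axis=1)          # the scene moves right

def center_surround(img):
    # centre minus 4-neighbour surround: a crude contrast channel
    pad = np.pad(img, 1, mode="edge")
    surround = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
    return img - surround

channels = {
    "contrast": center_surround(frame1),
    "edges":    np.abs(np.diff(frame1, axis=1, prepend=frame1[:, :1])),
    "motion":   np.abs(frame1 - frame0),
}
for name, ch in channels.items():
    print(f"{name:8s} channel, mean activation {ch.mean():.3f}")
```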


Author(s):  
Min Guo ◽  
Yinghua Yu ◽  
Jiajia Yang ◽  
Jinglong Wu

To perceive our world, we make full use of multiple sources of sensory information derived from different modalities, spanning five basic sensory systems: visual, auditory, tactile, olfactory, and gustatory. In the real world, we normally acquire information from different sensory receptors simultaneously. Therefore, multisensory integration in the brain plays an important role in performance and perception. This review focuses on crossmodal processing between vision and touch. Many previous studies have indicated that visual information affects tactile perception and that, in turn, tactile stimulation activates MT, the main visual motion information processing area. However, few studies have explored how crossmodal information between vision and touch is processed. Here, the authors highlight the brain's crossmodal processing mechanism. They show that integration between vision and touch has two stages: combination and integration.


1990 ◽  
Vol 5 (5) ◽  
pp. 489-495 ◽  
Author(s):  
Douglas R. Wylie ◽  
Barrie J. Frost

Abstract: Previous electrophysiological studies have shown that neurons in the nucleus of the basal optic root (nBOR) of the pigeon respond best to wholefield stimuli moving slowly in a particular direction in the contralateral visual field. In this study, we have found that some nBOR neurons respond to wholefield stimulation of both eyes. These binocular neurons have spatially separate receptive fields in both visual fields. Some binocular neurons prefer the same direction of wholefield motion in both eyes, and thus respond best to wholefield visual motion which would result from translation movements of the bird, either ascent, descent, or forward and backward motion. Other neurons prefer opposite directions of wholefield motion in each eye and therefore respond optimally to wholefield visual motion simulating rotational movements of the bird, either roll or yaw. These binocular neurons may play a crucial part in the locomotor behavior of the pigeon by providing visual information distinguishing translational and rotational movements.
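The classification rule stated in the abstract can be written out directly: same preferred wholefield direction in both eyes signals translation, opposite directions signal rotation. The sketch below is our restatement of that rule, with hypothetical direction labels and helper function.

```python
# Restating the abstract's rule as code: a binocular nBOR neuron that
# prefers the SAME wholefield direction in both eyes is suited to signal
# translation of the bird, while OPPOSITE preferred directions suit
# rotation (roll or yaw).  Direction labels are illustrative.
def self_motion_class(left_eye_pref: str, right_eye_pref: str) -> str:
    """Each preference is one of 'up', 'down', 'forward', 'backward',
    given in each eye's own visual-field coordinates."""
    opposites = {"up": "down", "down": "up",
                 "forward": "backward", "backward": "forward"}
    if left_eye_pref == right_eye_pref:
        return "translation-selective (e.g., ascent, descent, fore/aft)"
    if opposites[left_eye_pref] == right_eye_pref:
        return "rotation-selective (e.g., roll or yaw)"
    return "mixed / unclassified"

print(self_motion_class("up", "up"))       # translation (ascent)
print(self_motion_class("up", "down"))     # rotation (roll)
```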

