Watching the Effects of Gravity. Vestibular Cortex and the Neural Representation of “Visual” Gravity

2021 · Vol 15
Author(s): Sergio Delle Monache, Iole Indovina, Myrka Zago, Elena Daprati, Francesco Lacquaniti, ...

Gravity is a physical constraint all terrestrial species have adapted to through evolution. Indeed, gravity effects are taken into account in many forms of interaction with the environment, from the seemingly simple task of maintaining balance to the complex motor skills performed by athletes and dancers. Graviceptors, primarily located in the vestibular otolith organs, feed the Central Nervous System with information related to the gravity acceleration vector. This information is integrated with signals from semicircular canals, vision, and proprioception in an ensemble of interconnected brain areas, including the vestibular nuclei, cerebellum, thalamus, insula, retroinsula, parietal operculum, and temporo-parietal junction, in the so-called vestibular network. Classical views consider this stage of multisensory integration as instrumental to sort out conflicting and/or ambiguous information from the incoming sensory signals. However, there is compelling evidence that it also contributes to an internal representation of gravity effects based on prior experience with the environment. This a priori knowledge could be engaged by various types of information, including sensory signals like the visual ones, which lack a direct correspondence with physical gravity. Indeed, the retinal accelerations elicited by gravitational motion in a visual scene are not invariant, but scale with viewing distance. Moreover, the “visual” gravity vector may not be aligned with physical gravity, as when we watch a scene on a tilted monitor or in weightlessness. This review discusses experimental evidence from behavioral, neuroimaging (connectomics, fMRI, TMS), and patient studies supporting the idea that the internal model estimating the effects of gravity on visual objects is constructed by transforming the vestibular estimates of physical gravity, computed in the brainstem and cerebellum, into internalized estimates of virtual gravity, stored in the vestibular cortex. The integration of the internal model of gravity with visual and non-visual signals would take place at multiple levels in the cortex and might involve recurrent connections between early visual areas engaged in the analysis of spatio-temporal features of the visual stimuli and higher visual areas in temporo-parietal-insular regions.
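The distance scaling mentioned in the abstract follows from small-angle geometry: an object falling under gravity g seen from viewing distance D produces a retinal (angular) acceleration of roughly g / D. A minimal numerical sketch (not from the review; names are illustrative):

```python
import numpy as np

# Minimal sketch: under the small-angle approximation, the angular position
# of a falling object is theta ~ x / D, so its angular acceleration is
# x'' / D = g / D. The same gravitational event thus looks "faster" on the
# retina when viewed from nearer.
G = 9.81  # physical gravitational acceleration, m/s^2

def retinal_acceleration(distance_m: float) -> float:
    """Angular acceleration (rad/s^2) of a free-falling object at distance_m."""
    return G / distance_m

for d in (0.5, 2.0, 10.0):
    print(f"viewing distance {d:>4.1f} m -> {retinal_acceleration(d):.2f} rad/s^2")
```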

2021
Author(s): Yingying Huang, Frank Pollick, Ming Liu, Delong Zhang

Visual mental imagery and visual perception have been shown to share a hierarchical topological visual structure of neural representation. Meanwhile, many studies have reported a dissociation between the neural substrates of mental imagery and perception in both function and structure. However, we have limited knowledge of how the hierarchical visual cortex is involved in internally generated mental imagery and in perception driven by visual input. Here we used a dataset from previous fMRI research (Horikawa & Kamitani, 2017), which included a visual perception experiment and a mental imagery experiment with human participants. We trained two types of voxel-wise encoding models, based on Gabor features and on activity patterns of high visual areas, to predict activity in the early visual cortex (EVC, i.e., V1, V2, V3) during perception, and then evaluated the performance of these models during mental imagery. Our results showed that during both perception and imagery, activity in the EVC could be independently predicted by the Gabor features and by the activity of high visual areas via the encoding models, suggesting that perception and imagery might share neural representation in the EVC. We further found that there existed a Gabor-specific and a non-Gabor-specific neural response pattern to stimuli in the EVC, both of which were shared by perception and imagery. These findings provide insight into how visual perception and imagery share representation in the EVC.
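As a rough illustration of the voxel-wise encoding approach described above, the following sketch fits ridge-regression models from stimulus features to voxel responses on perception data and then tests them on imagery data. All arrays, shapes, and names are placeholders, not the study's actual features or data:

```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

# Placeholder data: X_* are stimulus feature matrices (e.g., Gabor wavelet
# energies, n_trials x n_features); Y_* are EVC voxel responses
# (n_trials x n_voxels). Real data would come from the fMRI experiments.
rng = np.random.default_rng(0)
X_percept, Y_percept = rng.standard_normal((200, 50)), rng.standard_normal((200, 30))
X_imagery, Y_imagery = rng.standard_normal((100, 50)), rng.standard_normal((100, 30))

# Fit one ridge model per voxel on perception data, then predict imagery data.
model = Ridge(alpha=1.0).fit(X_percept, Y_percept)
pred = model.predict(X_imagery)

# Per-voxel prediction accuracy: correlation between predicted and measured
# imagery responses; above-chance accuracy would suggest shared representation.
accuracy = np.array([pearsonr(pred[:, v], Y_imagery[:, v])[0]
                     for v in range(Y_imagery.shape[1])])
print(f"median voxel prediction r = {np.median(accuracy):.3f}")
```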


2018 · Vol 119 (1) · pp. 73-83
Author(s): Shawn D. Newlands, Ben Abbatematteo, Min Wei, Laurel H. Carney, Hongge Luan

Roughly half of all vestibular nucleus neurons without eye movement sensitivity respond to both angular rotation and linear acceleration. Linear acceleration signals arise from otolith organs, and rotation signals arise from semicircular canals. In the vestibular nerve, these signals are carried by different afferents. Vestibular nucleus neurons represent the first point of convergence for these distinct sensory signals. This study systematically evaluated how rotational and translational signals interact in single neurons in the vestibular nuclei: multisensory integration at the first opportunity for convergence between these two independent vestibular sensory signals. Single-unit recordings were made from the vestibular nuclei of awake macaques during yaw rotation, translation in the horizontal plane, and combinations of rotation and translation at different frequencies. The overall response magnitude of the combined translation and rotation was generally less than the sum of the magnitudes in responses to the stimuli applied independently. However, we found that under conditions in which the peaks of the rotational and translational responses were coincident these signals were approximately additive. With presentation of rotation and translation at different frequencies, rotation was attenuated more than translation, regardless of which was at a higher frequency. These data suggest a nonlinear interaction between these two sensory modalities in the vestibular nuclei, in which coincident peak responses are proportionally stronger than other, off-peak interactions. These results are similar to those reported for other forms of multisensory integration, such as audio-visual integration in the superior colliculus. NEW & NOTEWORTHY This is the first study to systematically explore the interaction of rotational and translational signals in the vestibular nuclei through independent manipulation. The results of this study demonstrate nonlinear integration leading to maximum response amplitude when the timing and direction of peak rotational and translational responses are coincident.
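A useful linear baseline for these results is the phasor sum of two same-frequency sinusoidal response components: even without any nonlinearity, the combined amplitude equals the arithmetic sum of the individual amplitudes only when the response peaks coincide. The sketch below (an illustrative model, not the authors' analysis) makes this explicit; attenuation beyond this baseline is what points to a nonlinear interaction:

```python
import numpy as np

# Treat the rotational and translational responses of a neuron as
# same-frequency sinusoids with gains g_rot, g_trans and a relative
# response phase. The amplitude of their sum follows the phasor formula.
def combined_amplitude(g_rot, g_trans, phase_diff_rad):
    # Amplitude of g_rot*sin(wt) + g_trans*sin(wt + phase_diff).
    return np.sqrt(g_rot**2 + g_trans**2 +
                   2 * g_rot * g_trans * np.cos(phase_diff_rad))

g_r, g_t = 1.0, 1.0
for deg in (0, 45, 90, 180):
    amp = combined_amplitude(g_r, g_t, np.radians(deg))
    print(f"peak offset {deg:>3d} deg: combined {amp:.2f} vs linear sum {g_r + g_t:.2f}")
```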


2014 · Vol 2014 · pp. 1-10
Author(s): Francesco Lacquaniti, Gianfranco Bosco, Silvio Gravano, Iole Indovina, Barbara La Scaleia, ...

Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which has been implicated in fatal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity from early infancy. This ability depends on the fact that gravity effects are stored in brain regions that integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.
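The tilt-translation ambiguity described above can be made concrete: because the otoliths sense the gravitoinertial force, a sustained forward acceleration a is indistinguishable, from otolith signals alone, from a backward pitch tilt of atan(a / g). This equivalence underlies the somatogravic illusion implicated in aviation accidents. A back-of-envelope sketch (function names are illustrative):

```python
import numpy as np

# The maculae sense the gravitoinertial force f = g - a, so a steady
# forward acceleration produces the same shear along the head's horizontal
# axis as a pitch tilt of angle atan(a / g) would.
G = 9.81  # m/s^2

def equivalent_tilt_deg(forward_accel: float) -> float:
    """Pitch tilt (deg) indistinguishable, to the otoliths alone,
    from a steady forward acceleration (m/s^2)."""
    return np.degrees(np.arctan2(forward_accel, G))

for a in (1.0, 3.0, 9.81):
    print(f"forward acceleration {a:>5.2f} m/s^2 ~ tilt of {equivalent_tilt_deg(a):.1f} deg")
```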


2009 · Vol 79 (5) · pp. 271-280
Author(s): Y. Morito, H.C. Tanabe, T. Kochiyama, N. Sadato

2019
Author(s): E. Mika Diamanti, Charu Bai Reddy, Sylvia Schröder, Tomaso Muzzu, Kenneth D. Harris, ...

During navigation, the visual responses of neurons in primary visual cortex (V1) are modulated by the animal’s spatial position. Here we show that this spatial modulation is similarly present across multiple higher visual areas but largely absent in the main thalamic pathway into V1. Similar to hippocampus, spatial modulation in visual cortex strengthens with experience and requires engagement in active behavior. Active navigation in a familiar environment, therefore, determines spatial modulation of visual signals starting in the cortex.


2018
Author(s): Juan Chen, Irene Sperandio, Molly J. Henry, Melvyn A. Goodale

Our visual system affords a distance-invariant percept of object size by integrating retinal image size with viewing distance (size constancy). Single-unit studies in animals have shown that real changes in distance can modulate the firing rate of neurons in primary visual cortex and even in subcortical structures, raising the intriguing possibility that the integration required for size constancy may occur during initial visual processing in V1 or even earlier. In humans, however, EEG and brain imaging studies have typically manipulated the apparent (not real) distance of stimuli using pictorial illusions, in which the cues to distance are sparse and not congruent. Here, we physically moved the monitor to different distances from the observer, a more ecologically valid paradigm that emulates what happens in everyday life. Using this paradigm in combination with electroencephalography (EEG), we were able for the first time to examine how the computation of size constancy unfolds in real time under real-world viewing conditions. We showed that even when all distance cues were available and congruent, size constancy took about 150 ms to emerge in the activity of visual cortex. The 150-ms interval exceeds the time required for visual signals to reach V1, but is consistent with the time typically associated with later processing within V1 or recurrent processing from higher-level visual areas. This finding therefore provides unequivocal evidence that size constancy does not occur during initial signal processing in V1 or earlier, but requires subsequent processing, just as other feature-binding mechanisms do.
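The integration the authors describe is geometric at its core: recovering physical size requires combining retinal angle with viewing distance, since S = 2 D tan(theta / 2), or approximately S = D * theta for small angles. A minimal sketch of this relation (illustrative only, not the study's analysis code):

```python
import numpy as np

# The same retinal image size corresponds to very different physical sizes
# depending on distance; size constancy amounts to undoing this scaling.
def physical_size(retinal_angle_deg: float, distance_m: float) -> float:
    """Object size (m) recovered from retinal angle and viewing distance."""
    return 2 * distance_m * np.tan(np.radians(retinal_angle_deg) / 2)

for d in (0.5, 1.0, 2.0):
    print(f"a 5-deg retinal image at {d:.1f} m -> {physical_size(5.0, d):.3f} m object")
```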


2014 · Vol 26 (3) · pp. 490-500
Author(s): Yaara Erez, Galit Yovel

Target objects required for goal-directed behavior are typically embedded among multiple irrelevant objects that may interfere with their encoding. Most neuroimaging studies of high-level visual cortex have examined the representation of isolated objects, and therefore little is known about how surrounding objects influence the neural representation of target objects. To investigate the effect of different types of clutter on the distributed responses to target objects in high-level visual areas, we used fMRI and manipulated the type of clutter. Specifically, target objects (i.e., a face and a house) were presented either in isolation, in the presence of homogeneous clutter (identical objects from another category; a “pop-out” display), or in the presence of heterogeneous clutter (different objects), while participants performed a target identification task. Using multivoxel pattern analysis (MVPA), we found that in the posterior fusiform object area heterogeneous but not homogeneous clutter interfered with decoding of the target objects. Furthermore, multivoxel patterns evoked by isolated objects were more similar to multivoxel patterns evoked by homogeneous than by heterogeneous clutter in the lateral occipital and posterior fusiform object areas. Interestingly, there was no effect of clutter on the neural representation of the target objects in their category-selective areas, such as the fusiform face area and the parahippocampal place area. Our findings show that the variation among irrelevant surrounding objects influences the neural representation of target objects in the general object area, but not in object category-selective cortex, where the representation of target objects is invariant to their surroundings.
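A sketch of the pattern-similarity logic used above: correlate the multivoxel pattern evoked by an isolated target with the patterns evoked under each clutter type. The data below are synthetic placeholders chosen only to show the computation, not values from the study:

```python
import numpy as np

# Placeholder multivoxel patterns (one response per voxel). The clutter
# conditions are simulated as increasingly distorted copies of the
# isolated-target pattern, purely for illustration.
rng = np.random.default_rng(1)
n_voxels = 500
isolated = rng.standard_normal(n_voxels)
homogeneous = isolated + 0.5 * rng.standard_normal(n_voxels)    # mild distortion
heterogeneous = isolated + 1.5 * rng.standard_normal(n_voxels)  # strong distortion

def pattern_similarity(a, b):
    """Pearson correlation between two multivoxel patterns."""
    return np.corrcoef(a, b)[0, 1]

print(f"isolated vs homogeneous clutter:   r = {pattern_similarity(isolated, homogeneous):.2f}")
print(f"isolated vs heterogeneous clutter: r = {pattern_similarity(isolated, heterogeneous):.2f}")
```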


2009 · Vol 101 (4) · pp. 1867-1875
Author(s): David B. T. McMahon, Carl R. Olson

How does the brain represent a red circle? One possibility is that there is a specialized and possibly time-consuming process whereby the attributes of shape and color, carried by separate populations of neurons in low-order visual cortex, are bound together into a unitary neural representation. Another possibility is that neurons in high-order visual cortex are selective, by virtue of their bottom-up input from low-order visual areas, for particular conjunctions of shape and color. A third possibility is that they simply sum shape and color signals linearly. We tested these ideas by measuring the responses of inferotemporal cortex neurons to sets of stimuli in which two attributes—shape and color—varied independently. We find that a few neurons exhibit conjunction selectivity but that in most neurons the influences of shape and color sum linearly. Contrary to the idea of conjunction coding, few neurons respond selectively to a particular combination of shape and color. Contrary to the idea that binding requires time, conjunction signals, when present, occur as early as feature signals. We argue that neither conjunction selectivity nor a specialized feature binding process is necessary for the effective representation of shape–color combinations.
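The contrast between linear summation and conjunction coding can be cast as a two-way decomposition of a shape x color response matrix: an additive code is fully captured by the row and column main effects, while conjunction selectivity shows up as leftover interaction variance. A hypothetical sketch (not the authors' analysis):

```python
import numpy as np

# Simulate a neuron whose firing is the sum of a shape-driven and a
# color-driven component (an additive code), plus noise.
rng = np.random.default_rng(2)
n_shapes, n_colors = 4, 4
s = rng.uniform(0, 10, n_shapes)   # shape-driven firing components
c = rng.uniform(0, 10, n_colors)   # color-driven firing components
R = s[:, None] + c[None, :] + 0.1 * rng.standard_normal((n_shapes, n_colors))

# Remove the row and column main effects; the residual variance indexes
# shape-color interaction, i.e., conjunction selectivity.
grand = R.mean()
additive_fit = R.mean(axis=1, keepdims=True) + R.mean(axis=0, keepdims=True) - grand
residual = R - additive_fit
frac_interaction = residual.var() / R.var()
print(f"fraction of response variance in the interaction: {frac_interaction:.3f}")
```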


2005 · Vol 17 (11) · pp. 1714-1727
Author(s): Jillian H. Fecteau, Douglas P. Munoz

How do visual signals evolve from early to late stages in sensory processing? We explored this question by examining two neural correlates of spatial attention. The capture of attention and inhibition of return refer to the initial advantage and subsequent disadvantage to respond to a visual target that follows an irrelevant visual cue at the same location. In the intermediate layers of the superior colliculus (a region that receives input from late stages in visual processing), both behavioral effects link to changes in the neural representation of the target: strong target-related activity correlates with the capture of attention and weak target-related activity correlates with inhibition of return. Contrasting these correlates with those obtained in the superficial layers (a functionally distinct region that receives input from early stages in visual processing), we show that the target-related activity of neurons in the intermediate layers was the best predictor of orienting behavior, although dramatic changes in the target-related response were observed in both subregions. We describe the important consequences of these findings for understanding the neural basis of the capture of attention and inhibition of return and interpreting changes in neural activity more generally.

