A visual encoding model links magnetoencephalography signals to neural synchrony in human cortex

Author(s):  
Eline R. Kupers ◽  
Noah C. Benson ◽  
Jonathan Winawer

Abstract Synchronization of neuronal responses over large distances is hypothesized to be important for many cortical functions. However, no straightforward methods exist to estimate synchrony non-invasively in the living human brain. MEG and EEG measure the whole brain, but the sensors pool over large, overlapping cortical regions, obscuring the underlying neural synchrony. Here, we developed a model from stimulus to cortex to MEG sensors to disentangle neural synchrony from spatial pooling of the instrument. We find that synchrony across cortex has a surprisingly large and systematic effect on predicted MEG spatial topography. We then conducted visual MEG experiments and separated responses into stimulus-locked and broadband components. The stimulus-locked topography was similar to model predictions assuming synchronous neural sources, whereas the broadband topography was similar to model predictions assuming asynchronous sources. We infer that visual stimulation elicits two distinct types of neural responses, one highly synchronous and one largely asynchronous across cortex.
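The pooling argument in this abstract can be illustrated numerically: a sensor that sums many same-frequency sources grows linearly with source count when phases are aligned, but only as the square root of the count when phases are random. A minimal sketch, not the authors' forward model; source count, frequency, and sampling below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources = 100                       # cortical sources pooled by one sensor
t = np.linspace(0, 1, 1000, endpoint=False)
f = 12.0                              # illustrative stimulus-locked frequency (Hz)

# Synchronous sources: identical phase across cortex
sync = np.sum([np.sin(2 * np.pi * f * t) for _ in range(n_sources)], axis=0)

# Asynchronous sources: an independent random phase per source
asyn = np.sum([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
               for _ in range(n_sources)], axis=0)

def amp(x):
    """RMS amplitude at the simulated sensor."""
    return float(np.sqrt(np.mean(x ** 2)))

print(amp(sync))   # scales ~ n_sources
print(amp(asyn))   # scales ~ sqrt(n_sources)
```

Under this toy model, the synchronous topography is roughly ten times stronger at the sensor than the asynchronous one for a hundred pooled sources, which is the intuition behind the distinct stimulus-locked and broadband topographies.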

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Michael Pereira ◽  
Pierre Megevand ◽  
Mi Xue Tan ◽  
Wenwen Chang ◽  
Shuo Wang ◽  
...  

Abstract A fundamental scientific question concerns the neural basis of perceptual consciousness and perceptual monitoring resulting from the processing of sensory events. Although recent studies identified neurons reflecting stimulus visibility, their functional role remains unknown. Here, we show that perceptual consciousness and monitoring involve evidence accumulation. We recorded single-neuron activity in a participant with a microelectrode in the posterior parietal cortex, while they detected vibrotactile stimuli around detection threshold and provided confidence estimates. We find that detected stimuli elicited neuronal responses resembling evidence accumulation during decision-making, irrespective of motor confounds or task demands. We generalize these findings in healthy volunteers using electroencephalography. Behavioral and neural responses are reproduced with a computational model considering a stimulus as detected if accumulated evidence reaches a bound, and confidence as the distance between maximal evidence and that bound. We conclude that gradual changes in neuronal dynamics during evidence accumulation relate to perceptual consciousness and perceptual monitoring in humans.
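The computational model described at the end of this abstract can be sketched in a few lines: evidence accumulates noisily, a stimulus counts as detected if the accumulated evidence reaches a bound, and confidence is the distance between the maximal evidence and that bound. The accumulator below is a hypothetical Gaussian random walk with illustrative parameters, not the fitted model from the paper:

```python
import numpy as np

def detect(drift, bound=1.0, noise=0.1, n_steps=200, seed=0):
    """Toy evidence-accumulation model.

    detected:   accumulated evidence reached the bound at some point
    confidence: distance between maximal evidence and the bound
                (large for clear detections and clear misses alike)
    All parameter values are illustrative, not fitted.
    """
    rng = np.random.default_rng(seed)
    evidence = np.cumsum(drift + noise * rng.standard_normal(n_steps))
    peak = float(evidence.max())
    detected = peak >= bound
    confidence = abs(peak - bound)
    return detected, confidence

hit, conf_hit = detect(drift=0.05)    # strong stimulus drift: detected
miss, conf_miss = detect(drift=0.0)   # no stimulus drift: likely missed
```

One design point worth noting: because confidence is an absolute distance from the bound, the same mechanism yields graded confidence for both "seen" and "unseen" reports, matching the abstract's claim that detection and monitoring share a common accumulation process.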


2008 ◽  
Vol 99 (1) ◽  
pp. 200-207 ◽  
Author(s):  
Olivia Andrea Masseck ◽  
Klaus-Peter Hoffmann

Single-unit recordings were performed from a retinorecipient pretectal area (corpus geniculatum laterale, Cgl) in Scyliorhinus canicula. The function and homology of this nucleus has not been clarified so far. During visual stimulation with a random dot pattern, 45 (35%) neurons were found to be direction selective, 10 (8%) were axis selective (best neuronal responses to rotations in both directions around one particular stimulus axis), and 75 (58%) were movement sensitive. Direction-selective responses were found to the following stimulus directions (in retinal coordinates): temporonasal and nasotemporal horizontal movements, up- and downward vertical movements, and oblique movements. All directions of motion were represented equally by our sample of pretectal neurons. Additionally, we tested the responses of 58 of the 130 neurons to random dot patterns rotating around the semicircular canal or body axes to investigate whether direction-selective visual information is mapped into vestibular coordinates in pretectal neurons of this chondrichthyan species. Again all rotational directions were represented equally, which argues against a direct transformation from a retinal to a vestibular reference frame. If a complete transformation had occurred, responses to rotational axes corresponding to the axes of the semicircular canals should have been overrepresented. In conclusion, the recorded direction-selective neurons in the Cgl are plausible detectors for retinal slip created by body rotations in all directions.


2010 ◽  
Vol 103 (3) ◽  
pp. 1467-1477 ◽  
Author(s):  
John C. Taylor ◽  
Alison J. Wiggett ◽  
Paul E. Downing

People are easily able to perceive the human body across different viewpoints, but the neural mechanisms underpinning this ability are currently unclear. In three experiments, we used functional MRI (fMRI) adaptation to study the view invariance of representations in two cortical regions that have previously been shown to be sensitive to visual depictions of the human body—the extrastriate and fusiform body areas (EBA and FBA). The BOLD response to sequentially presented pairs of bodies was treated as an index of view invariance. Specifically, we compared trials in which the bodies in each image held identical poses (seen from different views) to trials containing different poses. EBA and FBA adapted to identical views of the same pose, and both showed a progressive rebound from adaptation as a function of the angular difference between views, up to ∼30°. However, these adaptation effects were eliminated when the body stimuli were followed by a pattern mask. Delaying the mask onset increased the response (but not the adaptation effect) in EBA, leaving FBA unaffected. We interpret these masking effects as evidence that view-dependent fMRI adaptation is driven by later waves of neuronal responses in the regions of interest. Finally, in a whole brain analysis, we identified an anterior region of the left inferior temporal sulcus (l-aITS) that responded linearly to stimulus rotation, but showed no selectivity for bodies. Our results show that body-selective cortical areas exhibit a degree of view invariance similar to that of other object-selective areas—such as the lateral occipitotemporal area (LO) and posterior fusiform gyrus (pFs).


Author(s):  
Pedro Tomás ◽  
Aleksandar Ilic ◽  
Leonel Sousa

When analyzing the neuronal code, neuroscientists usually perform extra-cellular recordings of neuronal responses (spikes). Since the size of the microelectrodes used to perform these recordings is much larger than the size of the cells, responses from multiple neurons are recorded by each microelectrode. Thus, the obtained response must be classified and evaluated to identify how many neurons were recorded and to assess which neuron generated each spike. A platform for the mass-classification of neuronal responses is proposed in this chapter, employing data-parallelism for speeding up the classification of neuronal responses. The platform is built in a modular way, supporting multiple web-interfaces, different back-end environments for parallel computing, or different algorithms for spike classification. Experimental results on the proposed platform show that even for an unbalanced data set of neuronal responses the execution time was reduced by about 45%. For balanced data sets, the platform may achieve a reduction in execution time equal to the inverse of the number of back-end computational elements.
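The decomposition described in this chapter is embarrassingly parallel: each chunk of recorded spikes can be classified independently and the labels concatenated afterwards, which is why balanced data sets approach a speedup equal to the number of back-end elements. A toy sketch of that structure, where the nearest-template classifier and the thread pool are hypothetical stand-ins for the platform's pluggable algorithms and back-ends:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def classify_chunk(spikes, templates):
    """Assign each spike waveform to its nearest template (toy spike sorter)."""
    # Pairwise distances: (n_spikes, n_templates)
    d = np.linalg.norm(spikes[:, None, :] - templates[None, :, :], axis=2)
    return d.argmin(axis=1)

def classify_parallel(spikes, templates, n_workers=4):
    """Data-parallel classification: split the spikes into chunks, classify
    each chunk independently, then concatenate the labels."""
    chunks = np.array_split(spikes, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda c: classify_chunk(c, templates), chunks)
    return np.concatenate(list(parts))

# Simulated recording: two mock units plus noise (values are illustrative)
rng = np.random.default_rng(1)
templates = np.stack([np.sin(np.linspace(0, np.pi, 32)),
                      -np.sin(np.linspace(0, np.pi, 32))])
true_units = rng.integers(0, 2, 1000)
spikes = templates[true_units] + 0.1 * rng.standard_normal((1000, 32))

labels = classify_parallel(spikes, templates)
```

Because the chunks share no state, the same decomposition ports directly to process pools or distributed back-ends; an unbalanced split (unequal chunk sizes) leaves some workers idle, which is consistent with the smaller ~45% reduction reported for unbalanced data sets.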


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Olivia Rose ◽  
James Johnson ◽  
Binxu Wang ◽  
Carlos R. Ponce

Abstract Early theories of efficient coding suggested the visual system could compress the world by learning to represent features where information was concentrated, such as contours. This view was validated by the discovery that neurons in posterior visual cortex respond to edges and curvature. Still, it remains unclear what other information-rich features are encoded by neurons in more anterior cortical regions (e.g., inferotemporal cortex). Here, we use a generative deep neural network to synthesize images guided by neuronal responses from across the visuocortical hierarchy, using floating microelectrode arrays in areas V1, V4 and inferotemporal cortex of two macaque monkeys. We hypothesize these images (“prototypes”) represent such predicted information-rich features. Prototypes vary across areas, show moderate complexity, and resemble salient visual attributes and semantic content of natural images, as indicated by the animals’ gaze behavior. This suggests the code for object recognition represents compressed features of behavioral relevance, an underexplored aspect of efficient coding.
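The closed-loop synthesis procedure can be caricatured as an evolutionary search over a generator's latent space, keeping the codes whose images drive the recorded neuron best. Everything below is a hypothetical toy, not the paper's network or optimizer: the linear "generator", the cosine-similarity "neuron", and the elitist search are all stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins: a fixed linear "generator" mapping a 16-d latent code to a
# 256-pixel "image", and a "neuron" whose response peaks for one hidden
# preferred image (both invented for illustration).
G = rng.standard_normal((256, 16))
preferred = G @ rng.standard_normal(16)

def neuron(img):
    """Toy firing rate: cosine similarity to the neuron's preferred image."""
    return float(img @ preferred /
                 (np.linalg.norm(img) * np.linalg.norm(preferred) + 1e-9))

# Elitist evolutionary search over latent codes: mutate the top responders,
# keep the elites unchanged so the best response never degrades.
pop = rng.standard_normal((64, 16))
for _ in range(50):
    rates = np.array([neuron(G @ z) for z in pop])
    elite = pop[np.argsort(rates)[-8:]]
    children = elite[rng.integers(0, 8, 56)] + 0.2 * rng.standard_normal((56, 16))
    pop = np.concatenate([children, elite])

best = pop[int(np.argmax([neuron(G @ z) for z in pop]))]
prototype = G @ best    # the synthesized "prototype" for this neuron
```

The key property this sketch shares with the experiment is that the optimizer needs only scalar responses, never gradients through the brain, which is what makes neuron-guided image synthesis feasible with real recordings.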


2018 ◽  
Author(s):  
Mona Rosenke ◽  
Nicolas Davidenko ◽  
Kalanit Grill-Spector ◽  
Kevin S. Weiner

Abstract We have an amazing ability to categorize objects in the world around us. Nevertheless, how cortical regions in human ventral temporal cortex (VTC), which is critical for categorization, support this behavioral ability, is largely unknown. Here, we examined the relationship between neural responses and behavioral performance during the categorization of morphed silhouettes of faces and hands, which are animate categories processed in cortically adjacent regions in VTC. Our results reveal that the combination of neural responses from VTC face- and body-selective regions more accurately explains behavioral categorization than neural responses from either region alone. Furthermore, we built a model that predicts a person’s behavioral performance using estimated parameters of brain-behavioral relationships from a different group of people. We further show that this brain-behavioral model generalizes to adjacent face- and body-selective regions in lateral occipito-temporal cortex. Thus, while face- and body-selective regions are located within functionally-distinct domain-specific networks, cortically adjacent regions from both networks likely integrate neural responses to resolve competing and perceptually ambiguous information from both categories.


2020 ◽  
Vol 30 (9) ◽  
pp. 4882-4898 ◽  
Author(s):  
Mona Rosenke ◽  
Nicolas Davidenko ◽  
Kalanit Grill-Spector ◽  
Kevin S Weiner

Abstract We have an amazing ability to categorize objects in the world around us. Nevertheless, how cortical regions in human ventral temporal cortex (VTC), which is critical for categorization, support this behavioral ability, is largely unknown. Here, we examined the relationship between neural responses and behavioral performance during the categorization of morphed silhouettes of faces and hands, which are animate categories processed in cortically adjacent regions in VTC. Our results reveal that the combination of neural responses from VTC face- and body-selective regions more accurately explains behavioral categorization than neural responses from either region alone. Furthermore, we built a model that predicts a person’s behavioral performance using estimated parameters of brain–behavior relationships from a different group of people. Moreover, we show that this brain–behavior model generalizes to adjacent face- and body-selective regions in lateral occipitotemporal cortex. Thus, while face- and body-selective regions are located within functionally distinct domain-specific networks, cortically adjacent regions from both networks likely integrate neural responses to resolve competing and perceptually ambiguous information from both categories.
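The brain–behavior model described here amounts to fitting a weighted combination of regional responses to behavior in one group and applying the estimated weights to a different group. A schematic sketch on simulated data, where the weights, noise level, and group sizes are invented and the paper instead uses measured fMRI responses:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_group(n):
    """Invented data: behavior depends on both a face- and a body-selective
    region, so neither region alone fully explains it."""
    face = rng.standard_normal(n)
    body = rng.standard_normal(n)
    behavior = 0.6 * face + 0.4 * body + 0.05 * rng.standard_normal(n)
    return np.column_stack([face, body]), behavior

X_train, y_train = simulate_group(40)   # weights estimated in one group...
X_test, y_test = simulate_group(40)     # ...and applied to a different group

# Least-squares fit of behavior on the two regional responses (plus intercept)
w, *_ = np.linalg.lstsq(np.c_[X_train, np.ones(40)], y_train, rcond=None)
pred = np.c_[X_test, np.ones(40)] @ w

combined_r2 = np.corrcoef(pred, y_test)[0, 1] ** 2        # both regions
face_r2 = np.corrcoef(X_test[:, 0], y_test)[0, 1] ** 2    # one region alone
```

In this toy setup the combined model explains more out-of-group variance than the face region alone, mirroring the abstract's claim that the combination of face- and body-selective responses best explains categorization behavior.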


2011 ◽  
Vol 105 (4) ◽  
pp. 1825-1834 ◽  
Author(s):  
Pei Liang ◽  
Roland Kern ◽  
Rafael Kurtz ◽  
Martin Egelhaaf

It is still unclear how sensory systems efficiently encode signals with statistics as experienced by animals in the real world and what role adaptation plays during normal behavior. Therefore, we studied the performance of visual motion-sensitive neurons of blowflies, the horizontal system neurons, with optic flow that was reconstructed from the head trajectories of semi-free-flying flies. To test how motion adaptation is affected by optic flow dynamics, we manipulated the seminatural optic flow by targeted modifications of the flight trajectories and assessed to what extent neuronal responses to an object located close to the flight trajectory depend on adaptation dynamics. For all types of adapting optic flow, object-induced response increments were stronger in the adapted compared with the nonadapted state. This effect was produced not only by adaptation with optic flow characterized by the typical alternation between translational and rotational segments, but also by optic flow that lacked these distinguishing features, and even by pure rotation at a constant angular velocity. The enhancement of object-induced response increments had a direction-selective component, because preferred-direction rotation and natural optic flow were more efficient adaptors than null-direction rotation. These results indicate that the natural dynamics of optic flow are not a basic requirement to adapt neurons in a specific, presumably functionally beneficial way. Our findings are discussed in the light of adaptation mechanisms proposed on the basis of experiments previously done with conventional experimenter-defined stimuli.

