Adaptation decorrelates shape representations

2018 ◽  
Author(s):  
Marcelo G. Mattar ◽  
Maria Olkkonen ◽  
Russell A. Epstein ◽  
Geoffrey K. Aguirre

Abstract: Perception and neural responses are modulated by sensory history. Visual adaptation, an example of such an effect, has been hypothesized to improve stimulus discrimination by decorrelating responses across a set of neural units. Although this is a central theoretical proposal, behavioral and neural evidence for it is limited and inconclusive. Here, we use a parametric 3D shape-space to test whether adaptation decorrelates shape representations in humans. In a behavioral experiment with 20 subjects, we find that adaptation to a shape class improves discrimination of subsequently presented stimuli with similar features. In a BOLD fMRI experiment with 10 subjects, we observe that adaptation to a shape class decorrelates the multivariate representations of subsequently presented stimuli with similar features in object-selective cortex. These results support the long-standing proposal that adaptation improves perceptual discrimination and decorrelates neural representations, offering insights into potential underlying mechanisms.
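The decorrelation measure at the heart of this claim lends itself to a simple illustration. Below is a minimal Python sketch, assuming simulated multivoxel patterns for a set of similar shapes before and after adaptation; the array names, sizes, and weighting factors are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pairwise_correlation(patterns):
    """Mean off-diagonal correlation between stimulus patterns
    (rows = stimuli, columns = voxels or units)."""
    corr = np.corrcoef(patterns)
    return corr[~np.eye(corr.shape[0], dtype=bool)].mean()

# Simulated data: 8 similar shapes sharing a common component across 200 "voxels".
n_stimuli, n_voxels = 8, 200
shared = rng.normal(size=n_voxels)                # component common to the shape class
unique = rng.normal(size=(n_stimuli, n_voxels))   # stimulus-specific components

patterns_pre = 1.0 * shared + 0.5 * unique        # before adaptation: shared part dominates
patterns_post = 0.3 * shared + 0.5 * unique       # after adaptation: shared part attenuated

print("mean pattern correlation, pre-adaptation:  %.2f" % mean_pairwise_correlation(patterns_pre))
print("mean pattern correlation, post-adaptation: %.2f" % mean_pairwise_correlation(patterns_post))
```

In this toy version, attenuating the component shared by the shape class is what lowers the mean pattern correlation, which is the qualitative signature the abstract describes for object-selective cortex.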

2019 ◽  
Author(s):  
Mareike Bayer ◽  
Oksana Berhe ◽  
Isabel Dziobek ◽  
Tom Johnstone

Abstract: The faces of those most personally relevant to us are our primary source of social information, making their timely perception a priority. Recent research indicates that the gender, age, and identity of faces can be decoded from EEG/MEG data within 100 ms. Yet the time course and neural circuitry involved in representing the personal relevance of faces remain unknown. We applied simultaneous EEG-fMRI to examine neural responses to emotional faces of female participants’ romantic partners, friends, and a stranger. Combining EEG and fMRI in cross-modal representational similarity analyses, we provide evidence that representations of personal relevance start prior to structural encoding, at 100 ms, not only in visual cortex but also in prefrontal and midline regions involved in value representation and in the monitoring and recall of self-relevant information. Representations related to romantic love emerged after 300 ms. Our results add to an emerging body of research suggesting that models of face perception need to be updated to account for the rapid detection of personal relevance in cortical circuitry beyond the core face-processing network.
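Cross-modal representational similarity analysis of the general kind described here compares condition-by-condition dissimilarity structure across recording modalities. The sketch below, with simulated EEG and fMRI patterns and invented dimensions, shows the basic recipe; it is not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def rdm(patterns):
    """Representational dissimilarity: vectorized correlation distances
    between all pairs of condition patterns (rows = conditions)."""
    return pdist(patterns, metric="correlation")

# Simulated data: 4 face conditions (e.g., partner, friend, two strangers).
n_cond, n_sensors, n_voxels, n_times = 4, 64, 300, 120
eeg = rng.normal(size=(n_times, n_cond, n_sensors))   # time-resolved EEG patterns
fmri_roi = rng.normal(size=(n_cond, n_voxels))        # fMRI patterns from one ROI

# Cross-modal RSA: correlate the ROI's RDM with the EEG RDM at every time point,
# yielding a time course of EEG-fMRI representational similarity.
fmri_rdm = rdm(fmri_roi)
similarity = np.array([spearmanr(rdm(eeg[t]), fmri_rdm)[0] for t in range(n_times)])
print("peak EEG-fMRI representational similarity at time sample", int(similarity.argmax()))
```

Running the same comparison for each ROI produces the kind of region-by-time map from which onset latencies such as "prior to 100 ms" can be read off.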


2020 ◽  
Vol 10 (10) ◽  
pp. 667
Author(s):  
Mei-Yin Lin ◽  
Chia-Hsiung Cheng

Response inhibition is frequently examined using visual go/no-go tasks. Recently, the auditory go/no-go paradigm has also been applied to several clinical and aging populations. However, age-related changes in the neural underpinnings of auditory go/no-go tasks are yet to be elucidated. We used magnetoencephalography combined with distributed source imaging methods to examine age-associated changes in neural responses to auditory no-go stimuli. Additionally, we compared the performance of high- and low-performing older adults to explore differences in cortical activation. Behavioral performance in terms of response inhibition was similar in the younger and older adult groups. Relative to the younger adults, the older adults exhibited reduced cortical activation in the superior and middle temporal gyri. However, we did not find any significant differences in cortical activation between the high- and low-performing older adults. Our results therefore support the hypothesis that inhibition is reduced during aging. The variation in cognitive performance among older adults confirms the need for further study of the underlying mechanisms of inhibition.


2018 ◽  
Vol 30 (10) ◽  
pp. 1422-1432 ◽  
Author(s):  
Anne G. E. Collins

Learning to make rewarding choices in response to stimuli depends on a slow but steady process, reinforcement learning, and a fast and flexible but capacity-limited process, working memory. Using both systems in parallel, with their contributions weighted based on performance, should allow us to leverage the best of each system: rapid early learning, supplemented by long-term robust acquisition. However, this assumes that using one process does not interfere with the other. We use computational modeling to investigate the interactions between the two processes in a behavioral experiment and show that working memory interferes with reinforcement learning. Previous research showed that neural representations of reward prediction errors, a key marker of reinforcement learning, were blunted when working memory was used for learning. We thus predicted that arbitrating in favor of working memory to learn faster in simple problems would weaken the reinforcement learning process. We tested this by measuring performance in a delayed testing phase in which the use of working memory was impossible, so that participants' choices depended on reinforcement learning. Counterintuitively, but confirming our predictions, we observed that the associations learned most easily were retained less well than those learned more slowly: using working memory to learn quickly came at the cost of long-term retention. Computational modeling confirmed that this could only be accounted for by working memory interference with reinforcement learning computations. These results further our understanding of how multiple systems contribute in parallel to human learning and may have important applications for education and computational psychiatry.
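The interference mechanism can be sketched with a toy RL-plus-working-memory learner. This is a rough illustration in the spirit of such mixture models, not the paper's fitted model; the parameter names, values, and update rules below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_block(n_stimuli, n_actions=3, n_reps=12, alpha=0.1,
              wm_weight=0.8, wm_capacity=3):
    """Toy RL+WM learner. Working memory (WM) stores the last correct action;
    the RL prediction error is computed against the *mixture* expectation, so
    heavy reliance on WM blunts incremental RL updates."""
    q = np.ones((n_stimuli, n_actions)) / n_actions    # incremental RL values
    wm = np.ones((n_stimuli, n_actions)) / n_actions   # one-shot WM associations
    correct = rng.integers(n_actions, size=n_stimuli)

    # WM contributes more when the stimulus set fits within its capacity.
    w = wm_weight * min(1.0, wm_capacity / n_stimuli)

    for stim in np.tile(np.arange(n_stimuli), n_reps):
        policy = w * wm[stim] + (1 - w) * q[stim]
        action = rng.choice(n_actions, p=policy / policy.sum())
        reward = 1.0 if action == correct[stim] else 0.0

        # Interference: the WM term shrinks the RL prediction error.
        expectation = w * wm[stim, action] + (1 - w) * q[stim, action]
        q[stim, action] += alpha * (reward - expectation)

        if reward:                                     # WM stores the correct mapping
            wm[stim] = 1e-3
            wm[stim, action] = 1.0

    # RL value of the correct action: what would support delayed retention.
    return q[np.arange(n_stimuli), correct].mean()

print("RL value, set size 2 (within WM capacity): %.2f" % run_block(n_stimuli=2))
print("RL value, set size 6 (beyond WM capacity): %.2f" % run_block(n_stimuli=6))
```

The easy, WM-friendly condition ends the block with weaker RL values, which is the pattern that would produce worse delayed retention for the associations learned fastest.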


2020 ◽  
Vol 117 (23) ◽  
pp. 13145-13150 ◽  
Author(s):  
Insub Kim ◽  
Sang Wook Hong ◽  
Steven K. Shevell ◽  
Won Mok Shim

Color is a perceptual construct that arises from neural processing in hierarchically organized cortical visual areas. Previous research, however, often failed to distinguish neural responses driven by stimulus chromaticity from those driven by perceptual color experience. An unsolved question is whether the neural responses at each stage of cortical processing represent the physical stimulus or the color we see. The present study dissociated the perceptual domain of color experience from the physical domain of chromatic stimulation at each stage of cortical processing by using a switch rivalry paradigm that caused the color percept to vary over time without changing the retinal stimulation. Using functional MRI (fMRI) and a model-based encoding approach, we found that neural representations in higher visual areas, such as V4 and VO1, corresponded to the perceived color, whereas responses in early visual areas V1 and V2 were modulated by the chromatic light stimulus rather than by color perception. Our findings support a transition in the ascending human ventral visual pathway, from a representation of the chromatic stimulus at the retina in early visual areas to responses that correspond to perceptually experienced colors in higher visual areas.
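A model-based encoding analysis of this general kind typically fits hue-tuned channels to voxel responses and then inverts the fit for held-out data. The sketch below uses simulated responses and generic channel shapes; it is not the authors' specific model, and the channel count, tuning exponent, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def channel_basis(hues_deg, n_channels=6):
    """Hue-tuned channels: half-wave-rectified cosines raised to a power,
    evenly spaced around the hue circle."""
    centers = np.arange(n_channels) * 360.0 / n_channels
    d = np.deg2rad(np.asarray(hues_deg)[:, None] - centers[None, :])
    return np.maximum(np.cos(d), 0.0) ** 5            # shape (n_stimuli, n_channels)

# Simulated training data: voxel responses are a linear mix of channel responses.
n_voxels, n_train = 50, 120
train_hues = rng.uniform(0, 360, size=n_train)
true_w = rng.normal(size=(6, n_voxels))               # channel-to-voxel weights
train_bold = channel_basis(train_hues) @ true_w + 0.5 * rng.normal(size=(n_train, n_voxels))

# Fit the forward (encoding) model, then invert it for a held-out stimulus.
w_hat, *_ = np.linalg.lstsq(channel_basis(train_hues), train_bold, rcond=None)

test_bold = channel_basis([240.0]) @ true_w + 0.5 * rng.normal(size=(1, n_voxels))
chan_resp, *_ = np.linalg.lstsq(w_hat.T, test_bold.T, rcond=None)   # inverted encoding step

centers = np.arange(6) * 60.0
print("reconstructed channel profile peaks at %d deg (stimulus hue: 240 deg)"
      % int(centers[int(chan_resp.ravel().argmax())]))
```

Applying such a reconstruction during rivalry, separately per visual area, is how one can ask whether a region's channel profile tracks the presented chromaticity or the reported percept.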


2018 ◽  
Vol 120 (1) ◽  
pp. 171-185 ◽  
Author(s):  
Seth Haney ◽  
Debajit Saha ◽  
Baranidharan Raman ◽  
Maxim Bazhenov

Adaptation of neural responses is ubiquitous in sensory systems and can potentially facilitate many important computational functions. Here we examined this issue with a well-constrained computational model of the early olfactory circuits. In the insect olfactory system, the responses of olfactory receptor neurons (ORNs) on the antennae adapt over time. We found that strong adaptation of sensory input is important for rapidly detecting a newly introduced stimulus encountered in the presence of other background cues and for faithfully representing its identity. However, when the overlapping odorants were chemically similar, we found that adaptation could alter the representation of these odorants to emphasize only their distinguishing features. This work demonstrates novel roles for peripheral neurons during olfactory processing in complex environments.

NEW & NOTEWORTHY: Olfactory systems face the problem of distinguishing salient information from a complex olfactory environment. The neural representations of specific odor sources should be consistent regardless of the background. How are olfactory representations kept robust to varying environmental interference? We show that in locusts the extraction of salient information begins in the periphery. Olfactory receptor neurons adapt in response to odorants. Adaptation can provide a computational mechanism that allows novel odorant components to be highlighted during complex stimuli.
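The computational intuition, that adaptation subtracts the ongoing background so a newly arriving odor dominates the population pattern, can be sketched with a toy firing-rate model. This is not the authors' biophysical model; the subtractive adaptation rule, time constant, and receptor sensitivities below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy ORN population: each receptor has a sensitivity to odor A (background)
# and odor B (a newly introduced odor). Values are arbitrary.
n_orn = 100
sens_A = rng.uniform(0, 1, n_orn)
sens_B = rng.uniform(0, 1, n_orn)

dt, tau_adapt = 0.01, 1.0            # seconds; adaptation time constant (assumed)
t = np.arange(0.0, 6.0, dt)
odor_A = (t >= 0.5).astype(float)    # background odor turns on early
odor_B = (t >= 4.0).astype(float)    # novel odor arrives later, on top of A

a = np.zeros(n_orn)                  # per-neuron adaptation state
resp = np.zeros((t.size, n_orn))
for i in range(t.size):
    drive = sens_A * odor_A[i] + sens_B * odor_B[i]
    resp[i] = np.maximum(drive - a, 0.0)      # adapted (subtractive) response
    a += dt / tau_adapt * (drive - a)         # adaptation tracks the recent drive

corr = lambda x, y: np.corrcoef(x, y)[0, 1]

# Shortly after odor B arrives, the adapted population pattern resembles B alone,
# even though the A+B mixture is what is physically present.
probe = np.searchsorted(t, 4.2)
print("pattern correlation with pure B:      %.2f" % corr(resp[probe], sens_B))
print("pattern correlation with A+B mixture: %.2f" % corr(resp[probe], sens_A + sens_B))
```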


2017 ◽  
Vol 28 (7) ◽  
pp. 2351-2364 ◽  
Author(s):  
Hyeong-Dong Park ◽  
Fosco Bernasconi ◽  
Roy Salomon ◽  
Catherine Tallon-Baudry ◽  
Laurent Spinelli ◽  
...  

2019 ◽  
Author(s):  
Joseph B. Wekselblatt ◽  
Cristopher M. Niell

Abstract: Learning can cause significant changes in neural responses to relevant stimuli, in addition to the modulation due to task engagement. However, it is not known how different functional types of excitatory neurons contribute to these changes. To address this gap, we performed two-photon calcium imaging of excitatory neurons in layer 2/3 of mouse primary visual cortex before and after learning of a visual discrimination task. We found that excitatory neurons show striking diversity in the temporal dynamics of their responses to visual stimuli during the behavior, and on this basis we classified them into transient, sustained, and suppressed groups. Notably, these functionally defined cell classes exhibited different visual stimulus selectivity and modulation by locomotion, and were differentially affected by training condition. In particular, we observed a decrease in the number of transient neurons responsive during behavior after learning, while both transient and sustained cells showed an increase in modulation due to task engagement after learning. The identification of functional diversity within the excitatory population, with distinct changes during learning and task engagement, provides insight into the cortical pathways that allow context-dependent neural representations.
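The grouping into transient, sustained, and suppressed cells amounts to sorting neurons by the temporal profile of their trial-averaged responses. Here is a minimal sketch of such a sorting rule applied to simulated dF/F traces; the window lengths and thresholds are placeholders, not the criteria used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def classify_response(trace, t, stim_on=0.0, stim_off=2.0):
    """Crude sorting of a trial-averaged dF/F trace into 'transient',
    'sustained', or 'suppressed' from early vs. late response amplitude.
    Thresholds are illustrative only."""
    base = trace[t < stim_on].mean()
    early = trace[(t >= stim_on) & (t < stim_on + 0.5)].mean() - base
    late = trace[(t >= stim_off - 0.5) & (t < stim_off)].mean() - base
    if early < -0.05 and late < -0.05:
        return "suppressed"
    if early > 0.05 and late < 0.5 * early:
        return "transient"
    if late > 0.05:
        return "sustained"
    return "unclassified"

# Toy traces sampled at 10 Hz, from 1 s before to 2 s after stimulus onset.
t = np.arange(-1.0, 2.0, 0.1)
noise = lambda: 0.02 * rng.normal(size=t.size)
examples = {
    "cell A": np.where((t >= 0) & (t < 0.5), 0.4, 0.0) + noise(),  # brief response
    "cell B": np.where(t >= 0, 0.3, 0.0) + noise(),                # persistent response
    "cell C": np.where(t >= 0, -0.2, 0.0) + noise(),               # dips below baseline
}
for name, trace in examples.items():
    print(name, "->", classify_response(trace, t))
```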


2021 ◽  
Vol 15 ◽  
Author(s):  
Chi Zhang ◽  
Xiao-Han Duan ◽  
Lin-Yuan Wang ◽  
Yong-Li Li ◽  
Bin Yan ◽  
...  

Despite the remarkable similarities between convolutional neural networks (CNNs) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that considerable differences remain between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans can successfully recognize AI images as the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs recognize AN images as the same categories as their corresponding regular images but classify AI images into wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain and compare it to the activity of artificial neurons in a prototypical CNN, AlexNet. In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with the network outputs in all intermediate processing layers, providing no neural foundation for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images successfully generalize to the neural responses to AI images but not to AN images. These remarkable differences between the human brain and AlexNet in the representation-perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
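The voxel-encoding generalization test can be schematized as: fit a linear encoding model on responses to regular images, then ask how well it predicts responses to AI and AN images. The sketch below uses simulated features and responses rather than AlexNet activations or real fMRI data; the ridge penalty, dimensions, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated ground truth: voxel responses depend linearly on "perceptual" features.
n_feat, n_voxels, noise_sd = 20, 80, 0.3
true_w = rng.normal(size=(n_feat, n_voxels))

def simulate_bold(features):
    return features @ true_w + noise_sd * rng.normal(size=(features.shape[0], n_voxels))

# Regular and AI images are perceived as their categories, so responses follow the
# features; AN images are perceived as noise, so responses ignore the features the
# network assigns them.
train_feats = rng.normal(size=(200, n_feat))
train_bold = simulate_bold(train_feats)
ai_feats = rng.normal(size=(40, n_feat))
ai_bold = simulate_bold(ai_feats)
an_feats = rng.normal(size=(40, n_feat))
an_bold = noise_sd * rng.normal(size=(40, n_voxels))

def ridge_fit(X, Y, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def prediction_corr(X, Y, W):
    pred = X @ W
    return np.mean([np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(Y.shape[1])])

W = ridge_fit(train_feats, train_bold)
print("generalization to AI images: mean r = %.2f" % prediction_corr(ai_feats, ai_bold, W))
print("generalization to AN images: mean r = %.2f" % prediction_corr(an_feats, an_bold, W))
```

The asymmetry in prediction accuracy mirrors the abstract's claim: a model trained on regular images transfers to AI-evoked responses but fails on AN-evoked responses.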


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Minkang Kim ◽  
Jean Decety ◽  
Ling Wu ◽  
Soohyun Baek ◽  
Derek Sankey

Abstract: One means by which humans maintain social cooperation is through intervention in third-party transgressions, a behaviour observable from the early years of development. While it has been argued that pre-school-age children’s intervention behaviour is driven by normative understandings, there is scepticism regarding this claim. There is also little consensus regarding the underlying mechanisms and motives that initially drive intervention behaviours in pre-school children. To elucidate the neural computations of moral norm violation associated with young children’s intervention into third-party transgressions, forty-seven preschoolers (average age 53.92 months) participated in a study comprising electroencephalographic (EEG) measurements, a live interaction experiment, and a parent survey about moral values. This study provides data indicating that early implicit evaluations, rather than late deliberative processes, are implicated in a child’s spontaneous intervention into third-party harm. Moreover, our findings suggest that parents’ values about justice influence their children’s early neural responses to third-party harm and their overt costly intervention behaviour.


2006 ◽  
Vol 95 (2) ◽  
pp. 1244-1262 ◽  
Author(s):  
Christopher DiMattina ◽  
Xiaoqin Wang

Most studies investigating neural representations of species-specific vocalizations in non-human primates and other species have involved studying neural responses to vocalization tokens. One limitation of such approaches is the difficulty in determining which acoustical features of vocalizations evoke neural responses. Traditionally used filtering techniques are often inadequate for manipulating features of complex vocalizations. Furthermore, the use of vocalization tokens cannot fully account for intrinsic stochastic variations of vocalizations that are crucial in understanding the neural codes for categorizing and discriminating vocalizations differing along multiple feature dimensions. In this work, we have taken a rigorous and novel approach to the study of species-specific vocalization processing by creating parametric “virtual vocalization” models of the major call types produced by the common marmoset (Callithrix jacchus). The main findings are as follows. 1) Acoustical parameters were measured from a database of the four major call types of the common marmoset. This database was obtained from eight different individuals, and for each individual we typically obtained hundreds of samples of each major call type. 2) These feature measurements were employed to parameterize models defining representative virtual vocalizations of each call type for each of the eight animals, as well as an overall species-representative virtual vocalization averaged across individuals for each call type. 3) Using the same feature-measurement procedure that was applied to the vocalization samples, we measured acoustical features of the virtual vocalizations, including features not explicitly modeled, and found the virtual vocalizations to be statistically representative of the callers and call types. 4) The accuracy of the virtual vocalizations was further confirmed by comparing neural responses to real and synthetic virtual vocalizations recorded from awake marmoset auditory cortex. We found strong agreement between the responses to token vocalizations and their synthetic counterparts. 5) We demonstrated how these virtual vocalization stimuli can be employed to precisely and quantitatively define the notion of vocalization “selectivity” by using stimuli with parameter values both within and outside the naturally occurring ranges. We also showed the potential of the virtual vocalization stimuli for studying issues related to vocalization categorization by morphing between different call types and individual callers.
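The core idea, calls specified entirely by a handful of measured acoustic parameters that can then be varied or morphed between call types and callers, can be sketched with a toy synthesizer. The parameter set and values below are invented for illustration and do not reproduce the marmoset call models described in the paper.

```python
import numpy as np

FS = 44_100  # audio sample rate in Hz

def synth_call(duration_s, f_start_hz, f_end_hz, am_rate_hz=0.0, am_depth=0.0):
    """Synthesize a toy frequency-modulated call from a small parameter set,
    standing in for a parametric 'virtual vocalization' model."""
    t = np.arange(int(duration_s * FS)) / FS
    freq = np.linspace(f_start_hz, f_end_hz, t.size)         # linear frequency contour
    phase = 2 * np.pi * np.cumsum(freq) / FS
    envelope = 1.0 - am_depth * 0.5 * (1 + np.cos(2 * np.pi * am_rate_hz * t))
    return envelope * np.sin(phase)

def morph_params(p1, p2, alpha):
    """Interpolate between two calls' parameter sets (alpha from 0 to 1)."""
    return {k: (1 - alpha) * p1[k] + alpha * p2[k] for k in p1}

# Two call types defined purely by parameters (values are made up for illustration).
call_long_tonal = dict(duration_s=1.2, f_start_hz=7000.0, f_end_hz=8000.0,
                       am_rate_hz=0.0, am_depth=0.0)
call_short_am   = dict(duration_s=0.5, f_start_hz=6000.0, f_end_hz=6500.0,
                       am_rate_hz=30.0, am_depth=0.8)

for alpha in (0.0, 0.5, 1.0):
    y = synth_call(**morph_params(call_long_tonal, call_short_am, alpha))
    print("morph alpha=%.1f -> %d samples, peak amplitude %.2f" % (alpha, y.size, np.abs(y).max()))
```

Because every stimulus is generated from explicit parameters, selectivity can be probed by stepping parameters inside or outside their natural ranges, and category boundaries by sweeping the morph parameter.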

