Audiovisual adaptation is expressed in spatial and decisional codes

2021 ◽  
Author(s):  
Máté Aller ◽  
Agoston Mihalik ◽  
Uta Noppeney

Abstract
The brain adapts dynamically to the changing sensory statistics of its environment. The neural circuitries and representations that support this cross-sensory plasticity remain unknown. We combined psychophysics and model-based representational fMRI and EEG to characterize how the adult human brain adapts to misaligned audiovisual signals. We show that audiovisual adaptation moulds regional BOLD-responses and fine-scale activity patterns in a widespread network from Heschl’s gyrus to dorsolateral prefrontal cortices. Crucially, audiovisual recalibration relies on distinct spatial and decisional codes that are expressed with opposite gradients and timecourses across the auditory processing hierarchy. Early activity patterns in auditory cortices encode sounds in a continuous space that flexibly adapts to misaligned visual inputs. Later activity patterns in frontoparietal cortices code decisional uncertainty consistent with these spatial transformations. Our findings demonstrate that regions throughout the auditory processing hierarchy multiplex spatial and decisional codes to adapt flexibly to the changing sensory statistics in the environment.
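The spatial side of such audiovisual recalibration can be caricatured in a few lines. A minimal sketch, assuming a simple fixed-rate update rule (the function, learning rate, and offsets below are illustrative, not the authors' fitted model): after repeated exposure to a consistent audiovisual spatial offset, the auditory spatial estimate drifts toward the visual location.

```python
# Hypothetical fixed-rate recalibration model (illustration only): each
# audiovisual exposure shifts the auditory spatial estimate toward the
# visual location by a fraction (learning rate) of the remaining discrepancy.

def recalibrate(auditory_deg, visual_deg, shift=0.0, rate=0.1):
    """Update the cumulative auditory shift after one AV exposure trial."""
    discrepancy = visual_deg - (auditory_deg + shift)
    return shift + rate * discrepancy

shift = 0.0
for _ in range(50):  # repeated exposure to a +10 deg audiovisual offset
    shift = recalibrate(auditory_deg=0.0, visual_deg=10.0, shift=shift)

print(round(shift, 2))  # the auditory estimate converges toward +10 deg
```

With a fixed rate the shift approaches the imposed offset exponentially; more realistic models would let the rate itself depend on sensory reliability.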

2012 ◽  
Vol 25 (0) ◽  
pp. 184-185
Author(s):  
Sonja Schall ◽  
Stefan J. Kiebel ◽  
Burkhard Maess ◽  
Katharina von Kriegstein

There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality — a hypothesis that is contrary to current textbook knowledge. Here we test this hypothesis by harnessing a temporally highly resolved method, i.e., magnetoencephalography (MEG), to identify the temporal response profile of visual regions in response to auditory-only voice recognition. Participants briefly learned a set of voices audio–visually, i.e., together with a talking face in an ecologically valid situation, as in daily life. Once subjects were able to recognize these now familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers’ voices: (i) activation in the visual face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset, and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are instrumental even during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.


2018 ◽  
Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Carolyn McGettigan ◽  
Lúcia Garrido

Abstract
Face-selective and voice-selective brain regions have been shown to represent face-identity and voice-identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice, and that invariantly represent naturalistically varying face and voice tokens of the same identity. According to two distinct models, such representations could exist either in multimodal brain regions (Campanella and Belin, 2007) or in face-selective brain regions via direct coupling between face- and voice-selective regions (von Kriegstein et al., 2005). To test the predictions of these two models, we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in multimodal, face-selective and voice-selective brain regions. We used representational similarity analysis (RSA) to compare the representational geometries of face- and voice-elicited person-identities, and to investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We found no matching geometries for faces and voices in any brain regions. However, we showed crossmodal generalisation of the pattern discriminants in the multimodal right posterior superior temporal sulcus (rpSTS), suggesting a modality-general person-identity representation in this region. Importantly, the rpSTS showed invariant representations of face- and voice-identities, in that discriminants were trained and tested on independent face videos (different viewpoint, lighting, background) and voice recordings (different vocalizations). Our findings support the Multimodal Processing Model, which proposes that face and voice information is integrated in multimodal brain regions.
Significance statement
It is possible to identify a familiar person either by looking at their face or by listening to their voice. Using fMRI and representational similarity analysis (RSA), we show that the right posterior superior temporal sulcus (rpSTS), a multimodal brain region that responds to both faces and voices, contains representations that can distinguish between familiar people independently of whether we are looking at their face or listening to their voice. Crucially, these representations generalised across different face videos and voice recordings. Our findings suggest that identity information from visual and auditory processing systems is combined and integrated in the multimodal rpSTS region.
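The crossmodal-generalisation logic can be illustrated with simulated data. A minimal sketch, assuming toy voxel patterns that share an identity component across modalities (all data and parameters are invented for illustration; the study used fMRI patterns and RSA-based pattern discriminants):

```python
import numpy as np

# Hypothetical illustration of crossmodal pattern generalisation: train a
# simple difference-of-means discriminant on face-elicited voxel patterns
# for two identities, then test it on voice-elicited patterns. Above-chance
# accuracy implies a modality-general identity code, as reported for rpSTS.

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40
identity_code = {  # identity component shared by both modalities
    "A": rng.normal(0, 1, n_voxels),
    "B": rng.normal(0, 1, n_voxels),
}

def patterns(identity, noise=1.0):
    """Simulate single-trial voxel patterns for one identity."""
    return identity_code[identity] + rng.normal(0, noise, (n_trials, n_voxels))

face_A, face_B = patterns("A"), patterns("B")
voice_A, voice_B = patterns("A"), patterns("B")

w = face_A.mean(0) - face_B.mean(0)                    # train on faces
threshold = w @ (face_A.mean(0) + face_B.mean(0)) / 2
correct = (voice_A @ w > threshold).sum() + (voice_B @ w <= threshold).sum()
accuracy = correct / (2 * n_trials)                    # test on voices
print(accuracy)  # well above the 0.5 chance level
```

If the two modalities shared no identity component, this train-on-faces, test-on-voices accuracy would hover at chance.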


2015 ◽  
Vol 21 (3) ◽  
pp. 203-213 ◽  
Author(s):  
Jonathan C. Ipser ◽  
Gregory G. Brown ◽  
Amanda Bischoff-Grethe ◽  
Colm G. Connolly ◽  
Ronald J. Ellis ◽  
...  

Abstract
HIV-associated cognitive impairments are prevalent, and are consistent with injury to both frontal cortical and subcortical regions of the brain. The current study aimed to assess the association of HIV infection with functional connections within the frontostriatal network, circuitry hypothesized to be highly vulnerable to HIV infection. Fifteen HIV-positive and 15 demographically matched control participants underwent 6 min of resting-state functional magnetic resonance imaging (RS-fMRI). Multivariate group comparisons of age-adjusted estimates of connectivity within the frontostriatal network were derived from BOLD data for dorsolateral prefrontal cortex (DLPFC), dorsal caudate and mediodorsal thalamic regions of interest. Whole-brain comparisons of group differences in frontostriatal connectivity were conducted, as were pairwise tests of connectivity associations with measures of global cognitive functioning and clinical and immunological characteristics (nadir and current CD4 count, duration of HIV infection, plasma HIV RNA). HIV-associated reductions in connectivity were observed between the DLPFC and the dorsal caudate, particularly in younger participants (<50 years, N=9). Seropositive participants also demonstrated reductions in dorsal caudate connectivity to frontal and parietal brain regions previously demonstrated to be functionally connected to the DLPFC. Cognitive impairment, but none of the assessed clinical/immunological variables, was also associated with reduced frontostriatal connectivity. In conclusion, our data indicate that HIV is associated with attenuated intrinsic frontostriatal connectivity. Intrinsic connectivity of this network may therefore serve as a marker of the deleterious effects of HIV infection on the brain, possibly via HIV-associated dopaminergic abnormalities. These findings warrant independent replication in larger studies. (JINS, 2015, 21, 1–11)
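At its core, resting-state connectivity between two regions is the correlation of their BOLD time series. A minimal sketch with simulated signals (the ROI names and numbers are illustrative; the study itself used multivariate, age-adjusted estimates rather than a single Pearson correlation):

```python
import numpy as np

# Toy sketch of seed-based functional connectivity: connectivity between two
# regions is estimated as the Pearson correlation of their resting-state
# BOLD time series, e.g. DLPFC and dorsal caudate ROI averages.

rng = np.random.default_rng(1)
n_vols = 180  # ~6 min of resting-state data at TR = 2 s
shared = rng.normal(size=n_vols)  # common fluctuation driving both ROIs

dlpfc = shared + 0.8 * rng.normal(size=n_vols)    # ROI signal + ROI noise
caudate = shared + 0.8 * rng.normal(size=n_vols)

connectivity = np.corrcoef(dlpfc, caudate)[0, 1]
print(round(connectivity, 2))  # positive frontostriatal coupling
```

Attenuated connectivity, as reported for the HIV-positive group, would appear here as a weaker shared component and hence a lower correlation.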


Author(s):  
Josef P. Rauschecker

When one talks about hearing, some may first imagine the auricle (or external ear), which is the only visible part of the auditory system in humans and other mammals. Its shape and size vary among people, but it does not tell us much about a person’s abilities to hear (except perhaps their ability to localize sounds in space, where the shape of the auricle plays a certain role). Most of what is used for hearing is inside the head, particularly in the brain. The inner ear transforms mechanical vibrations into electrical signals; then the auditory nerve sends these signals into the brainstem, where intricate preprocessing occurs. Although auditory brainstem mechanisms are an important part of central auditory processing, it is the processing taking place in the cerebral cortex (with the thalamus as mediator) that enables auditory perception and cognition. Human speech and the appreciation of music can hardly be imagined without a complex cortical network of specialized regions, each contributing different aspects of auditory cognitive abilities. During the evolution of these abilities in higher vertebrates, especially birds and mammals, the cortex played a crucial role, so a great deal of what is referred to as central auditory processing happens there. Whether it is the recognition of one’s mother’s voice, listening to Pavarotti singing or Yo-Yo Ma playing the cello, or hearing or reading Shakespeare’s sonnets, it will evoke electrical vibrations in the auditory cortex, but it does not end there. Large parts of frontal and parietal cortex receive auditory signals originating in auditory cortex, forming processing streams for auditory object recognition and auditory-motor control, before being channeled into other parts of the brain for comprehension and enjoyment.


2017 ◽  
Vol 24 (3) ◽  
pp. 277-293 ◽  
Author(s):  
Selen Atasoy ◽  
Gustavo Deco ◽  
Morten L. Kringelbach ◽  
Joel Pearson

A fundamental characteristic of spontaneous brain activity is coherent oscillations covering a wide range of frequencies. Interestingly, these temporal oscillations are highly correlated among spatially distributed cortical areas forming structured correlation patterns known as the resting state networks, although the brain is never truly at “rest.” Here, we introduce the concept of harmonic brain modes—fundamental building blocks of complex spatiotemporal patterns of neural activity. We define these elementary harmonic brain modes as harmonic modes of structural connectivity; that is, connectome harmonics, yielding fully synchronous neural activity patterns with different frequency oscillations emerging on and constrained by the particular structure of the brain. Hence, this particular definition implicitly links the hitherto poorly understood dimensions of space and time in brain dynamics and its underlying anatomy. Further, we show how harmonic brain modes can explain the relationship between neurophysiological, temporal, and network-level changes in the brain across different mental states (wakefulness, sleep, anesthesia, psychedelic). Notably, when decoded as activation of connectome harmonics, spatial and temporal characteristics of neural activity naturally emerge from the interplay between excitation and inhibition, and this critical relation fits the spatial, temporal, and neurophysiological changes associated with different mental states. Thus, the introduced framework of harmonic brain modes not only establishes a relation between the spatial structure of correlation patterns and temporal oscillations (linking space and time in brain dynamics), but also enables a new dimension of tools for understanding fundamental principles underlying brain dynamics in different states of consciousness.
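Connectome harmonics, in the graph-theoretic sense used here, are the eigenvectors of the Laplacian of the structural connectivity matrix. A minimal sketch on a toy ring-graph "connectome" (the real analysis uses the human structural connectome derived from diffusion imaging):

```python
import numpy as np

# Harmonic brain modes as Laplacian eigenmodes: for a structural
# connectivity (adjacency) matrix A, the graph Laplacian is L = D - A,
# and its eigenvectors, ordered by eigenvalue, give harmonics ranging
# from coarse (low spatial frequency) to fine activity patterns.

n = 20
A = np.zeros((n, n))
for i in range(n):  # toy ring graph: each node connects to its neighbours
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

L = np.diag(A.sum(axis=1)) - A           # graph Laplacian L = D - A
eigenvalues, harmonics = np.linalg.eigh(L)

# The first harmonic is the constant (fully synchronous) mode with
# eigenvalue 0; higher harmonics oscillate at increasing spatial frequency.
print(eigenvalues[:3])
```

On a real connectome, an activity pattern is then "decoded" by projecting it onto these eigenvectors, giving the activation strength of each harmonic.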


1987 ◽  
Vol 58 (3) ◽  
pp. 496-509 ◽  
Author(s):  
A. Lev-Tov ◽  
M. Tal

The structure and activity patterns of the anterior and posterior heads of the guinea pig digastric muscle (DG) were studied in ketamine-anesthetized guinea pigs. Collagen staining of longitudinal and transverse sections of the muscle revealed that the guinea pig DG comprises a unicompartmental anterior head (ADG) and a multicompartmental posterior head (PDG). The two heads are separated by a thin tendinous inscription that, unlike the intermediate tendon of the DG in humans, is not attached to the hyoid bone. The motor nuclei of the guinea pig DG were reconstructed using retrograde labeling with horseradish peroxidase. The motoneurons of the ADG were clustered in a longitudinal column within the trigeminal motor nucleus. The motoneurons of the PDG were segregated into two clusters within the facial motor nucleus. The cross-sectional areas of the ADG and PDG motoneuron somata exhibited unimodal frequency distributions, and the average soma area was larger for ADG than PDG motoneurons. Histochemical characterization of ADG and PDG revealed that the two muscle heads contained the three main histochemical types of muscle fibers identified in limb muscles. The frequency distributions of fiber types in ADG and PDG were not significantly different. Both muscle heads were predominantly fast, with slow oxidative fibers accounting for only 1.1 and 0.3% of the fibers in narrow dorsal regions of ADG and PDG, respectively, and 13.6 and 12.9% in the more ventral regions of ADG and PDG, respectively. Simultaneous recordings of EMGs from the ADG and PDG were carried out during spontaneously occurring rhythmical jaw movements. These recordings revealed a high degree of synchrony between the activities of the two heads, although differences were observed in the onset and duration of the EMG bursts. Activity in the PDG preceded activity in the ADG in most of the rhythmical cycles and persisted longer. 
The differences in latencies of time-locked EMGs evoked in the ADG and PDG by four-pulse cortical stimulation were much smaller than those observed between the activity bursts of the two heads during rhythmical jaw movements. It is suggested that the early activity in the PDG is accounted for by shorter central conduction times in the pathways onto it and/or by higher recruitability of its motor units. The early activity in PDG may serve to optimize the location of ADG on its length-tension curve prior to and during the active state.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Fabian Grabenhorst ◽  
Ken-Ichiro Tsutsui ◽  
Shunsuke Kobayashi ◽  
Wolfram Schultz

Risk derives from the variation of rewards and governs economic decisions, yet how the brain calculates risk from the frequency of experienced events, rather than from explicit risk-descriptive cues, remains unclear. Here, we investigated whether neurons in dorsolateral prefrontal cortex process risk derived from reward experience. Monkeys performed a probabilistic choice task in which the statistical variance of experienced rewards evolved continually. During these choices, prefrontal neurons signaled the reward-variance associated with specific objects (‘object risk’) or actions (‘action risk’). Crucially, risk was not derived from explicit, risk-descriptive cues but calculated internally from the variance of recently experienced rewards. Support-vector-machine decoding demonstrated accurate neuronal risk discrimination. Within trials, neuronal signals transitioned from experienced reward to risk (risk updating) and from risk to upcoming choice (choice computation). Thus, prefrontal neurons encode the statistical variance of recently experienced rewards, complying with formal decision variables of object risk and action risk.
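Risk defined as the variance of recently experienced rewards can be tracked trial by trial. A minimal sketch, assuming a sliding-window estimator (the window length and reward sequence are invented; the study fitted the window to neuronal data rather than fixing it):

```python
from collections import deque

# Hypothetical experience-based risk estimator: risk for an object or
# action is the variance of its recently experienced rewards, recomputed
# over a sliding window after every new reward.

def make_risk_tracker(window=10):
    rewards = deque(maxlen=window)  # keeps only the most recent rewards
    def update(reward):
        rewards.append(reward)
        mean = sum(rewards) / len(rewards)
        return sum((r - mean) ** 2 for r in rewards) / len(rewards)
    return update

update = make_risk_tracker()
risk = 0.0
for r in [1, 1, 1, 1, 5, 1, 5, 1, 5, 1]:  # variable rewards raise risk
    risk = update(r)
print(risk)  # variance of the last 10 rewards
```

A constant reward stream drives this estimate to zero, while alternating large and small rewards drives it up, mirroring the 'risk updating' signal described above.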


2019 ◽  
Author(s):  
S. A. Herff ◽  
C. Herff ◽  
A. J. Milne ◽  
G. D. Johnson ◽  
J. J. Shih ◽  
...  

Abstract
Rhythmic auditory stimuli are known to elicit matching activity patterns in neural populations. Furthermore, recent research has established the particular importance of high-gamma brain activity in auditory processing by showing its involvement in auditory phrase segmentation and envelope-tracking. Here, we use electrocorticographic (ECoG) recordings from eight human listeners to see whether periodicities in high-gamma activity track the periodicities in the envelope of musical rhythms during rhythm perception and imagination. Rhythm imagination was elicited by instructing participants to imagine the rhythm continuing during pauses of several repetitions. To identify electrodes whose periodicities in high-gamma activity track the periodicities in the musical rhythms, we compute the correlation between the autocorrelations (ACC) of both the musical rhythms and the neural signals. A condition in which participants listened to white noise was used to establish a baseline. High-gamma autocorrelations in auditory areas in the superior temporal gyrus and in frontal areas on both hemispheres significantly matched the autocorrelation of the musical rhythms. Overall, numerous significant electrodes are observed on the right hemisphere. Of particular interest is a large cluster of electrodes in the right prefrontal cortex that is active during both rhythm perception and imagination. This indicates conscious processing of the rhythms’ structure as opposed to mere auditory phenomena. The ACC approach clearly highlights that high-gamma activity measured from cortical electrodes tracks both attended and imagined rhythms.
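The ACC idea can be sketched with toy signals. A minimal illustration, assuming a simulated 2 Hz pulse-train rhythm and a simulated "high-gamma envelope" that partly tracks it (all signals and parameters are invented; the study used real ECoG high-gamma power):

```python
import numpy as np

# ACC sketch: compute the autocorrelation of the rhythm's envelope and of a
# neural envelope, then correlate the two autocorrelation functions. A high
# correlation means the neural periodicities track the rhythm's
# periodicities; a white-noise condition provides the baseline.

def autocorr(x, max_lag):
    """Normalized autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full[1:max_lag + 1] / full[0]

rng = np.random.default_rng(2)
t = np.arange(0, 8, 0.01)                                  # 8 s at 100 Hz
rhythm = (np.sin(2 * np.pi * 2 * t) > 0.9).astype(float)   # 2 Hz pulses
neural = rhythm + 0.5 * rng.normal(size=t.size)            # tracking + noise
noise_only = rng.normal(size=t.size)                       # baseline signal

max_lag = 200  # lags up to 2 s
acc_match = np.corrcoef(autocorr(rhythm, max_lag),
                        autocorr(neural, max_lag))[0, 1]
acc_base = np.corrcoef(autocorr(rhythm, max_lag),
                       autocorr(noise_only, max_lag))[0, 1]
print(acc_match > acc_base)  # tracking exceeds the noise baseline
```

Because the autocorrelation discards absolute phase, this measure is sensitive to shared periodicity rather than to exact alignment between rhythm and neural signal.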


2016 ◽  
Vol 113 (52) ◽  
pp. E8492-E8501 ◽  
Author(s):  
Roland G. Benoit ◽  
Daniel J. Davies ◽  
Michael C. Anderson

Imagining future events conveys adaptive benefits, yet recurrent simulations of feared situations may help to maintain anxiety. In two studies, we tested the hypothesis that people can attenuate future fears by suppressing anticipatory simulations of dreaded events. Participants repeatedly imagined upsetting episodes that they feared might happen to them and suppressed imaginings of other such events. Suppressing imagination engaged the right dorsolateral prefrontal cortex, which modulated activation in the hippocampus and in the ventromedial prefrontal cortex (vmPFC). Consistent with the role of the vmPFC in providing access to details that are typical for an event, stronger inhibition of this region was associated with greater forgetting of such details. Suppression further hindered participants’ ability to later freely envision suppressed episodes. Critically, it also reduced feelings of apprehensiveness about the feared scenario, and individuals who were particularly successful at down-regulating fears were also less trait-anxious. Attenuating apprehensiveness by suppressing simulations of feared events may thus be an effective coping strategy, suggesting that a deficiency in this mechanism could contribute to the development of anxiety.


e-Neuroforum ◽  
2018 ◽  
Vol 24 (1) ◽  
pp. A11-A18
Author(s):  
Sabine Windmann ◽  
Grit Hein

Abstract
Altruism is a puzzling phenomenon, especially for biology and economics. Why do individuals give up some of the resources they own to benefit others, thereby reducing their own prospects? The answer to this question can be sought at ultimate or proximate levels of explanation. The social neurosciences attempt to specify the brain mechanisms that drive humans to act altruistically, assuming that overtly identical behaviours can be driven by different motives. Research has shown that activations and functional connectivities of the anterior insula and the temporoparietal junction play specific roles in empathetic versus strategic forms of altruism, whereas the dorsolateral prefrontal cortex, among other regions, is involved in norm-oriented, punitive forms of altruism. Future research could focus on the processing of ambiguity and conflict in the pursuit of altruistic intentions.

