Different classes of audiovisual correspondences are processed at distinct levels of the cortical hierarchy

2012 ◽  
Vol 25 (0) ◽  
pp. 69
Author(s):  
Uta Noppeney ◽  
Ruth Adam ◽  
Sepideh Sadaghiani ◽  
Joost X. Maier ◽  
HweeLing Lee ◽  
...  

The brain should integrate sensory inputs only when they emanate from a common source and segregate those from different sources. Sensory correspondences are important cues informing the brain whether two sensory inputs are generated by a common event and should hence be integrated. Most prominently, sensory inputs should co-occur in time and space. More complex audiovisual stimuli may also be congruent in terms of semantics (e.g., objects and source sounds) or phonology (e.g., spoken and written words, linked via common linguistic labels). Surprisingly, metaphoric relations (e.g., pitch and height) have also been shown to influence audiovisual integration. The neural mechanisms that mediate these metaphoric congruency effects are only poorly understood. They may be mediated via (i) natural multisensory binding, (ii) common linguistic labels or (iii) semantics. In this talk, we will present a series of studies that investigate whether these different types of audiovisual correspondences are processed by distinct neural systems. Further, we investigate how those systems are employed by metaphoric audiovisual correspondences. Our results demonstrate that different classes of audiovisual correspondences influence multisensory integration (MSI) at distinct levels of the cortical hierarchy. Spatiotemporal incongruency is detected as early as the primary cortical level. Natural (e.g., motion direction) and phonological incongruencies influence MSI in areas involved in motion or phonological processing, respectively. Critically, metaphoric interactions emerge in neural systems that are shared with natural and semantic incongruency. This activation pattern may reflect the ambivalent nature of metaphoric audiovisual interactions, which rely on both natural and semantic correspondences.

2021 ◽  
Author(s):  
Fangfang Hong ◽  
Stephanie Badde ◽  
Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying reliability. Visual spatial reliability was smaller than, comparable to, or greater than that of the auditory stimuli. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During audiovisual recalibration, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its final estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, first increased and then decreased, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Author summary
Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent each modality should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this end, we conducted a classical recalibration task in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants’ unimodal localization responses before and after the recalibration task. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study, and this model is able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a cue and its final estimate. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.


2021 ◽  
Vol 17 (11) ◽  
pp. e1008877
Author(s):  
Fangfang Hong ◽  
Stephanie Badde ◽  
Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
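The three candidate models lend themselves to a compact computational sketch. Below is a minimal, hypothetical parameterization in Python; the learning rate `alpha`, the fixed ratio `ratio_a`, and the mixture form of the causal-inference estimate are our assumptions for illustration, not the authors' fitted model:

```python
def reliability_based(mu_a, mu_v, var_a, var_v, alpha=0.1):
    """(a) Reliability-based: each cue shifts toward the other in
    proportion to its relative unreliability (variance)."""
    d = mu_v - mu_a
    w_a = var_a / (var_a + var_v)  # less reliable cue shifts more
    return mu_a + alpha * w_a * d, mu_v - alpha * (1.0 - w_a) * d

def fixed_ratio(mu_a, mu_v, alpha=0.1, ratio_a=0.9):
    """(b) Fixed-ratio: each cue absorbs a fixed fraction of the
    discrepancy, independent of cue reliability."""
    d = mu_v - mu_a
    return mu_a + alpha * ratio_a * d, mu_v - alpha * (1.0 - ratio_a) * d

def causal_inference(mu_a, mu_v, var_a, var_v, p_common, alpha=0.1):
    """(c) Causal-inference: each cue shifts toward its final estimate,
    here a mixture of the fused and unisensory estimates weighted by
    the inferred probability of a common source."""
    prec_a, prec_v = 1.0 / var_a, 1.0 / var_v
    fused = (prec_a * mu_a + prec_v * mu_v) / (prec_a + prec_v)
    est_a = p_common * fused + (1.0 - p_common) * mu_a
    est_v = p_common * fused + (1.0 - p_common) * mu_v
    return mu_a + alpha * (est_a - mu_a), mu_v + alpha * (est_v - mu_v)
```

In the causal-inference rule, recalibration vanishes as `p_common` approaches zero, which is how the model can produce the non-monotonic reliability dependence the reliability-based and fixed-ratio rules cannot.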


2008 ◽  
Vol 105 (46) ◽  
pp. 18053-18057 ◽  
Author(s):  
Katherine M. Nautiyal ◽  
Ana C. Ribeiro ◽  
Donald W. Pfaff ◽  
Rae Silver

Mast cells are resident in the brain and contain numerous mediators, including neurotransmitters, cytokines, and chemokines, that are released in response to a variety of natural and pharmacological triggers. The number of mast cells in the brain fluctuates with stress and various behavioral and endocrine states. These properties suggest that mast cells are poised to influence neural systems underlying behavior. Using genetic and pharmacological loss-of-function models, we performed a behavioral screen for arousal responses including emotionality, locomotor, and sensory components. We found that mast cell-deficient KitW−sh/W−sh (sash−/−) mice had a greater anxiety-like phenotype than WT and heterozygote littermate control animals in the open field arena and elevated plus maze. Second, we show that blockade of brain, but not peripheral, mast cell activation increased anxiety-like behavior. Taken together, the data implicate brain mast cells in the modulation of anxiety-like behavior and provide evidence for the behavioral importance of neuroimmune links.


2017 ◽  
Vol 19 (3) ◽  
pp. 349-377
Author(s):  
Leonardo Niro Nascimento

This article first aims to demonstrate the different ways the work of the English neurologist John Hughlings Jackson influenced Freud. It argues that these can be summarized in six points. It is further argued that the framework proposed by Jackson continued to be pursued by twentieth-century neuroscientists such as Papez, MacLean and Panksepp in terms of tripartite hierarchical evolutionary models. Finally, the account presented here aims to shed light on the analogies encountered by psychodynamically oriented neuroscientists, between contemporary accounts of the anatomy and physiology of the nervous system on the one hand, and Freudian models of the mind on the other. These parallels, I will suggest, are not coincidental. They have a historical underpinning, as both accounts most likely originate from a common source: John Hughlings Jackson's tripartite evolutionary hierarchical view of the brain.


2020 ◽  
Author(s):  
Bahar Tunçgenç ◽  
Carolyn Koch ◽  
Amira Herstic ◽  
Inge-Marie Eigsti ◽  
Stewart Mostofsky

Mimicry facilitates social bonding throughout the lifespan. Mimicry impairments in autism spectrum conditions (ASC) are widely reported, including differentiation of the brain networks associated with its social bonding and learning functions. This study examined associations between volumes of brain regions associated with social bonding versus procedural skill learning, and mimicry of gestures during a naturalistic interaction in ASC and neurotypical (NT) children. Consistent with predictions, results revealed reduced mimicry in ASC relative to the NT children. Mimicry frequency was negatively associated with autism symptom severity. Mimicry was predicted predominantly by the volume of procedural skill learning regions in ASC, and by bonding regions in NT. Further, bonding regions contributed significantly less to mimicry in ASC than in NT, while the contribution of learning regions was not different across groups. These findings suggest that associating mimicry with skill learning, rather than social bonding, may partially explain observed communication difficulties in ASC.


PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
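As a reference point for the modelling, the spatial estimate under Bayesian causal inference can be sketched in the standard model-averaging form. This is a generic illustration, not the authors' fitted model; the zero-mean Gaussian spatial prior, its width `sig_p`, and the prior probability of a common cause `p_c` are assumptions:

```python
from math import exp, sqrt, pi

def bci_auditory_estimate(x_a, x_v, sig_a, sig_v, sig_p=10.0, p_c=0.5):
    """Auditory location estimate under Bayesian causal inference
    (model averaging), assuming a zero-mean Gaussian spatial prior.
    All parameter values here are hypothetical."""
    def normpdf(x, mu, s):
        return exp(-(x - mu) ** 2 / (2 * s * s)) / (s * sqrt(2 * pi))
    va, vv, vp = sig_a ** 2, sig_v ** 2, sig_p ** 2
    # Likelihood of both measurements under a common cause
    # (true source location integrated out analytically).
    var_c = va * vv + va * vp + vv * vp
    like_c = exp(-((x_a - x_v) ** 2 * vp + x_a ** 2 * vv + x_v ** 2 * va)
                 / (2 * var_c)) / (2 * pi * sqrt(var_c))
    # Likelihood under independent causes.
    like_i = normpdf(x_a, 0.0, sqrt(va + vp)) * normpdf(x_v, 0.0, sqrt(vv + vp))
    post_c = p_c * like_c / (p_c * like_c + (1 - p_c) * like_i)
    # Reliability-weighted location estimates under each causal structure.
    s_common = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    s_indep = (x_a / va) / (1 / va + 1 / vp)
    return post_c * s_common + (1 - post_c) * s_indep
```

With small audiovisual disparities the estimate is pulled toward the fused, reliability-weighted location; with large disparities the inferred probability of a common cause collapses and the visual signal is discounted.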


2015 ◽  
Author(s):  
Manivannan Subramaniyan ◽  
Alexander S. Ecker ◽  
Saumil S. Patel ◽  
R. James Cotton ◽  
Matthias Bethge ◽  
...  

When the brain has determined the position of a moving object, due to anatomical and processing delays, the object will have already moved to a new location. Given the statistical regularities present in natural motion, the brain may have acquired compensatory mechanisms to minimize the mismatch between the perceived and the real position of a moving object. A well-known visual illusion — the flash lag effect — points towards such a possibility. Although many psychophysical models have been suggested to explain this illusion, their predictions have not been tested at the neural level, particularly in a species known to perceive the illusion. To this end, we recorded neural responses to flashed and moving bars from primary visual cortex (V1) of awake, fixating macaque monkeys. We found that the response latency to moving bars of varying speed, motion direction and luminance was shorter than that to flashes, in a manner that is consistent with psychophysical results. At the level of V1, our results support the differential latency model, which posits that flashed and moving bars have different latencies. As we found a neural correlate of the illusion in passively fixating monkeys, our results also suggest that judging the instantaneous position of the moving bar at the time of flash — as required by the postdiction/motion-biasing model — may not be necessary for observing a neural correlate of the illusion. Our results also suggest that the brain may have evolved mechanisms to process moving stimuli faster and closer to real time compared with briefly appearing stationary stimuli.

New and Noteworthy
We report several observations in awake macaque V1 that provide support for the differential latency model of the flash lag illusion. We find that the equal latency of flash and moving stimuli assumed by motion integration/postdiction models does not hold in V1.
We show that in macaque V1, motion processing latency depends on stimulus luminance, speed and motion direction in a manner consistent with several psychophysical properties of the flash lag illusion.
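Under the differential latency account, the size of the perceived offset follows directly from the stimulus speed and the latency difference between flashed and moving stimuli. A back-of-the-envelope sketch; the latency values in the test case are hypothetical, not the recorded ones:

```python
def flash_lag_shift(speed_deg_per_s, latency_flash_ms, latency_moving_ms):
    """Differential latency account: the moving bar is processed faster,
    so when the flash is finally perceived, the bar's represented
    position leads by speed times the latency difference."""
    dt_s = (latency_flash_ms - latency_moving_ms) / 1000.0
    return speed_deg_per_s * dt_s  # spatial lead in degrees of visual angle
```

For example, a bar moving at 10 deg/s with a hypothetical 20 ms latency advantage over the flash would appear to lead by 0.2 deg.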


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Adrian Ponce-Alvarez ◽  
Gabriela Mochol ◽  
Ainhoa Hermoso-Mendizabal ◽  
Jaime de la Rocha ◽  
Gustavo Deco

Previous research showed that spontaneous neuronal activity presents sloppiness: the collective behavior is strongly determined by a small number of parameter combinations, defined as ‘stiff’ dimensions, while it is insensitive to many others (‘sloppy’ dimensions). Here, we analyzed neural population activity from the auditory cortex of anesthetized rats while the brain spontaneously transited through different synchronized and desynchronized states and intermittently received sensory inputs. We showed that cortical state transitions were determined by changes in stiff parameters associated with the activity of a core of neurons with low responses to stimuli and high centrality within the observed network. In contrast, stimulus-evoked responses evolved along sloppy dimensions associated with the activity of neurons with low centrality and displaying large ongoing and stimulus-evoked fluctuations without affecting the integrity of the network. Our results shed light on the interplay among stability, flexibility, and responsiveness of neuronal collective dynamics during intrinsic and induced activity.
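Sloppiness is typically diagnosed from the eigenspectrum of a model's Fisher information matrix: eigenvalues span many orders of magnitude, with the few large ("stiff") directions dominating collective behavior and the small ("sloppy") ones barely affecting it. A generic toy illustration with a sum-of-exponentials model, a classic sloppy system; this is a demonstration of the concept, not the authors' analysis:

```python
import numpy as np

# Toy sloppy model: y(t) = sum_i exp(-k_i * t).
t = np.linspace(0.1, 3.0, 50)
rates = np.array([1.0, 2.0, 3.0])  # hypothetical decay-rate parameters
# Jacobian of the model output with respect to each rate parameter.
J = np.stack([-t * np.exp(-k * t) for k in rates], axis=1)
# Fisher information matrix (for Gaussian noise, up to a scale factor).
fim = J.T @ J
evals = np.linalg.eigvalsh(fim)  # returned in ascending order
# Stiff directions: eigenvectors with large eigenvalues; sloppy
# directions: those with small ones. The spectrum spanning orders of
# magnitude is the signature of sloppiness.
```

The same diagnostic applies to the pairwise maximum-entropy models commonly fitted to neural population activity, where stiff parameter combinations pin down the collective state.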


2019 ◽  
Author(s):  
Adrián Ponce-Alvarez ◽  
Gabriela Mochol ◽  
Ainhoa Hermoso-Mendizabal ◽  
Jaime de la Rocha ◽  
Gustavo Deco

Previous research showed that spontaneous neuronal activity presents sloppiness: the collective behavior is strongly determined by a small number of parameter combinations, defined as “stiff” dimensions, while it is insensitive to many others (“sloppy” dimensions). Here, we analyzed neural population activity from the auditory cortex of anesthetized rats while the brain spontaneously transited through different synchronized and desynchronized states and intermittently received sensory inputs. We showed that cortical state transitions were determined by changes in stiff parameters associated with the activity of a core of neurons with low responses to stimuli and high centrality within the observed network. In contrast, stimulus-evoked responses evolved along sloppy dimensions associated with the activity of neurons with low centrality and displaying large ongoing and stimulus-evoked fluctuations without affecting the integrity of the network. Our results shed light on the interplay among stability, flexibility, and responsiveness of neuronal collective dynamics during intrinsic and induced activity.


2019 ◽  
pp. 286-303 ◽  
Author(s):  
Rebecca Alexander ◽  
Justine Megan Gatt

Resilience refers to the process of adaptive recovery following adversity or trauma. It is likely to involve an intertwined series of dynamic interactions among neural, developmental, environmental, genetic, and epigenetic factors over time. Neuroscientific research suggests the potential role of the brain’s threat and reward systems, as well as executive control networks. Developmental research provides insight into how the environment may affect these neural systems across the lifespan towards greater risk of, or resilience to, stress. Genetic work has revealed numerous targets that alter key neurochemical systems in the brain to influence mental health. Current challenges include ambiguities in the definition and measurement of resilience and a simplified focus on resilience as the absence of psychopathology, irrespective of levels of positive mental functioning. Greater emphasis on understanding the protective aspects of resilience and related well-being outcomes is important to delineate the unique neurobiological factors that underpin this process, so that effective interventions can be developed to assist vulnerable populations and promote resilience.

