Perceptual Inference
Recently Published Documents

TOTAL DOCUMENTS: 123 (five years: 56)
H-INDEX: 24 (five years: 4)

2022 · Vol 28 · pp. 100229
Author(s): Natsuki Ueda, Kanji Tanaka, Kazushi Maruo, Neil Roach, Tomiki Sumiyoshi, et al.

PLoS Biology · 2021 · Vol 19 (11) · pp. e3001465
Author(s): Ambra Ferrari, Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
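
The model the abstract invokes can be written compactly. Below is a minimal sketch of standard Bayesian causal inference with model averaging (the Körding et al., 2007 formulation), not the authors' exact fMRI model; all parameter names and values are illustrative.

```python
import numpy as np

def bayesian_causal_inference(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Auditory location estimate from noisy auditory (x_a) and visual (x_v)
    samples via Bayesian causal inference with model averaging.
    sigma_a, sigma_v: sensory noise SDs; sigma_p: SD of a zero-mean spatial
    prior; p_common: prior probability that both signals share one source."""
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2

    # Likelihood of the signal pair under a common source (C = 1):
    # x_a and x_v are correlated through the shared source location.
    var_c1 = va * vv + va * vp + vv * vp
    like_c1 = (np.exp(-0.5 * ((x_a - x_v) ** 2 * vp + x_a ** 2 * vv
                              + x_v ** 2 * va) / var_c1)
               / (2 * np.pi * np.sqrt(var_c1)))

    # Likelihood under independent sources (C = 2): the signals factorize.
    like_c2 = (np.exp(-0.5 * (x_a ** 2 / (va + vp) + x_v ** 2 / (vv + vp)))
               / (2 * np.pi * np.sqrt((va + vp) * (vv + vp))))

    # Posterior probability that there is a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted location estimates under each causal structure.
    s_common = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)  # fuse cues
    s_indep = (x_a / va) / (1 / va + 1 / vp)                       # audition alone

    # Model averaging: weight the estimates by the posterior over causes.
    return post_c1 * s_common + (1 - post_c1) * s_indep

# Small disparity -> fusion dominates; large disparity -> segregation.
print(bayesian_causal_inference(2.0, 3.0, 2.0, 1.0, 10.0, 0.5))
print(bayesian_causal_inference(2.0, 12.0, 2.0, 1.0, 10.0, 0.5))
```

In this scheme, prestimulus attention to vision corresponds to shrinking sigma_v, which shifts weight toward the visual signal in both the fused estimate and the causal posterior.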


2021 · Vol 20 (1) · pp. 86-110
Author(s): Mircea Valeriu Deaca

Abstract: In the framework of predictive coding, as explained by Giovanni Pezzulo in his article "Why do you fear the bogeyman? An embodied predictive coding model of perceptual inference" (2014), humans construct instances of emotion through a double arrow of explanation of stimuli. Top-down cognitive models explain the emotional value of stimuli in a predictive fashion; at the same time, feelings and emotions depend on the perception of internal changes in the body. When confronted with uncertain auditory and visual information, the multimodal internal state assigns more weight to interoceptive information, such as visceral and autonomic states like hunger or thirst (motivational conditions), than to the auditory and visual input. In short, an emotional mood can constrain the construction of a particular instance of emotion. This observation suggests that the dynamics of the generative processes of Bayesian inference contain a bidirectional link between perceptual and cognitive inference on the one hand and feelings and emotions on the other. In other words, "subjective feeling states and emotions influence perceptual and cognitive inference, which in turn produce new subjective feeling states and emotions" as a self-fulfilling prophecy (Pezzulo 2014, 908). This article focuses on the short introductory scene of Steven Spielberg's Jaws (1975), arguing that the emotions of fear and sadness emerge from the circular causal coupling between cinematic bottom-up mood cues and top-down cognitive explanations.
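
The weighting the abstract appeals to is the standard precision (inverse-variance) weighting of predictive coding; a toy sketch, with all values illustrative:

```python
def precision_weighted_estimate(mu_intero, pi_intero, mu_sensory, pi_sensory):
    """Posterior mean when an interoceptive prior and exteroceptive evidence
    are combined, each weighted by its precision (inverse variance)."""
    return (pi_intero * mu_intero + pi_sensory * mu_sensory) / (pi_intero + pi_sensory)

# Uncertain audiovisual input (low pi_sensory): the inferred emotional state
# is dominated by the interoceptive "mood" prior, as in Pezzulo's bogeyman case.
print(precision_weighted_estimate(mu_intero=1.0, pi_intero=4.0,
                                  mu_sensory=0.0, pi_sensory=0.5))  # ~0.89
```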


2021 · Vol 12
Author(s): Lucas Bohlen, Robert Shaw, Francesco Cerritelli, Jorge E. Esteves

Globally, mental and musculoskeletal disorders present with high prevalence, disease burden, and comorbidity. In order to improve the quality of care for patients with persistent physical and comorbid mental health conditions, person-centered care approaches addressing psychosocial factors are currently advocated. Central to successful person-centered care is a multidisciplinary collaboration between mental health and musculoskeletal specialists, underpinned by a robust therapeutic alliance. Such a collaborative approach might be found in osteopathy, which is typically utilized to treat patients with musculoskeletal disorders but may arguably also benefit mental health outcomes. However, research and practice exploring the reputed effect of osteopathy on patients with mental health problems lack a robust framework. In this hypothesis and theory article, we build upon research from embodied cognition, predictive coding, interoception, and osteopathy to propose an embodied, predictive, and interoceptive framework that underpins osteopathic person-centered care for individuals with persistent physical and comorbid mental health problems. Based on the premise that, for example, chronic pain and comorbid depression are underpinned by overly precise predictions or imprecise sensory information, we hypothesize that osteopathic treatment may generate strong interoceptive prediction errors that update the generative model underpinning the experience of pain and depression. Thus, physical and mental symptoms may be reduced through active and perceptual inference. We discuss how these theoretical perspectives can inform future research into osteopathy and mental health, with the aim of reducing the burden of comorbid psychological factors in patients with persistent physical symptoms and supporting person-centered multidisciplinary care in mental health.
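
The hypothesized update can be sketched as a precision-weighted prediction-error step. This is a toy illustration of the general scheme, not the authors' model; every parameter below is an assumption:

```python
def update_belief(mu, observation, pi_prior, pi_sensory):
    """One precision-weighted prediction-error step: the belief mu moves
    toward the observation in proportion to the relative precision of
    the sensory channel (a Kalman-style gain)."""
    gain = pi_sensory / (pi_sensory + pi_prior)
    return mu + gain * (observation - mu)

# With an overly precise prior (pi_prior >> pi_sensory), benign bodily
# signals barely revise a belief in high pain intensity...
mu = 1.0
for _ in range(5):
    mu = update_belief(mu, observation=0.2, pi_prior=10.0, pi_sensory=0.5)
print(round(mu, 3))   # ~0.83: belief stays near the prior

# ...whereas a strong, precise interoceptive prediction error (the
# hypothesized effect of treatment) revises the generative model quickly.
mu = 1.0
for _ in range(5):
    mu = update_belief(mu, observation=0.2, pi_prior=10.0, pi_sensory=20.0)
print(round(mu, 3))   # ~0.2: belief converges on the new evidence
```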


2021
Author(s): Enrico Fucci, Arnaud Poublan-couzardot, Oussama Abdoun, Antoine Lutz

The auditory mismatch negativity (MMN) is a well-characterized event-related potential component that has gained recent attention in theoretical models describing the impact of various styles of mindfulness meditation on attentional processes and perceptual inference. Previous findings highlighted a differential modulation of the MMN amplitude by meditation states and degrees of expertise. In the present study, we attempted to replicate results from the recent literature with a data sample that allowed for increased statistical power compared to previous experiments. Relying on traditional frequentist analysis, we found no effects of meditation states or expertise on the auditory MMN amplitude, failing to replicate our previous work (Fucci et al., 2018). Using a Bayesian approach, we found strong evidence against an interaction effect on the MMN amplitude between expertise groups and meditation states, and only moderate evidence in favour of a weak effect of expertise during focused attention practice. On the other hand, we replicated previous evidence of increased alpha oscillatory power during meditation practices compared to a control state. We discuss our null findings in relation to factors that could undermine the replicability of previous research on this subject, namely low statistical power, the use of flexible analysis methods, and a possible publication bias leading to a misrepresentation of the available evidence.
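
The evidential reading of the null result rests on Bayes factors rather than p-values. A minimal sketch using the pingouin package (the t-value and sample sizes below are made up for illustration):

```python
import pingouin as pg

# JZS Bayes factor for a two-sample t-test (default Cauchy prior, r = 0.707).
# BF10 < 1/3 is conventionally read as moderate evidence *for* the null,
# which is a stronger claim than a mere failure to reject it.
bf10 = float(pg.bayesfactor_ttest(t=0.8, nx=28, ny=26))
print(f"BF10 = {bf10:.3f}")
```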


2021
Author(s): Veith Andreas Weilnhammer, Heiner Stuke, Anna-Lena Eckert, Kai Standvoss, Philipp Sterzer

Perception cycles through periods of enhanced and reduced sensitivity to external information. Here, we asked whether such infra-slow oscillations arise as a noise-related epiphenomenon of limited processing capacity or, alternatively, represent a structured mechanism of perceptual inference. Using two large-scale datasets, we found that humans and mice waver between alternating intervals of externally and internally oriented modes of sensory analysis. During external mode, perception was more sensitive to external sensory information, whereas internal mode was characterized by enhanced biases toward perceptual history. Computational modeling indicated that dynamic changes in mode are governed by two interlinked factors: (i) the integration of successive stimuli over time and (ii) infra-slow anti-phase oscillations in the perceptual impact of external sensory information versus internal predictions provided by perceptual history. Between-mode fluctuations may benefit perception by enabling the generation of stable representations of the environment despite an ongoing stream of noisy sensory inputs.
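
One way to make the two-factor account concrete is a toy simulation in which the weight given to external evidence oscillates slowly against the weight given to perceptual history. The functional form and parameters below are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_modes(n_trials=1000, period=40, amp=0.4):
    """Toy simulation: the weight on external evidence versus perceptual
    history oscillates slowly, producing alternating 'external'
    (stimulus-driven) and 'internal' (history-driven) modes."""
    stimulus = rng.choice([-1.0, 1.0], size=n_trials)     # true category
    evidence = stimulus + rng.normal(0.0, 1.0, n_trials)  # noisy samples
    history = 0.0                                         # leaky percept trace
    choices = np.empty(n_trials)
    for t in range(n_trials):
        w_ext = 0.5 + amp * np.sin(2 * np.pi * t / period)  # infra-slow weight
        decision = w_ext * evidence[t] + (1.0 - w_ext) * history
        choices[t] = 1.0 if decision >= 0 else -1.0
        history = 0.8 * history + 0.2 * choices[t]
    return stimulus, choices

stim, resp = simulate_modes()
# Sensitivity to the stimulus tracks the oscillating external weight:
# accuracy is higher in external mode, history biases dominate in internal mode.
print("overall accuracy:", np.mean(stim == resp))
```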


2021 · Vol 11 (1)
Author(s): Zsófia Pálffy, Kinga Farkas, Gábor Csukly, Szabolcs Kéri, Bertalan Polner

Abstract: It is a widely held assumption that the brain performs perceptual inference by combining sensory information with prior expectations, weighted by their uncertainty. A distinction can be made between higher- and lower-level priors, which can be manipulated with associative learning and sensory priming, respectively. Here, we simultaneously investigated priming and the differential effect of auditory versus visual associative cues on visual perception, and we also examined the reliability of individual differences. Healthy individuals (N = 29) performed a perceptual inference task twice with a one-week delay. They reported the perceived direction of motion of dot pairs, which were preceded by a probabilistic visuo-acoustic cue. In 30% of the trials, motion direction was ambiguous, and in half of these trials, the auditory and the visual cue predicted opposing directions. Cue-stimulus contingency could change every 40 trials. On ambiguous trials where the visual and the auditory cue predicted conflicting directions of motion, participants made more decisions consistent with the prediction of the acoustic cue. Increased predictive processing under stimulus uncertainty was indicated by slower responses to ambiguous (vs. non-ambiguous) stimuli. Furthermore, priming effects were observed: perception of ambiguous stimuli was influenced by perceptual decisions on the preceding ambiguous and unambiguous trials. Critically, the behavioural effects showed substantial inter-individual variability with high test–retest reliability (intraclass correlation coefficient (ICC) > 0.78). Overall, higher-level priors based on auditory (vs. visual) information had a greater influence on visual perception, and lower-level priors were also at work. Importantly, we observed large and stable individual differences in various aspects of task performance. Computational modelling combined with neuroimaging could allow testing hypotheses regarding the mechanisms underlying these behavioural effects. The reliability of the behavioural differences implies that such perceptual inference tasks could be valuable tools in large-scale biomarker and neuroimaging studies.
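
Test-retest reliability of this kind is typically quantified with an intraclass correlation. A minimal sketch using pingouin, with illustrative per-participant effects (the data and column names are invented for the example):

```python
import pandas as pd
import pingouin as pg

# Per-participant cue effects measured in two sessions one week apart.
df = pd.DataFrame({
    "subject": list(range(6)) * 2,
    "session": ["week1"] * 6 + ["week2"] * 6,
    "cue_effect": [0.42, 0.18, 0.55, 0.31, 0.66, 0.25,
                   0.45, 0.15, 0.58, 0.28, 0.70, 0.22],
})
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="session", ratings="cue_effect")
print(icc[["Type", "ICC"]])  # a high ICC indicates a stable, trait-like effect
```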


2021 · Vol 15
Author(s): Flora M. Antunes, Manuel S. Malmierca

The corticothalamic (CT) pathways emanate from either Layer 5 (L5) or Layer 6 (L6) of the neocortex and far outnumber the ascending, thalamocortical pathways. The CT pathways provide the anatomical foundations for an intricate, bidirectional communication between thalamus and cortex. They act as dynamic circuits of information transfer with the ability to modulate, or even drive, the response properties of target neurons at each synaptic node of the circuit. L6 CT feedback pathways enable the cortex to shape the nature of its driving inputs by directly modulating the sensory message arriving at the thalamus. L5 CT pathways can drive their postsynaptic neurons and initiate a transthalamic corticocortical circuit by which cortical areas communicate with each other. For this reason, L5 CT pathways place the thalamus at the heart of information transfer through the cortical hierarchy. Recent evidence goes even further, suggesting that the thalamus, via CT pathways, regulates functional connectivity within and across cortical regions and might be engaged in cognition, behavior, and perceptual inference. Because CT projections are descending pathways that enable reciprocal and context-dependent communication between thalamus and cortex, we venture that they are particularly interesting in the context of hierarchical perceptual inference formulations such as those contemplated in predictive processing schemes, which so far rely heavily on cortical implementations. We discuss recent proposals suggesting that the thalamus, and particularly higher-order thalamus via transthalamic pathways, could coordinate and contextualize hierarchical inference in cortical hierarchies. We explore these ideas with a focus on the auditory system.


Author(s): Alexander Pastukhov, Claus-Christian Carbon

Abstract: We investigated how changes in dynamic spatial context influence visual perception. Specifically, we reexamined the perceptual coupling phenomenon, whereby two multistable displays viewed simultaneously tend to be in the same dominant state and switch in accord. Current models assume that this interaction reflects a mutual bias produced by a dominant perceptual state. In contrast, we demonstrate that the influence of spatial context is strongest when perception changes. First, we replicated earlier work using bistable kinetic-depth-effect displays, then extended it by employing asynchronous presentation to show that perceptual coupling cannot be accounted for by the static context provided by perceptually dominant states. Next, we demonstrated that perceptual coupling reflects a transient bias induced by perceptual change, in both ambiguous and disambiguated displays. We used a hierarchical Bayesian model to characterize its timing, demonstrating that the transient bias is induced 50–70 ms after the exogenous trigger event and decays within ~200–300 ms. Both endogenous and exogenous switches led to quantitatively and qualitatively similar perceptual consequences, activating similar perceptual reevaluation mechanisms within a spatial surround. We explain how these effects can be understood within a framework of transient selective visual attention or via local lateral connections within sensory representations. We suggest that the observed perceptual effects reflect general mechanisms of perceptual inference for dynamic visual scene perception.
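
The timing estimates suggest a simple descriptive kernel for the transient bias. The exponential form below is an assumption (the authors fit a hierarchical Bayesian model), with parameters chosen to echo the reported 50–70 ms induction and ~200–300 ms decay:

```python
import numpy as np

def transient_bias(t_ms, onset=60.0, tau=80.0, amplitude=1.0):
    """Bias exerted on the neighbouring display after a switch at t = 0:
    zero before `onset`, then exponential decay with time constant `tau`
    (all in ms). With tau = 80, the bias has decayed to ~5% of its peak
    by 300 ms, matching the reported ~200-300 ms window."""
    t = np.asarray(t_ms, dtype=float)
    return np.where(t < onset, 0.0, amplitude * np.exp(-(t - onset) / tau))

print(transient_bias([0, 60, 140, 300]))  # ≈ [0, 1, 0.37, 0.05]
```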

