multisensory perception
Recently Published Documents


TOTAL DOCUMENTS

194
(FIVE YEARS 74)

H-INDEX

24
(FIVE YEARS 4)

PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
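The reliability-weighted combination rule the abstract refers to (the forced-fusion component of Bayesian causal inference) can be sketched as follows; the function name and numeric values are illustrative, not the study's fitted parameters. Reliability is the inverse variance of each cue, and the fused estimate weights each cue by its relative reliability:

```python
def fuse(x_vis, x_aud, sigma_vis, sigma_aud):
    """Reliability-weighted average of two spatial cues.

    Reliability is the inverse variance (1/sigma^2) of each cue; the
    fused estimate weights each cue by its relative reliability, as in
    standard forced-fusion models of multisensory integration.
    """
    r_vis = 1.0 / sigma_vis**2
    r_aud = 1.0 / sigma_aud**2
    w_vis = r_vis / (r_vis + r_aud)
    return w_vis * x_vis + (1.0 - w_vis) * x_aud

# A reliable visual cue (small sigma) dominates the fused percept:
print(fuse(10.0, 0.0, sigma_vis=1.0, sigma_aud=3.0))  # 9.0
```

On this account, prestimulus attention to vision would act by lowering the effective visual sigma, thereby pulling the fused estimate towards the visual location.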


2021 ◽  
Author(s):  
Ingrid Hoelzl ◽  
Rémi Marie

Western humanism has established a reifying and predatory relation to the world. While its collateral visual regime, the perspectival image, still saturates our screens, this relation has reached a dead end. Rather than desperately turning towards transhumanism and geoengineering, we need to readjust our position within community Earth. Facing this predicament, Ingrid Hoelzl and Rémi Marie develop the notion of the common image, understood as a multisensory perception across species, and of common ethics, a comportment that transcends species-bound ways of living. Highlighting the notion of the common as opposed to the immune, the authors ultimately advocate otherness as a common ground for a larger-than-human communism.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer ◽  
Christoph Kayser

To organize the plethora of sensory signals from our environment into a coherent percept, our brain relies on the processes of multisensory integration and sensory recalibration. We here asked how visuo-proprioceptive integration and recalibration are shaped by the presence of more than one visual stimulus, hence paving the way to study multisensory perception under more naturalistic settings with multiple signals per sensory modality. We used a cursor-control task in which proprioceptive information on the endpoint of a reaching movement was complemented by two visual stimuli providing additional information on the movement endpoint. The visual stimuli were briefly shown, one synchronously with the hand reaching the movement endpoint, the other delayed. In Experiment 1, the judgments of hand movement endpoint revealed integration and recalibration biases oriented towards the position of the synchronous stimulus and away from the delayed one. In Experiment 2 we contrasted two alternative accounts: that only the temporally more proximal visual stimulus enters integration similar to a winner-takes-all process, or that the influences of both stimuli superpose. The proprioceptive biases revealed that integration—and likely also recalibration—are shaped by the superposed contributions of multiple stimuli rather than by only the most powerful individual one.
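The two accounts contrasted in Experiment 2 can be sketched as simple bias models; the weights below are arbitrary illustrative values, not the study's fitted parameters:

```python
def winner_takes_all(prop, v_sync, v_delayed, w=0.5):
    # Only the temporally proximal (synchronous) visual stimulus
    # biases the proprioceptive endpoint estimate.
    return prop + w * (v_sync - prop)

def superposition(prop, v_sync, v_delayed, w_sync=0.4, w_delayed=0.2):
    # Both visual stimuli contribute, each with its own weight
    # (e.g. a lower weight for the delayed stimulus).
    return prop + w_sync * (v_sync - prop) + w_delayed * (v_delayed - prop)

# Proprioceptive endpoint at 0, synchronous cue at +2, delayed cue at -2:
print(winner_takes_all(0.0, 2.0, -2.0))  # 1.0
print(superposition(0.0, 2.0, -2.0))     # 0.4
```

The diagnostic difference is that under superposition the delayed stimulus partially cancels the synchronous one, which is the pattern the reported proprioceptive biases support.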


2021 ◽  
pp. 174702182110503
Author(s):  
Elena Panagiotopoulou ◽  
Laura Crucianelli ◽  
Alessandra Lemma ◽  
Katerina Fotopoulou

People tend to evaluate their own traits and abilities favourably, and such favourable self-perceptions extend to attractiveness. However, the exact mechanism underlying this self-enhancement bias remains unclear. One possibility could be the identification with attractive others through blurring of self-other boundaries. Across two experiments, we used the enfacement illusion to investigate the effect of others’ attractiveness on the multisensory perception of the self. In Experiment 1 (N = 35), participants received synchronous or asynchronous interpersonal visuo-tactile stimulation with an attractive and a non-attractive face. In Experiment 2 (N = 35), two new faces were used and spatial incongruency was introduced as a control condition. The results showed that increased ratings of attractiveness of an unfamiliar face lead to blurring of self-other boundaries, allowing the identification of our psychological self with another's physical self and, specifically, their face; this seems to be unrelated to perceived own attractiveness. The effect of facial attractiveness on face ownership showed dissociable mechanisms, with multisensory integration modulating the effect on similarity but not on identification, which may be based purely on vision. Overall, our findings suggest that others’ attractiveness may lead to positive distortions of the self. This research provides a psychophysical starting point for studying the impact of others' attractiveness on self-face recognition, which can be particularly important for individuals with malleable, embodied self-other boundaries and body image disturbances.


2021 ◽  
Vol 21 (9) ◽  
pp. 2692
Author(s):  
Shea E. Duarte ◽  
Joy Geng

Author(s):  
Yi Lin ◽  
Hongwei Ding ◽  
Yang Zhang

Purpose: This study aimed to examine the Stroop effects of verbal and nonverbal cues and their relative impacts on gender differences in unisensory and multisensory emotion perception. Method: Experiment 1 investigated how well 88 normal Chinese adults (43 women and 45 men) could identify emotions conveyed through face, prosody and semantics as three independent channels. Experiments 2 and 3 further explored gender differences during multisensory integration of emotion through a cross-channel (prosody-semantics) and a cross-modal (face-prosody-semantics) Stroop task, respectively, in which 78 participants (41 women and 37 men) were asked to selectively attend to one of the two or three communication channels. Results: The integration of accuracy and reaction time data indicated that paralinguistic cues (i.e., face and prosody) of emotions were consistently more salient than linguistic ones (i.e., semantics) throughout the study. Additionally, women demonstrated advantages in processing all three types of emotional signals in the unisensory task, but only preserved their strengths in paralinguistic processing and showed greater Stroop effects of nonverbal cues on verbal ones during multisensory perception. Conclusions: These findings demonstrate clear gender differences in verbal and nonverbal emotion perception that are modulated by sensory channels, which have important theoretical and practical implications. Supplemental Material: https://doi.org/10.23641/asha.16435599
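The Stroop effects reported above are conventionally quantified as the reaction-time cost of incongruent relative to congruent trials; the data below are hypothetical, purely to show the computation:

```python
def stroop_effect(rt_incongruent, rt_congruent):
    """Stroop interference score: mean reaction-time cost (e.g. in ms)
    of incongruent trials relative to congruent trials."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_incongruent) - mean(rt_congruent)

# Hypothetical per-trial reaction times (ms) for one participant:
print(stroop_effect([720, 700, 740], [640, 660, 620]))  # 80.0
```

A larger score indicates stronger interference from the to-be-ignored channel, which is how the greater effect of nonverbal cues on verbal ones would surface in the data.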


2021 ◽  
Author(s):  
Hame Park ◽  
Christoph Kayser

Whether two sensory cues interact during perceptual judgments depends on their immediate properties, but as suggested by Bayesian models, also on the observer's a priori belief that these originate from a common source. While in many experiments this a priori belief is considered fixed, in real life it must adapt to the momentary context or environment. To understand the adaptive nature of human multisensory perception we investigated the context-sensitivity of spatial judgments in a ventriloquism paradigm. We exposed observers to audio-visual stimuli whose discrepancy either varied over a wider (±46°) or a narrower range (±26°) and hypothesized that exposure to a wider range of discrepancies would facilitate multisensory binding by increasing participants' a priori belief about a common source for a given discrepancy. Our data support this hypothesis by revealing an enhanced integration (ventriloquism) bias in the wider context, an effect echoed in Bayesian causal inference models fit to participants' data, which assigned a stronger a priori integration tendency during the wider context. Interestingly, the immediate ventriloquism aftereffect, a multisensory response bias obtained following a multisensory test trial, was not affected by the contextual manipulation, although participants' confidence in their spatial judgments differed between contexts for both integration and recalibration trials. These results highlight the context-sensitivity of multisensory binding and suggest that the immediate ventriloquism aftereffect is not a purely sensory-level consequence of the multisensory integration process.
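The role of the a priori common-source belief can be sketched with a deliberately simplified Bayesian causal inference model (flat spatial prior, uniform likelihood under independent sources); the parameter values below are illustrative, not the study's fitted ones:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def ventriloquism_estimate(x_vis, x_aud, sigma_vis, sigma_aud,
                           p_common, span=180.0):
    """Model-averaged auditory location estimate under a simplified
    Bayesian causal inference scheme. p_common is the a priori belief
    in a common source; the abstract's 'wider context' would map onto
    a larger p_common."""
    # Likelihood of the observed audio-visual discrepancy given a
    # common source, versus independent sources uniform over `span`:
    like_c1 = normal_pdf(x_vis - x_aud, 0.0,
                         sqrt(sigma_vis**2 + sigma_aud**2))
    like_c2 = 1.0 / span
    post_c1 = (p_common * like_c1) / (p_common * like_c1
                                      + (1 - p_common) * like_c2)
    # Reliability-weighted fusion if the sources are common:
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_aud**2)
    fused = w_vis * x_vis + (1 - w_vis) * x_aud
    # Model averaging: mix fused and segregated (auditory-only) estimates.
    return post_c1 * fused + (1 - post_c1) * x_aud

# A larger p_common yields a stronger ventriloquism bias towards vision:
print(ventriloquism_estimate(10, 0, 2, 8, p_common=0.3))
print(ventriloquism_estimate(10, 0, 2, 8, p_common=0.8))
```

In this sketch the contextual manipulation changes only p_common, which is exactly the locus at which the fitted models in the abstract distinguish the wider from the narrower context.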


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Fatmah Abdulrahman Baothman

Artificial intelligence (AI) is progressively changing techniques of teaching and learning. In the past, the objective was to provide an intelligent tutoring system that could enhance skills, control, knowledge construction, and intellectual engagement without intervention from a human teacher. This paper proposes a definition of AI focused on enhancing the learning capabilities and interactions of the humanoid agent Nao. The aim is to increase Nao's intelligence using big data by activating multisensory perception, such as visual and auditory stimulus modules and speech-related stimuli, as well as various movements. The method is to develop a toolkit that enables Arabic speech recognition and implements the Haar algorithm for robust image recognition, improving Nao's capabilities during interactions with a child in a mixed reality system using big data. The experiment design and testing processes were conducted by implementing an AI design principle, namely the three-constituent principle. Four experiments were conducted to boost Nao's intelligence level, involving 100 children and different environments (class, lab, home, and mixed reality with a Leap Motion Controller (LMC)). An objective function and an operational time cost function were developed to improve Nao's learning experience across environments, achieving the best result of 4.2 seconds per number recognition. The results showed an increase in Nao's intelligence from the level of a 3-year-old to that of a 7-year-old child in learning simple mathematics, with the best communication achieving a kappa ratio of 90.8%, a corpus exceeding 390,000 segments, and a 93% success rate when both the auditory and vision modules of the agent Nao were activated.
The developed toolkit uses Arabic speech recognition and the Haar algorithm in a mixed reality system with big data, enabling Nao to achieve a 94% learning success rate at a distance of 0.09 m; when using the LMC in mixed reality, hand sign gestures were recognized with the highest accuracy of 98.50% using the Haar algorithm. Overall, Nao gradually achieved a higher learning success rate as the environment changed and multisensory perception increased. This paper also proposes a cutting-edge research direction for fostering child-robot education in real time.
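The Haar algorithm mentioned above rests on Haar-like rectangle features computed in constant time from an integral image (summed-area table). The sketch below shows that underlying trick, not the paper's actual toolkit or detector:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of the h x w rectangle with top-left corner (top, left),
    computed in O(1) from the integral image."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, h, w):
    """Two-rectangle Haar-like feature: left half minus right half,
    responding to vertical edges (e.g. the sides of a face)."""
    half = w // 2
    return (rect_sum(ii, top, left, h, half)
            - rect_sum(ii, top, left + half, h, half))

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # equals img[:, :2].sum() - img[:, 2:].sum()
```

In practice such features are computed by the thousands over a detection window and combined by a boosted cascade, as in OpenCV's Haar cascade classifiers; the constant-time rectangle sums are what make that feasible in real time.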


2021 ◽  
pp. 110598
Author(s):  
Christina Dietz ◽  
David Cook ◽  
Colin Wilson ◽  
Pedro Oliveira ◽  
Rebecca Ford
