Multisensory Integration
Recently Published Documents


TOTAL DOCUMENTS: 857 (last five years: 219)

H-INDEX: 73 (last five years: 7)

2022 · Vol. 14
Author(s): Miguel Skirzewski, Stéphane Molotchnikoff, Luis F. Hernandez, José Fernando Maya-Vetencourt

In the mammalian brain, information processing within individual sensory modalities and global mechanisms of multisensory integration together facilitate perception. Emerging experimental evidence suggests that the contribution of multisensory integration to sensory perception is far more complex than previously thought. Here we review how associative areas such as the prefrontal cortex, which receive and integrate inputs from diverse sensory modalities, can affect information processing in unisensory systems via downstream signaling. We focus on the influence of the medial prefrontal cortex on information processing in the visual system and on whether this phenomenon can be exploited clinically to treat higher-order visual dysfunctions. We propose that non-invasive, multisensory stimulation strategies such as environmental enrichment and/or attention-related tasks could be of clinical relevance for counteracting cerebral visual impairment.


Author(s): Elke B. Lange, Jens Fünderich, Hartmut Grimm

Abstract: We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between unimodal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness served as a control: the unimodal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers’ orofacial movements are dominated by sound production, their facial expressions can communicate the emotions composed into the music, and observers do not simply rely on the audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.


2022
Author(s): Didem Katircilar, Funda Yildirim

Multisensory integration refers to the integration of multiple senses by the nervous system. Auditory and tactile features are closely related, as can be seen from the fact that adjectives such as soft, rough, and warm are commonly used for both auditory and tactile qualities. Previous studies show that auditory cues play an important role in assessing the roughness of a surface: different characteristics of auditory cues, such as amplitude and frequency, can cause a surface to be perceived as rougher or smoother. In this study, we investigated the effects of harmonic and inharmonic sounds on roughness perception, to examine whether auditory roughness affects tactile roughness perception when the two are presented simultaneously. We expected participants to perceive surfaces as rougher while listening to inharmonic sounds, owing to auditory roughness. We presented simultaneous and sequential harmonic and inharmonic sounds together with three sandpapers of different roughness levels (P100, P120, and P150 grit) to the participants. We found that participants perceived the P120-grit sandpaper as rougher while listening to simultaneous inharmonic sounds than to simultaneous harmonic sounds. However, no effect of harmonicity was observed for the P100 and P150 sandpapers. We suggest that auditory roughness may enhance tactile roughness perception for surfaces with particular roughness levels, possibly when the roughness estimate from the tactile sense alone remains ambiguous.
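To make the harmonic/inharmonic distinction concrete, here is a minimal synthesis sketch (not the authors' actual stimuli; the function name, partial count, and jitter values are illustrative assumptions only): a harmonic complex places partials at integer multiples of a fundamental, while an inharmonic one perturbs those partial frequencies, which tends to increase perceived auditory roughness.

```python
import numpy as np

def complex_tone(f0=220.0, n_partials=8, jitter=0.0, dur=1.0, sr=44100, seed=0):
    """Sum of sinusoidal partials.

    jitter = 0.0 -> harmonic tone (partials at exact integer multiples of f0).
    jitter > 0.0 -> inharmonic tone (each partial shifted by up to +/- jitter * f0).
    All parameter values are illustrative, not taken from the study.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * sr)) / sr
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        fk = k * f0 + rng.uniform(-jitter, jitter) * f0
        tone += np.sin(2 * np.pi * fk * t) / k  # 1/k rolloff for a natural spectrum
    return tone / np.max(np.abs(tone))

harmonic = complex_tone(jitter=0.0)    # harmonic stimulus
inharmonic = complex_tone(jitter=0.3)  # inharmonic (roughness-inducing) stimulus
```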


2022 · pp. 1-29
Author(s): Andrew R. Wagner, Megan J. Kobel, Daniel M. Merfeld

Abstract: In an effort to characterize the factors influencing the perception of self-motion rotational cues, vestibular self-motion perceptual thresholds were measured in 14 subjects for rotations in the roll and pitch planes, as well as in the planes aligned with the anatomical orientation of the vertical semicircular canals (i.e., left anterior-right posterior, LARP; and right anterior-left posterior, RALP). To determine the multisensory influence of concurrent otolith cues, thresholds within each plane of motion were measured at four discrete frequencies for rotations about earth-horizontal (i.e., tilts; EH) and earth-vertical axes (i.e., with the head positioned in the plane of rotation; EV). We found that the perception of rotations stimulating primarily the vertical canals was consistent with the behavior of a high-pass filter in all planes of motion, with velocity thresholds increasing at lower frequencies of rotation. In contrast, tilt (i.e., EH rotation) velocity thresholds, which reflect stimulation of both the canals and the otoliths (i.e., multisensory integration), decreased at lower frequencies and were significantly lower than earth-vertical rotation thresholds at each frequency below 2 Hz. These data suggest that multisensory integration of otolithic gravity cues with semicircular-canal rotation cues enhances perceptual precision for tilt motions at frequencies below 2 Hz. We also showed that rotation thresholds depended, at least partially, on the orientation of the rotation plane relative to the anatomical alignment of the vertical canals. Collectively, these data provide the first comprehensive report of how frequency and axis of rotation influence the perception of rotational self-motion cues stimulating the vertical canals.
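As a rough illustration of the high-pass-filter behavior described above (a generic sketch, not the authors' fitted model; the asymptotic threshold and cutoff values are assumptions), velocity thresholds that rise at low frequency can be written as T(f) = T_inf * sqrt(1 + (f_c / f)^2):

```python
import numpy as np

def hp_threshold(freq_hz, t_inf=1.0, f_cutoff=0.3):
    """Velocity threshold (deg/s) under a simple high-pass-filter model.

    t_inf    -- assumed asymptotic threshold at high frequencies (deg/s)
    f_cutoff -- assumed cutoff frequency (Hz); thresholds rise below it
    """
    freq_hz = np.asarray(freq_hz, dtype=float)
    return t_inf * np.sqrt(1.0 + (f_cutoff / freq_hz) ** 2)

# Thresholds grow as rotation frequency drops below the cutoff, mirroring
# the "velocity thresholds increasing at lower frequencies" pattern above.
for f in (0.1, 0.2, 0.5, 1.0, 2.0):
    print(f"{f:4.1f} Hz -> {hp_threshold(f):5.2f} deg/s")
```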


PLoS ONE · 2021 · Vol. 16 (12) · e0261060
Author(s): Sofia Sacchetti, Francis McGlone, Valentina Cazzato, Laura Mirams

Affective touch refers to the emotional and motivational facets of tactile sensation and has been linked to the activation of a specialised system of mechanosensory afferents (the CT system) that responds optimally to slow, caress-like touch. Affective touch has been shown to play an important role in building the bodily self: the multisensory, integrated, global awareness of one’s own body. Here we investigated the effects of affective touch on subsequent tactile awareness and multisensory integration using the Somatic Signal Detection Task (SSDT). During the SSDT, participants were required to detect near-threshold tactile stimulation on their cheek in the presence/absence of a concomitant light. Participants performed the SSDT twice, before and after receiving a touch manipulation. Participants were divided into two groups: one received affective touch (CT-optimal; n = 32) and the second received non-affective touch (non-CT-optimal; n = 34). Levels of arousal (skin conductance levels, SCLs) and mood changes after the touch manipulation were also measured. Affective touch led to an increase in tactile accuracy, as indicated by fewer false reports of touch and a trend towards higher tactile sensitivity during the subsequent SSDT. Conversely, non-affective touch induced a partial decrease in the correct detection of touch, possibly due to a desensitization of skin mechanoreceptors. Both affective and non-affective touch induced a more positive mood and higher SCLs, and the increase in SCLs was greater after affective touch. We conclude that receiving affective touch enhances the sense of bodily self, thereby increasing perceptual accuracy and awareness. Higher SCLs are suggested as a possible mediator linking affective touch to greater tactile accuracy. Clinical implications are discussed.
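For readers unfamiliar with how detection performance is summarized in tasks like the SSDT, the following is a generic signal-detection sketch using standard textbook formulas (not the authors' analysis code; the trial counts and function name are illustrative):

```python
from statistics import NormalDist

def sdt_summary(hits, misses, false_alarms, correct_rejections):
    """Hit rate, false-alarm rate, and d' from trial counts.

    A log-linear correction (adding 0.5 to each count) keeps the z-transform
    finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    return hit_rate, fa_rate, d_prime

# Illustrative counts only: fewer false reports of touch yield a higher d'.
print(sdt_summary(hits=30, misses=10, false_alarms=5, correct_rejections=35))
```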


2021
Author(s): Vincent van de Ven, Guyon Kleuters, Joey Stuiver

We memorize our daily life experiences, which are often multisensory in nature, by segmenting them into distinct event models in accordance with perceived contextual or situational changes. However, very little is known about how multisensory integration affects segmentation, as most studies have focused on unisensory (visual or auditory) segmentation. In three experiments, we investigated the effect of multisensory integration on segmentation in memory and perception. In Experiment 1, participants encoded lists of visual objects while auditory and visual contexts changed synchronously or asynchronously. After each list, we tested recognition and temporal associative memory for pictures that were encoded in the same audio-visual context or that crossed a synchronous or an asynchronous multisensory change. We found no effect of multisensory integration on recognition memory: synchronous and asynchronous changes similarly impaired recognition for pictures encoded at those changes, compared to pictures encoded further away from them. Multisensory integration did affect temporal associative memory, which was worse for pictures encoded at synchronous than at asynchronous changes. Follow-up experiments showed that this effect was due neither to the higher complexity of multisensory over unisensory contexts (Experiment 2) nor to the temporal unpredictability of contextual changes inherent to Experiment 1 (Experiment 3). We argue that participants formed situational expectations through multisensory integration, such that synchronous multisensory changes deviated more strongly from those expectations than asynchronous changes. We discuss our findings in light of supporting and conflicting findings on uni- and multisensory segmentation.


2021 · pp. 214-234
Author(s): Renee Timmers

This chapter explores the insights that research into cross-modal correspondences and multisensory integration offers for our understanding and investigation of tempo and timing in music performance. Because tempo and timing are generated through action, actions and sensory modalities are coupled in performance and form a multimodal unit of intention. This coupled intention is likely to demonstrate characteristics of cross-modal correspondences, linking movement and sound. Testable predictions are offered by research into cross-modal correspondences, which has so far found confirmation mainly in controlled perceptual experiments. For example, a fast tempo is predicted to be linked to smaller movement that is higher in space. Confirmation in the context of performance is complicated by interacting associations with intentions related to, for example, dynamics and energy, which can be addressed through appropriate experimental manipulation. This avenue of research highlights the close association between action and cross-modality, conceiving action as a source of cross-modal correspondences as well as indicating the cross-modal basis of actions. For timing and tempo concepts, action and cross-modality offer concrete and embodied modalities of expression.

