crossmodal integration
Recently Published Documents

TOTAL DOCUMENTS: 42 (FIVE YEARS: 6)
H-INDEX: 14 (FIVE YEARS: 0)

2021 ◽  
pp. 174702182110480
Author(s):  
Hirokazu Doi ◽  
Kazuki Yamaguchi ◽  
Shoma Sugisaki

Timbre is an integral dimension of musical sound quality, and people accumulate knowledge about the timbre of sounds generated by various musical instruments throughout their lives. Recent studies have proposed that musical sound is crossmodally integrated with visual information related to the sound. However, little is known about the influence of visual information on musical timbre perception. The present study investigated the automaticity of crossmodal integration between musical timbre and visual images of hands playing musical instruments. In the experiment, an image of hands playing the piano or the violin, or a scrambled control image, was presented to participants unconsciously. Simultaneously, participants heard intermediate sounds synthesised by morphing piano and violin sounds of the same note. The participants answered whether the musical tone sounded like a piano or a violin. The results revealed that participants were more likely to perceive the sound as a violin when an image of hands playing the violin was presented unconsciously than when an image of hands playing the piano was presented. This finding indicates that timbre perception of musical sound is influenced by visual information about musical performance without conscious awareness, supporting the automaticity of crossmodal integration in musical timbre perception.
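A morph-continuum paradigm like this one is typically analysed by fitting a psychometric function to the proportion of "violin" responses at each morph level; a shift of the curve's midpoint between visual conditions then quantifies the crossmodal bias. The sketch below fits a logistic psychometric function by grid search over hypothetical data (the abstract does not report the study's actual fitting procedure, and the morph levels and proportions here are illustrative):

```python
import numpy as np

def fit_psychometric(morph_levels, p_violin):
    """Fit p = 1 / (1 + exp(-(x - mu) / s)) to the proportion of 'violin'
    responses across a piano-violin morph continuum by brute-force grid
    search. Returns (mu, s), where mu is the point of subjective equality."""
    x = np.asarray(morph_levels, dtype=float)
    y = np.asarray(p_violin, dtype=float)
    mus = np.linspace(x.min(), x.max(), 201)   # candidate midpoints
    ss = np.linspace(0.01, 1.0, 100)           # candidate slopes
    best, best_err = (mus[0], ss[0]), np.inf
    for mu in mus:
        for s in ss:
            pred = 1.0 / (1.0 + np.exp(-(x - mu) / s))
            err = np.sum((pred - y) ** 2)
            if err < best_err:
                best, best_err = (mu, s), err
    return best

# Morph levels run from 0 (pure piano) to 1 (pure violin); under unconscious
# violin images, a midpoint shifted toward 0 would indicate a visual bias.
mu, s = fit_psychometric([0.0, 0.25, 0.5, 0.75, 1.0],
                         [0.05, 0.20, 0.50, 0.80, 0.95])
```

With the symmetric toy data above, the fitted midpoint lands near 0.5, as expected.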



2021 ◽  
Vol 12 ◽  
Author(s):  
Ting Lu ◽  
Jingjing Yang ◽  
Xinyu Zhang ◽  
Zihan Guo ◽  
Shengnan Li ◽  
...  

Depression is associated with deficits in emotion processing, and human emotional processing is crossmodal. This article investigates whether audiovisual emotional integration differs between a depression group and a normal group using a high-resolution event-related potential (ERP) technique. We designed a visual and/or auditory detection task. The behavioral results showed that responses to bimodal audiovisual stimuli were faster than those to unimodal auditory or visual stimuli, indicating that crossmodal integration of emotional information occurred in both the depression and normal groups. The ERP results showed that the N2 amplitude induced by sadness was significantly higher than that induced by happiness. Participants in the depression group showed larger N1 and P2 amplitudes, and the average amplitude of the LPP evoked in the frontocentral lobe in the depression group was significantly lower than that in the normal group. These results indicate that audiovisual emotional processing mechanisms differ between depressed and non-depressed college students.
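The inference from faster bimodal responses to genuine integration is commonly backed in the multisensory literature by Miller's race-model inequality, which bounds how fast a parallel race of separate unimodal channels can be. A minimal sketch of that check on empirical reaction-time distributions (illustrative only; the abstract does not state which analysis the authors used):

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of a reaction-time sample, evaluated at times t."""
    sample = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(sample, t, side="right") / len(sample)

def race_model_violated(rt_a, rt_v, rt_av, t_grid):
    """Miller's race-model inequality: for any race between separate auditory
    and visual channels, P(RT_av <= t) cannot exceed P(RT_a <= t) + P(RT_v <= t).
    An excess at some t is taken as evidence of genuine crossmodal integration."""
    return bool(np.any(ecdf(rt_av, t_grid)
                       > ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid)))

# Toy data (ms): bimodal responses faster than any race of the unimodal
# channels could produce, so the inequality is violated.
t = np.linspace(200, 600, 41)
print(race_model_violated([300, 400, 500], [320, 420, 520],
                          [250, 260, 270], t))  # True
```

If the bimodal distribution merely matched the faster unimodal one, the bound would hold and no integration could be inferred from speed alone.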





2021 ◽  
Author(s):  
Michael Maksimowski

In addition to auditory information, music perception often involves visual and vibrotactile information, making it an ideal domain through which to study cross-modal integration. Recent research has demonstrated a strong influence of visual information on auditory judgments concerning music. However, we have very little empirical information regarding the integration of vibrotactile information in music. In Experiment 1, participants made judgments of interval size for unimodal presentations of melodic intervals in auditory, visual, and vibrotactile conditions. In Experiment 2, participants made judgments of interval size for cross-modal presentations of intervals composed of stimuli from the three unimodal conditions of Experiment 1. In Experiment 3, participants were trained with vibrotactile stimuli to assess whether learning benefits audio-vibrotactile integration in music perception. The results are discussed in light of differences in the extent of visual and vibrotactile influence on auditory judgments and the role of learning in cross-modal integration in music.



2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Hongwei Tan ◽  
Yifan Zhou ◽  
Quanzheng Tao ◽  
Johanna Rosen ◽  
Sebastiaan van Dijken

AbstractThe integration and interaction of vision, touch, hearing, smell, and taste in the human multisensory neural network facilitate high-level cognitive functionalities, such as crossmodal integration, recognition, and imagination for accurate evaluation and comprehensive understanding of the multimodal world. Here, we report a bioinspired multisensory neural network that integrates artificial optic, afferent, auditory, and simulated olfactory and gustatory sensory nerves. With distributed multiple sensors and biomimetic hierarchical architectures, our system can not only sense, process, and memorize multimodal information, but also fuse multisensory data at hardware and software level. Using crossmodal learning, the system is capable of crossmodally recognizing and imagining multimodal information, such as visualizing alphabet letters upon handwritten input, recognizing multimodal visual/smell/taste information or imagining a never-seen picture when hearing its description. Our multisensory neural network provides a promising approach towards robotic sensing and perception.
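Software-level fusion of multisensory data can be illustrated, in greatly simplified form, by late fusion of per-modality feature vectors: normalise each modality so no sensor dominates by raw scale, then concatenate into one crossmodal representation. The scheme and modality names below are a generic sketch, not the paper's actual architecture:

```python
import numpy as np

def fuse_modalities(features):
    """Crude late fusion: L2-normalise each modality's feature vector,
    then concatenate in a fixed (sorted-name) order so the layout of the
    fused representation is deterministic."""
    parts = []
    for name in sorted(features):
        v = np.asarray(features[name], dtype=float)
        norm = np.linalg.norm(v)
        parts.append(v / norm if norm > 0 else v)
    return np.concatenate(parts)

# Toy per-modality embeddings; keys are sorted, so the fused vector is
# audio, then touch, then vision.
fused = fuse_modalities({
    "vision": [3.0, 4.0],   # -> [0.6, 0.8]
    "touch":  [0.0, 2.0],   # -> [0.0, 1.0]
    "audio":  [1.0, 0.0],   # -> [1.0, 0.0]
})
print(fused)  # [1.  0.  0.  1.  0.6 0.8]
```

A downstream classifier (or an associative memory, for "imagination"-style recall) would then operate on the fused vector rather than on any single modality.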



2020 ◽  
Vol 13 ◽  
Author(s):  
Zhao Zhang ◽  
Weiqi He ◽  
Yuchen Li ◽  
Mingming Zhang ◽  
Wenbo Luo


2019 ◽  
Author(s):  
Margaret Gullick ◽  
James R. Booth

Crossmodal integration is a critical component of successful reading, and yet it has been less studied than reading's unimodal subskills. Proficiency with the sounds of a language (i.e., the phonemes) and with the visual representations of these sounds (graphemes) are both important and necessary precursors for reading, but the formation of a stable integrated representation that combines and links these aspects, and subsequent fluent and automatic access to this crossmodal representation, is unique to reading and is required for its success. Indeed, individuals with specific difficulties in reading, as in dyslexia, demonstrate impairments not only in phonology and orthography but also in integration. Impairments in crossmodal integration alone could result in disordered reading via disrupted formation of or access to phoneme–grapheme associations. Alternatively, the phonological deficits noted in many individuals with dyslexia may lead to reading difficulties via issues with integration: children who cannot consistently identify and manipulate the sounds of their language will also have trouble matching these sounds to their visual representations, resulting in the manifested deficiencies. Here we discuss the importance of crossmodal integration in reading, both generally and as a potential specific causal deficit in the case of dyslexia. We examine the behavioral, functional, and structural neural evidence for a crossmodal, as compared to unimodal, processing issue in individuals with dyslexia in comparison to typically developing controls. We then present an initial review of work using crossmodal- versus unimodal-based reading interventions and training programs aimed at the amelioration of reading difficulties. Finally, we present some remaining questions reflecting potential areas for future research into this topic.



2019 ◽  
Author(s):  
Focko L. Higgen ◽  
Philipp Ruppel ◽  
Michael Görner ◽  
Matthias Kerzel ◽  
Norman Hendrich ◽  
...  

The quality of crossmodal perception hinges on two factors: the accuracy of the independent unimodal perception and the ability to integrate information from different sensory systems. In humans, the ability for cognitively demanding crossmodal perception diminishes from young to old age.

To research to what degree impediments of these two abilities contribute to the age-related decline, and to evaluate how this might apply to artificial systems, we replicate a medical study on visuo-tactile crossmodal pattern discrimination, utilizing state-of-the-art tactile sensing technology and artificial neural networks. We explore the perception of each modality in isolation as well as crossmodal integration.

We show that in an artificial system the integration of complex high-level unimodal features outperforms the comparison of independent unimodal classifications at low stimulus intensities, where errors frequently occur. In comparison to humans, the artificial system outperforms older participants in the unimodal as well as the crossmodal condition. However, compared to younger participants, the artificial system performs worse at low stimulus intensities. Younger participants seem to employ more efficient crossmodal integration mechanisms than modelled in the proposed artificial neural networks.

Our work creates a bridge between neurological research and embodied artificial neurocognitive systems and demonstrates how collaborative research might help to derive hypotheses from the allied field. Our results indicate that empirically derived neurocognitive models can inform the design of future neurocomputational architectures. For crossmodal processing, sensory integration at lower hierarchical levels, as suggested for efficient processing in the human brain, seems to improve the performance of artificial neural networks.
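The reported advantage of integrating unimodal features over comparing independent unimodal classifications can be illustrated with a toy signal-detection simulation: pooling raw evidence before deciding beats voting over separate per-modality decisions when stimulus intensity (signal-to-noise ratio) is low. This is a hypothetical sketch, not the study's actual networks or stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)

def compare_fusion(n=20000, signal=0.5, noise=1.0):
    """Each trial has a binary label observed through two independent noisy
    'modalities'. Returns (decision-level accuracy, feature-level accuracy)."""
    labels = rng.integers(0, 2, n)
    mean = np.where(labels == 1, signal, -signal)
    vis = mean + rng.normal(0.0, noise, n)   # visual evidence
    tac = mean + rng.normal(0.0, noise, n)   # tactile evidence

    # Decision-level fusion: classify each modality alone, then vote;
    # disagreements are resolved at chance.
    v_dec, t_dec = (vis > 0).astype(int), (tac > 0).astype(int)
    late = np.where(v_dec == t_dec, v_dec, rng.integers(0, 2, n))

    # Feature-level fusion: pool the raw evidence first, decide once.
    early = (vis + tac > 0).astype(int)

    return (late == labels).mean(), (early == labels).mean()

acc_late, acc_early = compare_fusion()
# With these parameters, feature-level fusion reliably wins (~0.76 vs ~0.69),
# because summing evidence raises the effective signal-to-noise ratio before
# any information is discarded by thresholding.
```

The gap widens as `signal` shrinks relative to `noise`, mirroring the finding that feature integration pays off most at low stimulus intensities.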



2019 ◽  
Author(s):  
Focko L. Higgen ◽  
Charlotte Heine ◽  
Lutz Krawinkel ◽  
Florian Göschl ◽  
Andreas K. Engel ◽  
...  

One of the pivotal challenges of aging is to maintain independence in the activities of daily life. In order to adapt to changes in the environment, it is crucial to continuously process and accurately combine simultaneous input from different sensory systems, i.e., crossmodal integration.

With aging, performance decreases in multiple cognitive domains. The processing of sensory stimuli constitutes one of the key features of this deterioration. Age-related sensory impairments affect all modalities, substantiated by decreased acuity in visual, auditory, or tactile detection tasks.

However, whether this decline of sensory processing leads to impairments in crossmodal integration remains an unresolved question. While some researchers propose that crossmodal integration degrades with age, others suggest that it is conserved or even gains compensatory importance.

To address this question, we compared behavioral performance of older and young participants in a well-established crossmodal matching task, requiring the evaluation of congruency in simultaneously presented visual and tactile patterns. Older participants performed significantly worse than young controls in the crossmodal task when being stimulated at their individual unimodal visual and tactile perception thresholds. Performance increased with adjustment of stimulus intensities. This improvement was driven by better detection of congruent stimulus pairs (p < 0.01), while detection of incongruent pairs was not significantly enhanced (p = 0.12).

These results indicate that age-related impairments lead to poor performance in complex crossmodal scenarios and demanding cognitive tasks. Performance is enhanced when inputs to the visual and tactile systems are congruent. Congruency effects might therefore be used to develop strategies for cognitive training and neurological rehabilitation.


