crossmodal perception
Recently Published Documents


TOTAL DOCUMENTS: 16 (FIVE YEARS: 5)
H-INDEX: 4 (FIVE YEARS: 1)

2021 · Vol 12
Author(s): Sandra Courrèges, Rim Aboulaasri, Anjali Bhatara, Marie-Héloïse Bardel

In the present series of studies, we investigated crossmodal perception of odor and texture. In four studies, participants tried two textures of face cream, one high viscosity (HV) and one low viscosity (LV), each with one of three levels of added odor (standard level, half of standard, or base [no added odor]), and then reported their levels of well-being. They also reported their perceptions of the face creams, including liking (global liking of the product and liking of its texture) and “objective” evaluations on just-about-right (JAR) scales (texture and visual appearance evaluations). In Study 1, women in France tried the creams on their hands, as they would when testing them in a store, and in Study 2, a second group of French women tried the creams on their faces, as they would at home. In Studies 3 and 4, these same two procedures were repeated in China. Results showed that both odor and texture affected well-being, liking, and JAR ratings, including through interaction effects. Although effects varied by country and context (hand or face), the addition of odor to the creams generally increased reports of well-being, global liking, and texture liking, and in some cases affected the “objective” evaluations of texture. This is one of the first investigations of the impact of crossmodal olfactory and tactile perception on well-being, and it reinforces previous literature showing the importance of olfaction for well-being.


2021 · Vol 83 (6) · pp. 377-381
Author(s): Maureen E. Dunbar, Jacqueline J. Shade

In a traditional anatomy and physiology lab, the general senses – temperature, pain, touch, pressure, vibration, and proprioception – and the special senses – olfaction (smell), vision, gustation (taste), hearing, and equilibrium – are typically taught in isolation. In reality, information derived from these individual senses interacts to produce the complex sensory experience that constitutes perception. To introduce students to the concept of multisensory integration, a crossmodal perception lab was developed. In this lab, students explore how vision impacts olfaction and how vision and olfaction interact to impact flavor perception. Students are required to perform a series of multisensory tasks that focus on the interaction of multiple sensory inputs and their impact on flavor and scent perception. Additionally, students develop their own hypothesis as to which sensory modalities they believe will best assist them in correctly identifying the flavor of a candy: taste alone, taste paired with scent, or taste paired with vision. Together these experiments give students an appreciation for multisensory integration while also encouraging them to actively engage in the scientific method. They are then asked to hypothesize the possible outcome of one last experiment after collecting and assessing data from the prior tasks.


2020 · Vol 7
Author(s): Focko L. Higgen, Philipp Ruppel, Michael Görner, Matthias Kerzel, Norman Hendrich, ...

The quality of crossmodal perception hinges on two factors: the accuracy of the independent unimodal percepts and the ability to integrate information from different sensory systems. In humans, the ability for cognitively demanding crossmodal perception diminishes from young to old age. Here, we propose a new approach to investigating how much each of these factors contributes to crossmodal processing and its age-related decline by replicating a medical study on visuo-tactile crossmodal pattern discrimination using state-of-the-art tactile sensing technology and artificial neural networks (ANNs). We implemented two ANN models to focus specifically on the relevance of early integration of sensory information in the crossmodal processing stream, a mechanism proposed to underlie efficient processing in the human brain. Applying an adaptive staircase procedure, we approached comparable unimodal classification performance for both modalities in the human participants as well as in the ANNs. This allowed us to compare crossmodal performance between and within the systems, independent of the underlying unimodal processes. Our data show that the unimodal classification accuracies of the tactile sensing technology are comparable to those of humans. For crossmodal discrimination in the ANNs, integrating high-level unimodal features at earlier stages of the crossmodal processing stream yields higher accuracies than late integration of independent unimodal classifications. Compared to humans, the ANNs achieve higher accuracies than older participants in both the unimodal and the crossmodal conditions, but lower accuracies than younger participants in the crossmodal task. Taken together, we show that state-of-the-art tactile sensing technology can perform a complex tactile recognition task at levels comparable to humans. For crossmodal processing, human-inspired early sensory integration appears to improve the performance of artificial neural networks. Still, younger participants seem to employ more efficient crossmodal integration mechanisms than those modeled in the proposed ANNs. Our work demonstrates how collaborative research in neuroscience and embodied artificial neurocognitive modeling can help derive models that inform the design of future neurocomputational architectures.
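The contrast the abstract draws between early and late integration corresponds to the familiar distinction between feature-level (early) and decision-level (late) fusion in multimodal networks. The following minimal PyTorch sketch illustrates that distinction only in general terms; the module structure, dimensions, and names (VIS_DIM, TAC_DIM, EarlyFusionNet, LateFusionNet) are assumptions for illustration and are not the architectures used in the study.

# Illustrative sketch only: not the paper's actual models.
import torch
import torch.nn as nn

VIS_DIM, TAC_DIM, FEAT_DIM, N_CLASSES = 64, 64, 32, 4  # assumed sizes

def encoder(in_dim: int) -> nn.Module:
    """Small unimodal feature extractor (visual or tactile)."""
    return nn.Sequential(nn.Linear(in_dim, FEAT_DIM), nn.ReLU())

class EarlyFusionNet(nn.Module):
    """Fuses high-level unimodal features before a single joint classifier."""
    def __init__(self):
        super().__init__()
        self.vis, self.tac = encoder(VIS_DIM), encoder(TAC_DIM)
        self.head = nn.Linear(2 * FEAT_DIM, N_CLASSES)

    def forward(self, vis_x, tac_x):
        fused = torch.cat([self.vis(vis_x), self.tac(tac_x)], dim=-1)
        return self.head(fused)  # joint decision from fused features

class LateFusionNet(nn.Module):
    """Classifies each modality independently, then combines the outputs."""
    def __init__(self):
        super().__init__()
        self.vis = nn.Sequential(encoder(VIS_DIM), nn.Linear(FEAT_DIM, N_CLASSES))
        self.tac = nn.Sequential(encoder(TAC_DIM), nn.Linear(FEAT_DIM, N_CLASSES))

    def forward(self, vis_x, tac_x):
        # Average the two independent class scores (one simple late-fusion rule).
        return 0.5 * (self.vis(vis_x) + self.tac(tac_x))

vis_x, tac_x = torch.randn(8, VIS_DIM), torch.randn(8, TAC_DIM)
print(EarlyFusionNet()(vis_x, tac_x).shape)  # torch.Size([8, 4])
print(LateFusionNet()(vis_x, tac_x).shape)   # torch.Size([8, 4])

In this toy setup, only the early-fusion head can exploit correlations between the two feature vectors before a decision is made, which is the property the abstract attributes to integration at earlier stages of the processing stream.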


2019
Author(s): Focko L. Higgen, Philipp Ruppel, Michael Görner, Matthias Kerzel, Norman Hendrich, ...

The quality of crossmodal perception hinges on two factors: the accuracy of the independent unimodal perception and the ability to integrate information from different sensory systems. In humans, the ability for cognitively demanding crossmodal perception diminishes from young to old age.

To investigate to what degree impairments of these two abilities contribute to the age-related decline, and to evaluate how this might apply to artificial systems, we replicate a medical study on visuo-tactile crossmodal pattern discrimination using state-of-the-art tactile sensing technology and artificial neural networks. We explore the perception of each modality in isolation as well as their crossmodal integration.

We show that in an artificial system the integration of complex high-level unimodal features outperforms the comparison of independent unimodal classifications at low stimulus intensities, where errors frequently occur. Compared to humans, the artificial system outperforms older participants in both the unimodal and the crossmodal conditions. However, compared to younger participants, the artificial system performs worse at low stimulus intensities. Younger participants seem to employ more efficient crossmodal integration mechanisms than those modeled in the proposed artificial neural networks.

Our work creates a bridge between neurological research and embodied artificial neurocognitive systems and demonstrates how collaborative research can help derive hypotheses from the allied field. Our results indicate that empirically derived neurocognitive models can inform the design of future neurocomputational architectures. For crossmodal processing, sensory integration at lower hierarchical levels, as suggested for efficient processing in the human brain, seems to improve the performance of artificial neural networks.
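The journal version of this study (listed above) reports that an adaptive staircase procedure was used to bring unimodal performance to a comparable level before comparing crossmodal discrimination at low stimulus intensities. As a rough illustration of how such a procedure works, here is a minimal Python sketch of a generic 1-up/2-down staircase; the specific rule, step size, starting intensity, and simulated observer are assumptions chosen for illustration and are not taken from the paper.

# Illustrative sketch of a generic adaptive staircase (not the study's exact protocol).
import random

def run_staircase(p_correct_at, start=1.0, step=0.1, n_trials=60):
    """Adjust intensity: harder after 2 correct in a row, easier after 1 error."""
    intensity, correct_streak, history = start, 0, []
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(intensity)  # simulated response
        history.append((intensity, correct))
        if correct:
            correct_streak += 1
            if correct_streak == 2:           # 2 correct -> lower the intensity
                intensity = max(0.0, intensity - step)
                correct_streak = 0
        else:                                  # 1 error -> raise the intensity
            intensity = min(1.0, intensity + step)
            correct_streak = 0
    return history

# Toy observer: accuracy grows with intensity from chance (0.5) to near-perfect.
observer = lambda i: 0.5 + 0.5 * i
trials = run_staircase(observer)
print("final intensity:", round(trials[-1][0], 2))

A 1-up/2-down rule of this kind converges on roughly 70.7% correct, so running it separately per modality is one way to equate unimodal difficulty before a crossmodal comparison.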


2019 · Vol 79 (5-6) · pp. 3311-3331
Author(s): S. Malpica, A. Serrano, M. Allue, M. G. Bedia, B. Masia

2017 · Vol 46 (5) · pp. 749-760
Author(s): Maddalena Murari, Emery Schubert, Antonio Rodà, Osvaldo Da Pos, Giovanni De Poli

Can music be rated consistently using icon descriptors without verbal mediation? Sixty-eight participants rated six experimenter-selected and two self-selected pieces of music along 15 bipolar icon scales intended to portray emotions and sensory experiences involving color, temperature, shape, speed, texture, and weight. Several replications of earlier findings were observed, including Mozart being blue, Brahms being soft, and Bizet being takete (a jagged shape). Crossmodal associations with individual pieces were similar to those reported in previous studies, but self-selected liked and disliked pieces did not evoke as many such associations, leading to the conclusion that crossmodal perception may be indicative of music character more than of hedonic tone. The similarity of results between the present study and previous research suggests that icon scales may provide a convenient alternative to sensory scales in modalities that are difficult to reproduce with current conventional computer interface technology.


2016 · Vol 87 (2) · pp. 345-364
Author(s): Lakshmi Gogate, Madhavilatha Maganti
