multisensory cues
Recently Published Documents


TOTAL DOCUMENTS: 41 (FIVE YEARS: 14)
H-INDEX: 10 (FIVE YEARS: 2)

2021 · Vol 15 · Author(s): Klaudia Grechuta, Javier De La Torre Costa, Belén Rubio Ballester, Paul Verschure

The unique ability to identify one’s own body and experience it as one’s own is fundamental to goal-oriented behavior and survival. However, the mechanisms underlying so-called body ownership are not yet fully understood. Evidence from Rubber Hand Illusion (RHI) paradigms has demonstrated that body ownership is a product of the reception and integration of self- and externally generated multisensory information, feedforward and feedback processing of sensorimotor signals, and prior knowledge about the body. Crucially, however, these designs commonly involve the processing of proximal modalities, while the contribution of distal sensory signals to the experience of ownership remains elusive. Here we propose that, like any robust percept, body ownership depends on integration and prediction across all sensory modalities, including distal sensory signals pertaining to the environment. To test this hypothesis, we created an embodied goal-oriented Virtual Air Hockey Task in which participants had to hit a virtual puck into a goal. In two conditions, we manipulated the congruency of distal multisensory cues (auditory and visual) while keeping proximal and action-driven signals entirely predictable. Compared to a fully congruent condition, our results revealed a significant decrease in three dimensions of ownership evaluation when distal signals were incongruent: the subjective report as well as physiological and kinematic responses to an unexpected threat. Together, these findings support the notion that the way we represent our body is contingent upon all sensory stimuli, including distal and action-independent signals. The present data extend the current framework of body ownership and may also find applications in rehabilitation scenarios.


2021 · Vol 11 (8) · pp. 1067 · Author(s): Cosimo Tuena, Silvia Serino, Elisa Pedroli, Marco Stramba-Badiale, Giuseppe Riva, ...

Along with deficits in spatial cognition, a decline in the processing of body-related information is observed in aging and is thought to contribute to impairments in navigation, memory, and space perception. According to embodied cognition theories, bodily and environmental information play a crucial role in defining cognitive representations. Thanks to the possibility of involving body-related information, manipulating environmental stimuli, and adding multisensory cues, virtual reality is, given its embodied potential, one of the best candidates for spatial memory rehabilitation in aging. However, current virtual neurorehabilitation solutions for aging and neurodegenerative diseases are in their infancy. Here, we discuss three concepts that could be used to improve embodied representations of space with virtual reality. The virtual bodily representation is the combination of idiothetic information involved during virtual navigation via input/output devices; spatial affordances are environmental or symbolic elements used by the individual to act in the virtual environment; finally, the virtual enactment effect is the enhancement of spatial memory provided by actively (cognitively and/or bodily) interacting with the virtual space and its elements. Theoretical and empirical findings are presented to propose innovative rehabilitative solutions for spatial memory and navigation in aging.


2021 · pp. 1-22 · Author(s): Brandy Murovec, Julia Spaniol, Jennifer L. Campos, Behrang Keshavarz

Abstract A critical component of many immersive experiences in virtual reality (VR) is vection, defined as the illusion of self-motion. Traditionally, vection has been described as a visual phenomenon, but more recent research suggests that vection can be influenced by a variety of senses. The goal of the present study was to investigate the role of multisensory cues in vection by manipulating the availability of visual, auditory, and tactile stimuli in a VR setting. To this end, 24 adults (mean age = 25.04) were presented with a rotating stimulus designed to induce circular vection. All participants completed trials that included a single sensory cue, a combination of two cues, or all three cues presented together. The size of the field of view (FOV) was manipulated across four levels (no visuals, small, medium, full). Participants rated vection intensity and duration verbally after each trial. Results showed that all three sensory cues induced vection when presented in isolation, with visual cues eliciting the highest intensity and longest duration. The presence of auditory and tactile cues further increased vection intensity and duration compared to conditions in which these cues were absent. These findings support the idea that vection can be induced via multiple types of sensory input and intensified when several sensory inputs are combined.


2021 · pp. 109634802110303 · Author(s): Kexin Guo, Alei Fan, Xinran Lehto, Jonathon Day

Facilitated by emerging technologies, the immersive digital museum reflects disruptive innovation in today’s tourism experience and offers a multidimensional experience different from that of traditional museums. To better understand how visitors respond to this innovative form of digital tourism, the current research investigates visitor experiences at the digital museum with a threefold goal. First, it delineates a three-dimensional digital museum visitor experience comprising joviality, personal escapism, and localness. Building on this experiential framework, the research further shows that visual and auditory cues are the most powerful multisensory cue combination for enhancing a holistic visitor experience at the digital museum. The study also finds that emotional state and sense of presence mediate the relations between multisensory cues and visitors’ digital museum experiences. This research contributes to the conceptualization of the digital museum experience and provides a foundation for future research on this new generation of digital tourism.


Author(s): Annika L. Klaffehn, Florian B. Sellmann, Wladimir Kirsch, Wilfried Kunde, Roland Pfister

Abstract It has been proposed that statistical integration of multisensory cues may be a suitable framework to explain temporal binding, that is, the finding that causally related events such as an action and its effect are perceived as shifted towards each other in time. A multisensory approach to temporal binding construes actions and effects as individual sensory signals, each perceived with a specific temporal precision. When they are integrated into one multimodal event, such as an action-effect chain, the extent to which each signal affects the perception of that event depends on its relative reliability. We tested whether this assumption holds in a temporal binding task by manipulating the certainty of actions and effects. Two experiments suggest that a relatively uncertain sensory signal in such action-effect sequences is shifted more towards its counterpart than a relatively certain one. This was especially pronounced for temporal binding of the action towards its effect but was also shown for effect binding. Other conceptual approaches to temporal binding cannot easily explain these results, and the study therefore adds to the growing body of evidence endorsing a multisensory approach to temporal binding.
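The reliability-weighted account described in this abstract can be illustrated numerically. The sketch below uses invented time points and variances (not the paper's data) to show why, under precision-weighted fusion, the less reliable signal should be pulled further towards its counterpart:

```python
# Reliability-weighted (MLE-style) fusion of two temporal estimates.
# When an action and its effect are integrated into one event, each
# signal's perceived time is pulled toward the fused estimate in
# proportion to its uncertainty (variance). All numbers are illustrative.

def fused_estimate(t_action, var_action, t_effect, var_effect):
    """Precision-weighted average of the two perceived time points (ms)."""
    w_action = (1 / var_action) / (1 / var_action + 1 / var_effect)
    return w_action * t_action + (1 - w_action) * t_effect

# Hypothetical numbers: action perceived at 0 ms, effect at 250 ms.
# Certain action (variance 100) vs. uncertain effect (variance 400):
t = fused_estimate(0.0, 100.0, 250.0, 400.0)  # -> 50.0 ms
# The fused event time lies much closer to the more reliable signal
# (the action), so the uncertain effect is shifted more towards its
# counterpart -- the pattern the two experiments report.
```

Swapping the variances reverses the asymmetry: an uncertain action paired with a certain effect yields a fused estimate near the effect, i.e., stronger action binding.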


PLoS ONE · 2021 · Vol 16 (3) · pp. e0248225 · Author(s): Natalia Cooper, Ferdinando Millela, Iain Cant, Mark D. White, Georg Meyer

Virtual reality (VR) can create safe, cost-effective, and engaging learning environments. It is commonly assumed that improvements in simulation fidelity lead to better learning outcomes. Some aspects of real environments, for example vestibular or haptic cues, are difficult to recreate in VR, but VR offers a wealth of opportunities to provide additional sensory cues in arbitrary modalities that convey task-relevant information. The aim of this study was to investigate whether these cues improve user experience and learning outcomes and, specifically, whether learning with augmented sensory cues translates into performance improvements in real environments. Participants were randomly allocated to three matched groups: Group 1 (control) was asked to perform a real tyre change only. The remaining two groups were trained in VR before performance was evaluated on the same real tyre-change task. Group 2 was trained using a conventional VR system, while Group 3 was trained in VR with augmented, task-relevant, multisensory cues. Objective performance (time to completion and number of errors) and subjective ratings of presence, perceived workload, and discomfort were recorded. The results show that both VR training paradigms improved performance on the real task, and that providing additional task-relevant cues during VR training resulted in higher objective performance on the real task. We propose a novel method to quantify the relative performance gains between training paradigms by expressing the gain in terms of training time. Subjective ratings mirrored the objective performance measures, with comparable workload ratings, higher presence ratings, and lower discomfort ratings. These findings further support the use of augmented multisensory cues in VR environments as an efficient method to enhance performance, user experience and, critically, the transfer of training from virtual to real environments.


2020 · Vol 223 (21) · pp. jeb229245 · Author(s): Ke Deng, Qiao-Ling He, Ya Zhou, Bi-Cheng Zhu, Tong-Liang Wang, ...

Abstract There is increasing evidence that many anurans use multimodal cues to detect, discriminate and/or locate conspecifics and modify their behavior accordingly. To date, however, most studies have focused on the roles of multimodal cues in female choice or male–male interactions. In the present study, we conducted an experiment to investigate whether male serrate-legged small treefrogs (Kurixalus odontotarsus) use visual or chemical cues to detect females and thus alter their competition strategies in different calling contexts. Three acoustic stimuli (advertisement calls, aggressive calls and compound calls) were broadcast in randomized order, after a spontaneous period, to focal males in one of four treatment groups: combined visual and chemical cues of a female, chemical cues only, visual cues only, and a control (no female). We recorded the vocal responses of the focal males during each 3 min period. Our results demonstrate that males reduce their total number of calls in response to the presence of females, regardless of how they perceive them. In response to advertisement calls and compound calls, males that perceived females through chemical cues produced relatively fewer advertisement calls but more aggressive calls. They also produced relatively more aggressive calls during the playback of aggressive calls. Taken together, our study suggests that male K. odontotarsus adjust their competition strategies according to the visual or chemical cues of potential mates, and it highlights the important role of multisensory cues in male frogs’ perception of females.


2019 · Vol 32 (8) · pp. 771-796 · Author(s): Samuel Couth, Daniel Poole, Emma Gowen, Rebecca A. Champion, Paul A. Warren, ...

Abstract Multisensory integration typically follows the predictions of a statistically optimal model whereby the contribution of each sensory modality is weighted according to its reliability. Previous research has shown that multisensory integration is affected by ageing; however, it is less certain whether older adults follow this statistically optimal model. Additionally, previous studies have often presented multisensory cues that conflict in size, shape or location, yet naturally occurring multisensory cues are usually non-conflicting. The mechanisms of integration in older adults might therefore differ depending on whether the multisensory cues are consistent or conflicting. In the current experiment, young () and older () adults were asked to make judgements about the height of wooden blocks using visual, haptic or combined visual–haptic information. Dual-modality visual–haptic blocks could be presented as equal or conflicting in size. Young and older adults’ size-discrimination thresholds (i.e., precision) did not differ significantly for visual, haptic or visual–haptic cues. In addition, neither young nor older adults’ discrimination thresholds and points of subjective equality followed model predictions of optimal integration, for either conflicting or non-conflicting cues. Instead, there was considerable between-subject variability in how visual and haptic cues were processed when presented simultaneously. This finding has implications for the development of multisensory therapeutic aids and interventions to assist older adults with everyday activities, which should be tailored to the needs of each individual.
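The statistically optimal model this abstract tests makes concrete quantitative predictions. The following sketch computes them for invented unimodal thresholds (not the study's data): each cue is weighted by its inverse variance, and the predicted bimodal threshold is lower than either unimodal threshold.

```python
# Maximum-likelihood (reliability-weighted) cue integration.
# sigma_v and sigma_h are unimodal discrimination thresholds; the model
# predicts the cue weights and the combined visual-haptic threshold.
# The example numbers are illustrative, not taken from the study.

def optimal_combination(sigma_v, sigma_h):
    """Predicted visual weight, haptic weight, and bimodal threshold."""
    w_v = sigma_h**2 / (sigma_v**2 + sigma_h**2)  # visual weight
    w_h = 1.0 - w_v                               # haptic weight
    sigma_vh = ((sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2)) ** 0.5
    return w_v, w_h, sigma_vh

# Hypothetical thresholds (mm): vision twice as precise as haptics.
w_v, w_h, sigma_vh = optimal_combination(2.0, 4.0)
# -> w_v = 0.8, w_h = 0.2, sigma_vh ~= 1.79 mm
# Vision dominates, and the predicted bimodal threshold falls below
# both unimodal thresholds -- the benchmark against which the observed
# thresholds and points of subjective equality were compared.
```

A failure of optimal integration, as reported here, means the measured bimodal thresholds do not drop below the best unimodal threshold in the way `sigma_vh` predicts.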

