Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality

2018 ◽ Vol 31 (7) ◽ pp. 689-713
Author(s):  
Hudson Diggs Bailey ◽  
Aidan B. Mullaney ◽  
Kyla D. Gibney ◽  
Leslie Dowell Kwakye

Abstract: We are continually bombarded by information arriving at each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration as measured by the redundant signals effect (RSE) is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets which varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications for the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
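For readers unfamiliar with the RSE, it is typically quantified by comparing redundant-target (audiovisual) reaction times against Miller's race model inequality, F_AV(t) ≤ F_A(t) + F_V(t); violations of the bound indicate integration beyond mere statistical facilitation. The sketch below is a generic illustration of that test, not the authors' analysis code; all function names and parameters are illustrative.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, n_points=100):
    """Return the time points at which the audiovisual CDF exceeds the
    race-model bound F_A(t) + F_V(t) (Miller, 1982), i.e., where
    redundant-target responses are faster than any race of the two
    unisensory channels could produce."""
    t = np.linspace(min(map(np.min, (rt_a, rt_v, rt_av))),
                    max(map(np.max, (rt_a, rt_v, rt_av))), n_points)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return t[ecdf(rt_av, t) > bound]
```

With simulated reaction times in milliseconds, a set of audiovisual RTs far faster than either unisensory distribution produces violations in the early part of the CDF, while identical distributions produce none.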

2019
Author(s):  
David A. Tovar ◽  
Micah M. Murray ◽  
Mark T. Wallace

Abstract: Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, owing to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.
Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects that were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
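Representational similarity analysis, in its generic form, compares the geometry of neural response patterns across conditions by building representational dissimilarity matrices (RDMs) and correlating their upper triangles. The sketch below illustrates that general recipe only; it is not the authors' pipeline, and the correlation-distance and rank-correlation choices are common defaults rather than details taken from the study.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between the response patterns of every pair of
    conditions. patterns: (n_conditions, n_features) array."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman (rank) correlation between the upper triangles of two
    RDMs: the standard second-order comparison in RSA."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rank = lambda x: np.argsort(np.argsort(x))
    return np.corrcoef(rank(rdm_a[iu]), rank(rdm_b[iu]))[0, 1]
```

An RDM built this way has a zero diagonal (each condition is identical to itself), and comparing an RDM with itself yields a rank correlation of 1.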


2019 ◽ Vol 121 (4) ◽ pp. 1398-1409
Author(s):  
Vonne van Polanen ◽  
Robert Tibold ◽  
Atsuo Nuruki ◽  
Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants’ movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight how multisensory signals are organized online for controlling action and perception.
NEW & NOTEWORTHY: Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and determine their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.


2019 ◽ Vol 21 (8) ◽ pp. 1734-1749
Author(s):  
Kaylee Payne Kruzan ◽  
Andrea Stevenson Won

How the body is perceived through media is key to many well-being interventions. Researchers have examined the effects of platforms on users’ self-perceptions, including immersive virtual reality, nonimmersive virtual worlds, and social media such as Facebook. In this article, we use several conceptions of levels of embodiment to compare empirical work on the effects of virtual reality and social media as they relate to perceptions and conceptions of the self and body. We encourage social media researchers to utilize research on embodiment in virtual reality to help frame the effects of social media use on well-being. Similarly, researchers in immersive media should consider the opportunities and risks that may arise as embodied experiences become more social. We conclude our discussion with implications for future applications in mental health.


Author(s):  
Giordano Márcio Gatinho Bonuzzi ◽  
Tatiana Beline de Freitas ◽  
Gisele Carla dos Santos Palma ◽  
Marcos Antonio Arlindo Soares ◽  
Belinda Lange ◽  
...  

2021 ◽ Vol 2
Author(s):  
Joakim Vindenes ◽  
Barbara Wasson

Virtual Reality (VR) is a remarkably flexible technology for interventions as it allows the construction of virtual worlds with ontologies radically different from the real world. By embodying users in avatars situated in these virtual environments, researchers can effectively intervene and instill positive change in the form of therapy or education, as well as effect a variety of cognitive changes. Due to the capabilities of VR to mediate both the environments in which we are immersed, as well as our embodied, situated relation toward those environments, VR has become a powerful technology for “changing the self.” As the virtually mediated experience is what renders these interventions effective, frameworks are needed for describing and analyzing the mediations brought by various virtual world designs. As a step toward a broader understanding of how VR mediates experience, we propose a post-phenomenological framework for describing VR mediation. Postphenomenology is an empirically oriented philosophy of technology that understands technologies as mediators of human-world relationships. By addressing how mediations occur within VR as a user-environment relation and outside VR as a human-world relation, the framework addresses the various constituents of the virtually mediated experience. We demonstrate the framework's capability for describing VR mediations by presenting the results of an analysis of a selected variety of studies that use various user-environment relations to mediate various human-world relations.


2012 ◽ Vol 3 (1) ◽ pp. 13-18
Author(s):  
James Mayrose

Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and has been widely adopted by the military, industry, and academia. The current study examined the effectiveness of 3D interactive environments for learning, engagement, and preference. A total of 180 students took part in the study, in which significant results were found regarding preference for this new technology over standard educational practices. Students were more motivated when using the immersive environment than with traditional methods, which may translate into greater learning and retention. Larger studies will need to be performed in order to quantify the benefits of this new, cutting-edge technology as it relates to understanding and retention of educational content.


2012 ◽ Vol 2 (3)
Author(s):  
Branislav Sobota ◽  
Štefan Korečko ◽  
František Hrozek

Abstract: The paper deals with the design, development, and implementation of a fully immersive virtual reality (VR) system and the corresponding virtual worlds, specified in an object-oriented fashion. A virtual world object structure, reflecting a division of the VR system into subsystems with respect to the affected senses, is introduced. The paper also discusses the virtual world building process, which utilizes the software development technique of stepwise refinement, and the possibilities of parallel processing in VR systems. The final part describes a VR system that has been implemented at the authors' home institution according to some of the ideas presented here.


2010 ◽ Vol 1 (1) ◽ pp. 39
Author(s):  
Luis A. Hernández Ibáñez

Galicia Dixital is an exhibition located in Santiago de Compostela with the mission of showing the culture and heritage of this region through the use of new audiovisual technologies, while also demonstrating the use and applications of avant-garde technology. This paper describes some of the installations present there, with special emphasis on The Empty Museum, a fully immersive virtual reality installation in which the user can physically walk through virtual worlds. Several examples of content designed for this medium are also described.


2018 ◽ Vol 33 (80) ◽ pp. 9-27
Author(s):  
Anders Engberg-Pedersen

Anders Engberg-Pedersen: “Serious Games. Harun Farocki and Military Aesthetics”. This article charts the emergence of a military-aesthetic regime in the twenty-first century. It shows how the US military has co-opted and militarized the field of aesthetics through the development of virtual worlds that train, prepare, and process military engagements. Using the German artist Harun Farocki’s installation Serious Games as a prism for this development, the essay charts the collaborations between military institutions, academics, and the creative industries. The key question is: what happens to the notion of “war experience” in the age of immersive virtual reality technologies? To find plausible answers, the article situates military aesthetics along a historical axis, with the emergence of the modern wargame around 1800, and along a theoretical axis, by drawing on key thinkers in philosophical aesthetics (Baumgarten, Dewey, Rancière).


2019
Author(s):  
Jonathan W. P. Kuziek ◽  
Abdel R. Tayem ◽  
Jennifer I. Burrell ◽  
Eden X. Redman ◽  
Jeff Murray ◽  
...  

Electroencephalography (EEG) research is typically conducted in controlled laboratory settings, which limits its generalizability to real-world situations. Virtual reality (VR) serves as a transitional tool that provides tight experimental control with more realistic stimuli. To test the validity of using VR for event-related potential (ERP) research, we used a well-established paradigm, the oddball task. In our first study, we compared VR to traditional, monitor-based stimulus presentation using visual and auditory oddball tasks while EEG data were recorded. We were able to measure the ERP waveforms typically associated with such oddball tasks, namely the P3 and earlier N2 components, in both conditions. Our results suggest that ERPs collected using VR head-mounted displays and typical monitors were comparable on measures of latency, amplitude, and spectral composition. In a second study, we implemented a novel depth-based oddball task and were able to measure the typical oddball-related ERPs elicited by the presentation of near and far stimuli. Interestingly, we observed significant differences in early ERP components between near and far stimuli, even after controlling for the effects of the oddball task. The current results suggest that VR can serve as a valid means of stimulus presentation in novel or otherwise inaccessible environments for EEG experimentation. We demonstrated the capability of a depth-based oddball task to reliably elicit a P3 waveform. We also found an interaction between the depth at which objects are presented and early ERP responses. Further research is warranted to better explain this influence of depth on EEG and ERP activity.
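In generic terms, measuring oddball ERPs like the P3 amounts to cutting fixed windows around stimulus onsets, baseline-correcting each epoch, averaging within condition, and taking the mean amplitude in a latency window. The sketch below illustrates only that standard recipe on a single channel; the sampling rate, window bounds, and 300 to 500 ms P3 window are assumed defaults, not parameters from this study.

```python
import numpy as np

def epoch(eeg, events, sfreq, tmin=-0.2, tmax=0.8):
    """Cut fixed windows around event samples from a 1-D EEG channel
    and baseline-correct each epoch to its pre-stimulus interval."""
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[s - pre:s + post] for s in events])
    return epochs - epochs[:, :pre].mean(axis=1, keepdims=True)

def p3_amplitude(erp, sfreq, tmin=-0.2, window=(0.3, 0.5)):
    """Mean amplitude of an averaged ERP in the (assumed) P3 window."""
    i0 = int((window[0] - tmin) * sfreq)
    i1 = int((window[1] - tmin) * sfreq)
    return erp[i0:i1].mean()
```

On simulated data in which a positive deflection is added after "oddball" onsets only, the condition-averaged oddball ERP shows a larger mean amplitude in the P3 window than the standard ERP, mirroring the classic oddball effect.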

