Emotional arousal in 2D versus 3D virtual reality environments

PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0256211
Author(s):  
Feng Tian ◽  
Minlei Hua ◽  
Wenrui Zhang ◽  
Yingjie Li ◽  
Xiaoli Yang

Previous studies have suggested that virtual reality (VR) can elicit emotions in different visual modes using 2D or 3D headsets. However, the effects of these two visual modes on emotional arousal have not been comprehensively investigated, and the underlying neural mechanisms are not yet clear. This paper presents a cognitive psychological experiment conducted to analyze how the two visual modes affect emotional arousal. Forty volunteers were recruited and randomly assigned to two groups. They watched a series of positive, neutral, and negative short VR videos in 2D and 3D. Multichannel electroencephalography (EEG) and skin conductance responses (SCRs) were recorded simultaneously during participation. The results indicated that emotional stimulation was more intense in the 3D environment owing to improved perception of the environment: greater emotional arousal was generated, and higher beta-band (21–30 Hz) EEG power was identified in 3D than in 2D. We also found that both hemispheres were involved in stereo-vision processing and that brain lateralization was present during that processing.
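The beta-band (21–30 Hz) power comparison reported above can be illustrated with a minimal band-power computation. This is a generic sketch on synthetic data; the sampling rate, signals, and periodogram method are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` in the [low, high] Hz band via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Synthetic example: a 22 Hz oscillation (inside the 21-30 Hz beta band)
# plus noise should show higher beta power than noise alone.
fs = 250                             # assumed sampling rate in Hz
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
beta_signal = np.sin(2 * np.pi * 22 * t) + 0.5 * rng.standard_normal(len(t))
noise_only = 0.5 * rng.standard_normal(len(t))

print(band_power(beta_signal, fs, 21, 30) > band_power(noise_only, fs, 21, 30))  # True
```

In a real EEG study this comparison would be run per channel and per condition (2D vs. 3D) before statistical testing.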

2020 ◽  
pp. 109634802094443
Author(s):  
Marcel Bastiaansen ◽  
Monique Oosterholt ◽  
Ondrej Mitas ◽  
Danny Han ◽  
Xander Lub

Emotions are crucial ingredients of meaningful and memorable tourism experiences. Research methods borrowed from experimental psychology are prime candidates for quantifying emotions while experiences are unfolding. The present article empirically evaluates the methodological feasibility and usefulness of ambulatory recordings of skin conductance responses (SCRs) during a tourism experience. We recorded SCRs from participants while they experienced a roller-coaster ride with or without a virtual reality (VR) headset. We identified ride elements related to physical aspects (such as acceleration and braking), to events in the VR environment, and to the physical theming of the roller coaster. VR rides were evaluated more positively than normal rides, and the SCR time series were meaningfully related to the different ride elements; however, SCR signals did not significantly predict overall evaluations of the ride. We conclude that psychophysiological measurements open a new avenue for understanding how hospitality, tourism, and leisure experiences dynamically develop over time.
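As a rough illustration of relating an SCR time series to ride elements, the sketch below measures a peak-minus-baseline response in a window after a hypothetical event onset. The timings, window, and tonic level are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

def event_scr_amplitude(scr, fs, onsets_s, window_s=(1.0, 5.0)):
    """Peak-minus-baseline SCR amplitude in a window after each event onset.

    scr: 1-D skin-conductance trace (microsiemens), fs: sampling rate (Hz),
    onsets_s: event onset times in seconds, window_s: (start, end) after onset.
    """
    amplitudes = []
    for onset in onsets_s:
        start = int((onset + window_s[0]) * fs)
        end = int((onset + window_s[1]) * fs)
        baseline = scr[int(onset * fs)]
        amplitudes.append(scr[start:end].max() - baseline)
    return amplitudes

# Synthetic trace: flat tonic level with a phasic bump 2 s after a
# hypothetical "drop" element at t = 10 s.
fs = 10
scr = np.full(30 * fs, 2.0)
scr[12 * fs:14 * fs] += 0.8  # phasic response
amps = event_scr_amplitude(scr, fs, onsets_s=[10.0])
print(round(amps[0], 2))  # 0.8
```

Comparing such event-locked amplitudes across ride elements is one simple way to quantify how arousal tracks the unfolding experience.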


Author(s):  
Muthukkumar S. Kadavasal ◽  
Abhishek Seth ◽  
James H. Oliver

A multimodal teleoperation interface is introduced, featuring an integrated virtual-reality-based simulation augmented by sensors and image-processing capabilities on board the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multimodal control interface. Virtual reality addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. Because the vehicle and the operator each assume full autonomy in stages, the operation is referred to as mixed autonomous. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle, effectively balancing autonomy between the human operator and on-board vehicle intelligence. The stereo-vision-based obstacle avoidance system was initially implemented on a video-based teleoperation architecture, and experimental results are presented. The VR-based multimodal teleoperation interface is expected to be more adaptable and intuitive than other interfaces.
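The mixed-autonomy handover described above (temporary autonomous obstacle avoidance followed by re-establishing teleoperation) can be sketched as a simple two-state controller. The state names and risk threshold below are illustrative assumptions, not the paper's implementation.

```python
from enum import Enum

class Mode(Enum):
    TELEOP = "teleoperated"
    AUTONOMOUS = "autonomous"

class MixedAutonomyController:
    """Toggle between teleoperation and local autonomy based on obstacle risk."""

    def __init__(self, risk_threshold=0.7):
        self.risk_threshold = risk_threshold
        self.mode = Mode.TELEOP

    def update(self, obstacle_risk):
        """obstacle_risk in [0, 1], e.g. derived from stereo-vision obstacle detection."""
        if self.mode is Mode.TELEOP and obstacle_risk >= self.risk_threshold:
            self.mode = Mode.AUTONOMOUS   # take over for local obstacle avoidance
        elif self.mode is Mode.AUTONOMOUS and obstacle_risk < self.risk_threshold:
            self.mode = Mode.TELEOP       # re-establish operator control
        return self.mode

ctrl = MixedAutonomyController()
print(ctrl.update(0.9).value)  # autonomous
print(ctrl.update(0.2).value)  # teleoperated
```

In the actual system this switching decision would additionally consult the a priori risk maps and tracked vehicle state rather than a single scalar risk value.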


Author(s):  
Tamer M. Wasfy

LEA (Learning Environments Agent) is a web-based software system for advanced multimedia and virtual-reality education and training. LEA consists of three fully integrated components: (1) an unstructured knowledge-base engine for lecture delivery; (2) a structured hierarchical process knowledge-base engine for step-by-step process training; and (3) a hierarchical rule-based expert system for natural-language understanding. In addition, LEA interfaces with components that provide the following capabilities: 3D near-photorealistic interactive virtual environments; 2D animated multimedia; near-natural synthesized text-to-speech; speech recognition; near-photorealistic animated virtual humans that act as instructors and assistants; and socket-based network communication. LEA provides the following education and training functions: multimedia lecture delivery, virtual-reality-based step-by-step process training, and testing. It can deliver compelling multimedia lectures and content in science fields (such as engineering, physics, math, and chemistry) that include synchronized animated 2D and 3D graphics, speech, and written/highlighted text, and it can deliver step-by-step process training in a compelling near-photorealistic 3D virtual environment. In this paper the LEA system is presented along with typical educational and training applications.
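A rule-based natural-language component like LEA's third engine can be caricatured as keyword-rule matching. The rules, phrasings, and action names below are entirely hypothetical; the actual LEA expert system is hierarchical and far richer.

```python
# Toy keyword-rule matcher in the spirit of a rule-based NLU component.
# Each rule maps a required keyword set to an action name (all hypothetical).
RULES = [
    ({"show", "lecture"}, "deliver_lecture"),
    ({"start", "training"}, "start_process_training"),
    ({"take", "test"}, "start_test"),
]

def match_command(utterance):
    """Return the first action whose keyword set is contained in the utterance."""
    words = set(utterance.lower().split())
    for keywords, action in RULES:
        if keywords <= words:
            return action
    return "clarify_request"

print(match_command("Please show me the physics lecture"))   # deliver_lecture
print(match_command("start the assembly training"))          # start_process_training
print(match_command("hello there"))                          # clarify_request
```

A hierarchical system would organize such rules into nested contexts (e.g. per lecture or per process step) so that the same utterance can resolve differently depending on the training state.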


Author(s):  
Manuel Jesus Domínguez-Morales ◽  
Elena Cerezuela-Escudero ◽  
Fernando Perez-Peña ◽  
Angel Jimenez-Fernandez ◽  
Alejandro Linares-Barranco ◽  
...  

1999 ◽  
Vol 19 (4) ◽  
pp. 10-13 ◽  
Author(s):  
P. Cohen ◽  
D. McGee ◽  
S. Oviatt ◽  
L. Wu ◽  
J. Clow ◽  
...  
