The systematic evaluation of an embodied control interface for virtual reality

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0259977
Author(s):  
Kenan Bektaş ◽  
Tyler Thrash ◽  
Mark A. van Raai ◽  
Patrik Künzler ◽  
Richard Hahnloser

Embodied interfaces are promising for virtual reality (VR) because they can improve immersion and reduce simulator sickness compared to more traditional handheld interfaces (e.g., gamepads). We present a novel embodied interface called the Limbic Chair. The chair is composed of two separate shells that allow the user’s legs to move independently while sitting. We demonstrate the suitability of the Limbic Chair in two VR scenarios: city navigation and flight simulation. We compare the Limbic Chair to a gamepad using performance measures (i.e., time and accuracy), head movements, body sway, and standard questionnaires for measuring presence, usability, workload, and simulator sickness. In the city navigation scenario, the gamepad was associated with better presence, usability, and workload scores. In the flight simulation scenario, the chair was associated with less body sway (i.e., less simulator sickness) and fewer head movements but also slower performance and higher workload. In all other comparisons, the Limbic Chair and gamepad were similar, showing the promise of the Chair for replacing some control functions traditionally executed using handheld devices.
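Body sway here serves as a proxy for simulator sickness. However the sway signal was captured (e.g., a head or torso position trace), a common way to quantify it is by sway path length, mean sway velocity, and RMS excursion of a horizontal position trace. Below is a minimal Python sketch under those assumptions; the 90 Hz rate, array shapes, and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def body_sway_metrics(sway_xy: np.ndarray, fs: float = 90.0) -> dict:
    """Summarize postural sway from an (N, 2) array of horizontal
    positions in metres, sampled at fs Hz (illustrative)."""
    centered = sway_xy - sway_xy.mean(axis=0)          # remove mean posture
    step = np.diff(centered, axis=0)                   # frame-to-frame displacement
    path_length = np.linalg.norm(step, axis=1).sum()   # total sway path (m)
    rms = np.sqrt((centered ** 2).sum(axis=1).mean())  # RMS distance from centroid
    return {"path_length_m": path_length,
            "mean_velocity_m_s": path_length / (len(sway_xy) / fs),
            "rms_m": rms}

# e.g., 30 s of synthetic drift at 90 Hz
demo = np.cumsum(np.random.default_rng(0).normal(0, 1e-3, (2700, 2)), axis=0)
print(body_sway_metrics(demo))
```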

Author(s):  
Moshe M. H. Aharoni ◽  
Anat V. Lubetzky ◽  
Liraz Arie ◽  
Tal Krasovsky

Background: Persistent postural-perceptual dizziness (PPPD) is a condition characterized by chronic subjective dizziness and exacerbated by visual stimuli or upright movement. Typical balance tests do not replicate the environments known to increase symptoms in people with PPPD—crowded places with moving objects. Using a virtual reality system, we quantified dynamic balance in people with PPPD and healthy controls in diverse visual conditions.

Methods: Twenty-two individuals with PPPD and 29 controls performed a square-shaped fast walking task (Four-Square Step Test Virtual Reality—FSST-VR) using a head-mounted display (HTC Vive) under 3 visual conditions (empty train platform; people moving; people and trains moving). Head kinematics were used to measure task duration, movement smoothness, and anterior–posterior (AP) and medio-lateral (ML) ranges of movement (ROM). Heart rate (HR) was monitored using a chest band. Participants also completed a functional mobility test (Timed-Up-and-Go; TUG) and questionnaires measuring anxiety (State-Trait Anxiety Inventory; STAI), balance confidence (Activities-Specific Balance Confidence; ABC), perceived disability (Dizziness Handicap Inventory), and simulator sickness (Simulator Sickness Questionnaire). Main effects of visual load and group, and associations between performance, functional, and self-reported outcomes were examined.

Results: State anxiety and simulator sickness did not increase following testing. AP-ROM and HR increased with high visual load in both groups (p < 0.05). There were no significant between-group differences in head kinematics. In the high visual load conditions, high trait anxiety and longer TUG duration were moderately associated with reduced AP- and ML-ROM in the PPPD group, and low ABC and high perceived disability were associated with reduced AP-ROM (|r| = 0.47 to 0.53; p < 0.05). In contrast, in controls, high STAI-trait, low ABC, and longer TUG duration were associated with increased AP-ROM (|r| = 0.38 to 0.46; p < 0.05), and longer TUG duration was associated with increased ML-ROM (r = 0.53, p < 0.01).

Conclusions: FSST-VR may shed light on movement strategies in PPPD beyond task duration. While no main effect of group was observed, the distinct associations with self-reported and functional outcomes, identified using spatial head kinematics, suggest that some people with PPPD reduce head degrees of freedom when performing a dynamic balance task. This supports a potential link between spatial perception and PPPD symptomatology.
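For readers unfamiliar with these head-kinematic outcomes, the sketch below derives AP/ML ROM and one common smoothness proxy (negative log dimensionless jerk) from a head-position trace. The axis convention, sampling rate, and choice of smoothness metric are assumptions for illustration; the study's exact processing may differ.

```python
import numpy as np

def head_kinematics(head_pos: np.ndarray, fs: float = 90.0) -> dict:
    """Summarize an (N, 3) head-position trace in metres
    (assumed axes: x = ML, y = vertical, z = AP), sampled at fs Hz.

    ROM is the peak-to-peak excursion per axis; smoothness is the
    negative log dimensionless jerk (larger = smoother).
    """
    dt = 1.0 / fs
    duration = (len(head_pos) - 1) * dt
    ml_rom = np.ptp(head_pos[:, 0])   # medio-lateral range of movement (m)
    ap_rom = np.ptp(head_pos[:, 2])   # anterior-posterior range of movement (m)
    vel = np.gradient(head_pos, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    path = np.linalg.norm(np.diff(head_pos, axis=0), axis=1).sum()
    jerk_integral = (jerk ** 2).sum(axis=1).sum() * dt   # integral of squared jerk
    dlj = (duration ** 5 / path ** 2) * jerk_integral    # dimensionless jerk
    return {"duration_s": duration, "AP_ROM_m": ap_rom, "ML_ROM_m": ml_rom,
            "smoothness_ldj": -np.log(dlj)}

# e.g., 10 s of synthetic head motion at 90 Hz
demo = np.cumsum(np.random.default_rng(0).normal(0, 1e-3, (900, 3)), axis=0)
print(head_kinematics(demo))
```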


2021 ◽  
Author(s):  
Valentin Holzwarth ◽  
Johannes Schneider ◽  
Joshua Handali ◽  
Joy Gisler ◽  
Christian Hirt ◽  
...  

Inferring users’ perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users’ affective states before and after exposure to a VE, based on standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration, i.e., a user’s affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users’ affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing from nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users’ affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, for assessing a user’s affective state by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user’s affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Further, manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods for assessing user perception of VEs.
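To make the head-yaw measure concrete, here is a minimal sketch that reduces each participant's yaw trace to a single activity feature and correlates it with a presence rating; the feature definition, 90 Hz rate, and the synthetic stand-in data are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def yaw_activity(yaw_deg: np.ndarray, fs: float = 90.0) -> float:
    """Mean unsigned yaw rotation (rad/s) from a sampled head-yaw trace
    in degrees; unwrapping avoids artificial 359-to-0 jumps."""
    unwrapped = np.unwrap(np.deg2rad(yaw_deg))
    return float(np.abs(np.diff(unwrapped)).sum() / (len(yaw_deg) / fs))

# Illustrative stand-in data: 42 participants, 60 s of yaw at 90 Hz each,
# plus a 7-point presence rating (the study used real traces and questionnaires).
rng = np.random.default_rng(0)
traces = [np.cumsum(rng.normal(0, s, 60 * 90)) for s in rng.uniform(0.01, 0.2, 42)]
presence = rng.integers(1, 8, 42).astype(float)

features = np.array([yaw_activity(t) for t in traces])
r, p = pearsonr(features, presence)   # strength of the yaw-presence association
print(f"r = {r:.2f}, p = {p:.3f}")
```

The same feature vector could feed a simple regressor whose leave-one-out error is compared against a naive mean-rating baseline, mirroring the prediction task described above.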


2021 ◽  
Vol 13 (9) ◽  
pp. 4716
Author(s):  
Moustafa M. Nasralla

To be sustainable, rehabilitation systems should account for common problems of IoT devices, such as low battery, connection loss, and hardware damage. They should be able to rapidly detect any such problem and warn users about failures without interrupting rehabilitation services. A novel methodology is presented to guide the design and development of sustainable rehabilitation systems, focusing on communication and networking among IoT devices in rehabilitation systems within virtual smart cities and using time series analysis to identify malfunctioning IoT devices. The approach is illustrated in a realistic rehabilitation simulation scenario in a virtual smart city, using machine learning on time series to identify and anticipate failures in support of sustainability.
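As a rough illustration of the time-series idea, the sketch below flags a malfunctioning device when its telemetry (here, a link-latency series) drifts beyond a rolling z-score threshold; the detector, threshold, and data are invented for illustration, not the paper's model.

```python
import numpy as np

def flag_anomalies(signal: np.ndarray, window: int = 30, z_thresh: float = 3.0):
    """Return a boolean mask of samples whose rolling z-score exceeds
    z_thresh; `signal` is one device's telemetry at a fixed interval."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        hist = signal[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(signal[i] - mu) / sigma > z_thresh:
            flags[i] = True          # candidate failure: warn without stopping
    return flags

# Illustrative telemetry: stable latency, then a failure-like jump.
rng = np.random.default_rng(1)
latency_ms = rng.normal(40, 3, 300)
latency_ms[220:] += 60               # device degrades near the end
print(np.flatnonzero(flag_anomalies(latency_ms))[:5])
```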


2015 ◽  
Vol 24 (4) ◽  
pp. 298-321 ◽  
Author(s):  
Ernesto de la Rubia ◽  
Antonio Diaz-Estrella

Virtual reality has become a promising field in recent decades, and its potential now seems clearer than ever. With the development of handheld devices and wireless technologies, interest in virtual reality is also increasing. Therefore, there is an accompanying interest in inertial sensors, which offer such advantages as small size and low cost. Such sensors can also operate wirelessly and be used in an increasing number of interactive applications. An example related to virtual reality is the ability to move naturally through virtual environments. This is the objective of the real-walking navigation technique, for which a number of advantages have previously been reported in terms of presence, object searching, and collision avoidance, among other concerns. In this article, we address the use of foot-mounted inertial sensors to achieve real-walking navigation in a wireless virtual reality system. First, an overall description of the problem is presented. Then, specific difficulties are identified, and a corresponding technique is proposed to overcome each: tracking of foot movements; determination of the user’s position; percentage estimation of the gait cycle, including oscillating movements of the head; stabilization of the velocity of the point of view; and synchronization of head and body yaw angles. Finally, a preliminary evaluation of the system was conducted, in which data and comments from participants were collected.
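Determining the user's position from a foot-mounted inertial sensor typically relies on zero-velocity updates (ZUPTs): acceleration is integrated during the swing phase, and velocity is reset whenever the foot is detected as stationary, which bounds integration drift. The sketch below shows only that core loop, assuming gravity-compensated world-frame acceleration and a gyro-magnitude stance detector; a complete tracker like the one described would also estimate orientation and fuse the updates in a filter.

```python
import numpy as np

def zupt_positions(acc_world: np.ndarray, gyro: np.ndarray,
                   fs: float = 200.0, gyro_thresh: float = 0.6) -> np.ndarray:
    """Minimal ZUPT dead reckoning for a foot-mounted IMU.

    acc_world : (N, 3) gravity-compensated acceleration, world frame (m/s^2)
    gyro      : (N, 3) angular rate (rad/s); low magnitude marks stance
    Returns (N, 3) position estimates.
    """
    dt = 1.0 / fs
    vel = np.zeros(3)
    pos = np.zeros((len(acc_world), 3))
    for i in range(1, len(acc_world)):
        vel = vel + acc_world[i] * dt                # integrate acceleration
        if np.linalg.norm(gyro[i]) < gyro_thresh:    # foot flat on the ground
            vel = np.zeros(3)                        # zero-velocity update
        pos[i] = pos[i - 1] + vel * dt               # integrate velocity
    return pos

# Sanity check: zero input motion keeps the foot at the origin.
n = 1000
print(zupt_positions(np.zeros((n, 3)), np.zeros((n, 3)))[-1])
```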


2017 ◽  
Vol 53 (Supplement 2) ◽  
pp. S442-S445
Author(s):  
Bingcheng Wang ◽  
Pei-Luen Patrick Rau ◽  
Lili Dong

2019 ◽  
Vol 25 (9) ◽  
pp. 859-861 ◽  
Author(s):  
Greg M. Reger ◽  
Derek Smolenski ◽  
Amanda Edwards-Stewart ◽  
Nancy A. Skopp ◽  
Albert “Skip” Rizzo ◽  
...  

Author(s):  
Muthukkumar S. Kadavasal ◽  
Abhishek Seth ◽  
James H. Oliver

A multimodal teleoperation interface is introduced, featuring an integrated virtual-reality-based simulation augmented by sensors and image processing capabilities on board the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with live video feed and prediction states, thereby creating a multimodal control interface. Virtual reality addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, thereby allowing the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle’s teleoperated state. As the vehicle and the operator each hold full autonomy in stages, the operation is referred to as mixed autonomous. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The system effectively balances autonomy between the human and on-board vehicle intelligence. The stereo-vision-based obstacle avoidance system was initially implemented on a video-based teleoperation architecture, and experimental results are presented. The VR-based multimodal teleoperation interface is expected to be more adaptable and intuitive when compared to other interfaces.
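The prediction states that keep navigation continuous despite signal lag can be pictured as dead reckoning the last received vehicle state forward by the measured latency, and rendering the VR view at that predicted pose. A minimal sketch, assuming a constant-speed, constant-yaw-rate motion model (the model and parameters are illustrative, not the paper's predictor):

```python
import math

def predict_pose(x: float, y: float, heading: float, speed: float,
                 yaw_rate: float, latency_s: float, step: float = 0.02):
    """Dead-reckon a ground vehicle's pose forward by the link latency,
    so the virtual view is drawn at the predicted rather than stale state."""
    t = 0.0
    while t < latency_s:
        x += speed * math.cos(heading) * step
        y += speed * math.sin(heading) * step
        heading += yaw_rate * step
        t += step
    return x, y, heading

# e.g., 400 ms of lag at 2 m/s while turning at 0.3 rad/s
print(predict_pose(0.0, 0.0, 0.0, 2.0, 0.3, 0.4))
```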


Author(s):  
Zoltán Rusák ◽  
Csaba Antonya ◽  
Wilfred van der Vegte ◽  
Imre Horváth ◽  
Edit Varga

Customer evaluation of concepts plays an important role in the design of handheld devices, such as bottles of shower gels and shampoos, where the phenomenon of grasping needs to be evaluated. In these applications, important information about ergonomics and user behavior can be gathered from computer simulation. It is our ultimate goal to develop an environment in which users and designers can freely interact with product concepts. In our approach to grasping simulation, there is no tactile feedback and we do not measure the exerted grasping forces. There is no wiring of the human hand, and the users are not limited in their movements. We measure the motion of the human hand, compute the grasping forces based on anthropometric data, and simulate the reaction of product concepts in a physically based virtual reality environment. Our contribution consists of: (i) a method that takes into account the anatomy of the human hand in order to determine the maximum grasping forces, and (ii) an approach that enables control of the grasping forces based on (a) the penetration of the virtual human hand into the virtual model of the product concept, (b) the grasping posture, and (c) the joint angles. The paper reports on the framework of our approach and presents an application.
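The penetration-based force control in (ii)(a) can be pictured as a penalty-style contact model: the deeper the virtual finger sinks into the product model, the larger the computed grasp force, clamped by the anatomy-derived per-finger maximum from (i). A minimal sketch under those assumptions (the stiffness and cap values are invented for illustration; the paper's method additionally weighs grasp posture and joint angles):

```python
def contact_force(penetration_m: float, stiffness_n_per_m: float,
                  max_force_n: float) -> float:
    """Penalty-style grasp force: proportional to penetration depth,
    clamped by an anatomically derived maximum (illustrative only)."""
    force = stiffness_n_per_m * max(penetration_m, 0.0)
    return min(force, max_force_n)

# e.g., 4 mm penetration, 2000 N/m virtual stiffness, 40 N per-finger cap
print(contact_force(0.004, 2000.0, 40.0))   # -> 8.0 N
```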

