Proprioception in Immersive Virtual Reality

2021 ◽  
Author(s):  
Alexander Vladimirovich Zakharov ◽  
Alexander Vladimirovich Kolsanov ◽  
Elena Viktorovna Khivintseva ◽  
Vasiliy Fedorovich Pyatin ◽  
Alexander Vladimirovich Yashkov

With the advent of virtual reality (VR) technologies, methods that recreate sensory sensations are developing rapidly. VR is an immersive environment in which a variety of multimodal sensory experiences can be delivered. There is a pressing need for convincing immersive environments that realize the full potential of VR technology. Activating the proprioceptive sensory system together with the visual system makes it possible to achieve sensations of interaction with VR objects that are identical to those of the real physical world. Today, proprioceptive sensations are activated using various devices, including robotic ones, which are not available for routine medical practice. An immersive multisensory environment makes it possible to substantially personalize the rehabilitation process, ensuring its continuity and effectiveness at different stages of the pathological process and for varying degrees of physical impairment, while significantly reducing the burden on the healthcare system by automating rehabilitation and objectively assessing its effectiveness. Further development and wider availability of VR technologies and devices that deepen immersion through sensory input will be in great demand as a means of teaching patients motor skills.


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that take users into a virtual world, their return to the physical world should be considered as well, since it is part of the overall VR experience. We call the latter outro-transitions. In contrast to offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short periods. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for instance during a presentation that uses PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal VR users to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request VR users to end a VR session. Such transitions become part of the overall experience of the audience and must therefore be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to ask VR users to end a VR session. We propose and investigate eight transition techniques, focusing on their usage in short consecutive VR experiences and including both established and novel techniques.
The transition techniques are evaluated in a user study to draw conclusions about the effects of outro-transitions on participants' overall experience and presence. We also take into account how long an outro-transition may take and how comfortable participants found the proposed techniques. The study shows that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter–VR user relationship within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not merely acceptable but preferred by our participants.



Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1537
Author(s):  
Florin Covaciu ◽  
Adrian Pisla ◽  
Anca-Elena Iordan

The traditional systems used in physiotherapy rehabilitation are evolving towards more advanced systems that use virtual reality (VR) environments, so that patients in rehabilitation can perform various exercises interactively, improving their motivation and reducing the therapist's workload. The paper presents a VR simulator for an intelligent robotic system for physiotherapeutic rehabilitation of the ankle of a person who has had a stroke. The simulator can interact with a real human subject through an attached sensor containing a gyroscope and an accelerometer, which identifies the position and acceleration of foot movement on three axes. An electromyography (EMG) sensor is also attached to the patient's leg muscles to measure muscle activity, since patients in a worse condition show weaker muscle activity. The data collected from the sensors are processed by an intelligent module that uses machine learning to create new exercise levels and to control the robotic rehabilitation structure in the virtual environment. Given these objectives, the virtual reality simulator has a low dependence on the therapist, which is its main improvement over other simulators created for this purpose.
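The abstract does not specify how the gyroscope and accelerometer readings are combined into a foot-orientation estimate. A common approach for such IMU setups is a complementary filter; the sketch below is a minimal illustration under that assumption (the function name, sample format, and blend factor `alpha` are hypothetical, not from the paper):

```python
import math

def complementary_filter(accel_samples, gyro_samples, dt, alpha=0.98):
    """Fuse accelerometer tilt and integrated gyro rate into one
    pitch-angle estimate (degrees) per time step."""
    angle = 0.0
    angles = []
    for (ax, ay, az), gyro_rate in zip(accel_samples, gyro_samples):
        # Tilt angle implied by the gravity vector: noisy but drift-free.
        accel_angle = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
        # Blend the integrated gyro rate (smooth but drifting) with it.
        angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
        angles.append(angle)
    return angles
```

The high `alpha` trusts the gyroscope over short time scales while the accelerometer term slowly corrects long-term drift, which is why this scheme is popular for low-cost wearable sensors.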





Author(s):  
Aaron Crowson ◽  
Zachary H. Pugh ◽  
Michael Wilkinson ◽  
Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has created a growing need to represent the physical world while users are immersed in the virtual one. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, it investigates how an orientation-type notification aids perception of alerts that manifest outside a VR user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.



2009 ◽  
Author(s):  
C Nedess ◽  
A Friedewald ◽  
C Schafer ◽  
S Schleusener ◽  
...  


2019 ◽  
Vol 2019 ◽  
pp. 1-6 ◽  
Author(s):  
Jianjun Cui ◽  
Shih-Ching Yeh ◽  
Si-Huei Lee

Frozen shoulder is a common clinical shoulder condition. Measuring the degree of shoulder joint movement is crucial to the rehabilitation process. Such measurements can be used to evaluate the severity of patients' condition, establish rehabilitation goals and appropriate activity difficulty levels, and understand the effects of rehabilitation. Currently, measurements of shoulder joint movement are typically conducted by therapists using a protractor. However, with the growth of telerehabilitation, patients will need to measure shoulder joint mobility on their own at home. In this study, wireless inertial sensors were combined with interactive virtual reality technology to provide an innovative shoulder joint mobility self-measurement system that enables patients to measure their performance of four shoulder joint movements on their own at home. Pilot clinical trials were conducted with 25 patients to confirm the feasibility of the system. In addition, correlation and difference analyses against traditional measurement methods exhibited a high correlation, verifying the accuracy of the proposed system. Moreover, interviews with the patients indicated that they are confident in their ability to measure shoulder joint mobility themselves.
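The two computations implied here, range of motion from an angle time series and the correlation against protractor measurements, can be sketched as follows (function names and data are illustrative; the paper's exact analysis pipeline is not reproduced):

```python
import math

def range_of_motion(angle_series):
    """Range of motion is simply the span of the recorded joint angle."""
    return max(angle_series) - min(angle_series)

def pearson_r(xs, ys):
    """Pearson correlation between sensor-based and protractor measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A correlation close to 1 between the two measurement methods across patients is what "high correlation, verifying the accuracy" refers to in validation studies of this kind.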



2021 ◽  
Vol 5 (ISS) ◽  
pp. 1-23
Author(s):  
Marco Moran-Ledesma ◽  
Oliver Schneider ◽  
Mark Hancock

When interacting with virtual reality (VR) applications like CAD and open-world games, people may want to use gestures as a means of leveraging their knowledge from the physical world. However, people may prefer physical props over handheld controllers to input gestures in VR. We present an elicitation study where 21 participants chose from 95 props to perform manipulative gestures for 20 CAD-like and open-world game-like referents. When analyzing this data, we found existing methods for elicitation studies were insufficient to describe gestures with props, or to measure agreement with prop selection (i.e., agreement between sets of items). We proceeded by describing gestures as context-free grammars, capturing how different props were used in similar roles in a given gesture. We present gesture and prop agreement scores using a generalized agreement score that we developed to compare multiple selections rather than a single selection. We found that props were selected based on their resemblance to virtual objects and the actions they afforded; that gesture and prop agreement depended on the referent, with some referents leading to similar gesture choices, while others led to similar prop choices; and that a small set of carefully chosen props can support multiple gestures.
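The paper's generalized agreement score for comparing multiple selections is not given in the abstract; one plausible way to measure agreement between sets of items, shown purely as an illustrative assumption, is the mean pairwise Jaccard similarity of participants' prop selections for a referent:

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two prop selections (sets of prop names)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def set_agreement(selections):
    """Average pairwise Jaccard similarity over all participant pairs.
    1.0 means everyone chose the same props; 0.0 means no overlap."""
    pairs = list(combinations(selections, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

For example, `set_agreement([{"wand"}, {"wand"}, {"ruler"}])` yields partial agreement, since only two of the three pairs overlap.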



2018 ◽  
pp. 1377-1392
Author(s):  
Yogendra Patil ◽  
Guilherme Galdino Siqueira ◽  
Iara Brandao ◽  
Fei Hu

Stroke rehabilitation techniques have attracted immense attention due to the addition of virtual reality environments for rehabilitation purposes. Current techniques involve ideas such as imitating various stroke rehabilitation exercises in a virtual world. This makes the rehabilitation process more attractive compared with conventional methods and motivates the patient to continue therapy. However, most virtual reality based stroke rehabilitation studies focus on patients performing sedentary rehabilitation exercises. In this chapter, we introduce our virtual reality based post-stroke rehabilitation system, which allows a post-stroke patient to perform dynamic exercises. With the introduction of our system, we hope to increase post-stroke patients' ability to perform their daily routine exercises independently. Our discussion in this chapter centers on the integration of the rehabilitation system with virtual reality software. We also detail the design process of our modern user interface for collecting useful data during rehabilitation. A simple experiment is carried out to validate the viability of our system.



Author(s):  
Holger Giese ◽  
Stefan Henkler ◽  
Martin Hirsch ◽  
Vladimir Rubin ◽  
Matthias Tichy

Software has become the driving force in the evolution of many systems, such as embedded systems (especially automotive applications), telecommunication systems, and large-scale heterogeneous information systems. These so-called software-intensive systems are characterized by the fact that software influences the design, construction, deployment, and evolution of the whole system. Furthermore, the development of these systems often involves a multitude of disciplines. Besides the traditional engineering disciplines (e.g., control engineering, electrical engineering, and mechanical engineering) that address the hardware and its control, the system often has to be aligned with organizational structures and workflows, as addressed by business process engineering. The development artefacts of all these disciplines have to be combined and integrated in the software. Consequently, software engineering adopts the central role in the development of these systems. The development of software-intensive systems is further complicated by the fact that future generations of software-intensive systems will become even more complex and thus pose a number of challenges for the software and its integration with the other disciplines. It is expected that systems will become highly distributed, exhibit adaptive and anticipatory behavior, and act in highly dynamic environments that interface with the physical world. Consequently, modeling, as an essential design activity, has to support not only the different disciplines but also the new characteristics outlined above. Tool support for model-driven engineering with this mix of composed models is essential to realize the full potential of software-intensive systems. In addition, modeling activities have to cover different development phases such as requirements analysis, architectural design, and detailed design, and they have to support later phases such as implementation as well as verification and validation, so that systems can be developed systematically and efficiently.



Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2944
Author(s):  
Ilesanmi Olade ◽  
Charles Fleming ◽  
Hai-Ning Liang

Virtual reality (VR) has advanced rapidly and is used for many entertainment and business purposes. Secure, transparent, and non-intrusive identification mechanisms are important to facilitate users' safe participation and secure experience. People are kinesiologically unique, having individual behavioral and movement characteristics that can be leveraged in security-sensitive VR applications to compensate for users' inability to detect potential observational attackers in the physical world. Such a method of identification based on a user's kinesiological data is also valuable in common scenarios where multiple users simultaneously participate in a VR environment. In this paper, we present a user study (n = 15) in which participants performed a series of controlled tasks requiring physical movements (such as grabbing, rotating, and dropping) that could be decomposed into unique kinesiological patterns, while we monitored and captured their hand, head, and eye-gaze data within the VR environment. We present an analysis of the data and show that they can be used as a high-confidence biometric discriminant using machine learning classification methods such as kNN or SVM, thereby adding a layer of security in terms of identification, or dynamically adapting the VR environment to users' preferences. We also performed white-box penetration testing with 12 attackers, some of whom were physically similar to the participants. We obtained an average identification confidence value of 0.98 from the actual participants' test data after the initial study, and a trained-model classification accuracy of 98.6%. Penetration testing indicated that all attackers yielded confidence values of less than 50%, although physically similar attackers had higher confidence values. These findings can help the design and development of secure VR systems.
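To make the kNN-based identification step concrete, here is a minimal from-scratch sketch: each user is enrolled with feature vectors derived from their movement data, and a query sample is attributed to the user whose enrolled vectors lie nearest. The feature format and labels are invented for illustration; the paper's actual feature extraction and classifier configuration are not reproduced here.

```python
import math
from collections import Counter

def euclidean(u, v):
    """Distance between two movement-feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def knn_identify(train, query, k=3):
    """Return the user label whose enrolled feature vectors are
    nearest to the query movement sample.

    train: list of (feature_vector, user_label) pairs."""
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

In practice a library classifier (e.g., scikit-learn's kNN or SVM) would replace this sketch, and the vote proportion among the k neighbours can serve as the kind of per-query confidence value the study thresholds at 50%.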


