Predicting head rotation using EEG to enhance streaming of images to a Virtual Reality headset

2018 ◽  
Vol 12 ◽  
Author(s):  
Anne-Marie Brouwer ◽  
Jasper Van Der Waa ◽  
Hans Stokking


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1448 ◽  
Author(s):  
Youngwon Ryan Kim ◽  
Hyeonah Choi ◽  
Minwook Chang ◽  
Gerard J. Kim

Recently, a new breed of mobile virtual reality (dubbed "EasyVR" in this work) has appeared, in which non-isolating magnifying lenses are conveniently clipped onto a smartphone, while still offering a level of immersion reasonably close to that of an isolated headset. Furthermore, this form factor allows the fingers to touch the screen and select objects quite accurately, even though the finger(s) appear unfocused through the lenses. Many navigation techniques exist both for casual smartphone 3D applications using the touchscreen and for immersive VR environments using various controllers/sensors. However, no research has focused on a proper navigation interaction technique for a platform like EasyVR, which necessitates using the touchscreen while holding the display device to the head and looking through the magnifying lenses. To design and propose the most fitting navigation method(s) for EasyVR, we mixed and matched conventional touchscreen-based and headset-oriented navigation methods to arrive at six viable navigation techniques (more specifically, for selecting the travel direction and invoking the movement itself), including the use of head rotation, on-screen keypads/buttons, one-touch teleport, drag-to-target, and finger gestures. These methods were experimentally compared for their basic usability and level of immersion when navigating 3D space with six degrees of freedom. The results provide a valuable guideline for designing/choosing the proper navigation method under the different navigational needs of a given VR application.


2020 ◽  
Vol 2020 (13) ◽  
pp. 339-1-339-6
Author(s):  
Lucas Wright ◽  
Lara Chunko ◽  
Kelsey Benjamin ◽  
Emmanuelle Hernandez ◽  
Jack Miller ◽  
...  

According to the CDC, over three thousand people die from drowning every year in the United States. Many of these fatalities are preventable with properly trained lifeguards. Traditional lifeguard training relies on videos and mock rescues. While these methods are important, they have shortcomings: videos are static and do not build muscle memory, and mock rescues are labor-intensive and potentially put others in danger. Virtual reality (VR) can be used as an alternative training tool, building muscle memory in a fully controlled and safe environment. With full control over variables such as weather, population, and other distractions, lifeguards can be better equipped to respond to any situation. The single most important aspect of lifeguarding is finding the victim. This head-rotation skill can be practiced and perfected in VR before guards ever get onto the stand. VR also allows guards to practice in uncommon but nevertheless dangerous conditions, such as fog and large crowds, and gives the user immediate feedback on performance and where they can improve.


2018 ◽  
Vol 51 (1) ◽  
pp. 96-107 ◽  
Author(s):  
Jianying Bai ◽  
Min Bao ◽  
Tao Zhang ◽  
Yi Jiang

2021 ◽  
Author(s):  
Jacob Thomas Thorn ◽  
Naig Aurelia Ludmilla Chenais ◽  
Sandrine Hinrichs ◽  
Marion Chatelain ◽  
Diego Ghezzi

Objective: Temporal resolution is a key challenge in artificial vision. Several prosthetic approaches are limited by the perceptual fading of evoked phosphenes upon repeated stimulation from the same electrode. Therefore, implanted patients are forced to perform active scanning, via head movements, to refresh the visual field viewed by the camera. However, active scanning is a draining task, and it is crucial to find compensatory strategies to reduce it. Approach: To address this question, we implemented perceptual fading in simulated prosthetic vision using virtual reality. Then, we quantified the effect of fading on two indicators: the time to complete a reading task and the head rotation during the task. We also tested whether stimulation strategies previously proposed to increase the persistence of retinal ganglion cell responses to electrical stimulation could improve these indicators. Main results: This study shows that stimulation strategies based on interrupted pulse trains and randomisation of the pulse duration allow a significant reduction in both the time to complete the task and the head rotation during the task. Significance: The stimulation strategy used in retinal implants is crucial to counteract perceptual fading and to reduce active head scanning during prosthetic vision. In turn, less active scanning might improve the patient's comfort in artificial vision.
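The two fading-mitigation ideas named above (interrupted pulse trains and randomised pulse durations) can be sketched as a simple stimulation schedule. This is an illustrative toy model only; the function name and all parameter values are assumptions, not taken from the study.

```python
import random

def build_pulse_train(total_ms=1000, pulse_ms=10, gap_ms=40,
                      interrupt_every=5, interrupt_ms=200, jitter=0.5):
    """Return a list of (start_ms, duration_ms) stimulation pulses.

    Two fading-mitigation ideas from the abstract, with made-up numbers:
    - interrupted trains: a longer pause after every `interrupt_every` pulses
    - randomised duration: each pulse length jittered by +/- `jitter` * 100%
    """
    pulses, t, count = [], 0.0, 0
    while t < total_ms:
        dur = pulse_ms * (1 + random.uniform(-jitter, jitter))
        pulses.append((round(t, 2), round(dur, 2)))
        count += 1
        t += dur + gap_ms
        if count % interrupt_every == 0:
            t += interrupt_ms  # interruption lets neural responses recover
    return pulses
```

The interruptions and jitter are intended to break the regularity that drives phosphene fading; the exact schedule a real implant would use depends on the device and is not specified here.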


PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e3023 ◽  
Author(s):  
Daniel S. Harvie ◽  
Ross T. Smith ◽  
Estin V. Hunter ◽  
Miles G. Davis ◽  
Michele Sterling ◽  
...  

Background: Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors cannot be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method: In a within-subjects repeated-measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties was also investigated. Results: Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or the three-dimensionality of the scene. Discussion: Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain.
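The visual gain manipulation at the heart of the MoOVi paradigm reduces to scaling the real head rotation by an offset factor before display. The function name below is hypothetical; only the 50%–200% offset range and the 50° test rotation come from the abstract.

```python
def moovi_visual_rotation(real_rotation_deg, gain):
    """Visually suggested head rotation under a MoOVi-style offset.

    gain spans 0.5-2.0, matching the 50%-200% offset in the abstract:
    gain > 1 suggests more movement than actually occurred, gain < 1 less.
    """
    if not 0.5 <= gain <= 2.0:
        raise ValueError("gain outside the 50%-200% range studied")
    return real_rotation_deg * gain

# at the 50-degree test rotation, a 150% gain displays a 75-degree turn
print(moovi_visual_rotation(50, 1.5))  # 75.0
```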


2020 ◽  
Author(s):  
Nicola C Anderson ◽  
Walter F. Bischof ◽  
Tom Foulsham ◽  
Alan Kingstone

Research investigating gaze in natural scenes has identified a number of spatial biases in where people look, but it is unclear whether these are partly due to constrained testing environments (e.g., a participant with their head restrained, looking at a landscape image framed within a computer monitor). We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements in virtual reality (VR). Both the eyes and head were tracked while observers looked at natural scenes in a virtual environment. In line with previous work, we found a bias for saccade directions parallel to the image horizon, regardless of image shape or content. We found that, when allowed to do so, observers move both their eyes and head to explore images. Head rotation, however, was idiosyncratic; some observers rotated a lot, while others did not. Interestingly, the head rotated in line with the rotation of landscape images, but not fractal images. That head rotation and gaze direction respond differently to image content suggests that they may be under different control systems. We discuss our findings in relation to current theories of head and eye movement control, and how insights from VR might inform more traditional eye-tracking studies.
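The horizon-parallel saccade bias can be made concrete with a small helper that expresses a saccade vector's angle in horizon-relative coordinates. This is a hypothetical sketch; neither the function nor its angle conventions come from the paper.

```python
import math

def angle_from_horizon(dx, dy, image_rotation_deg=0.0):
    """Angle (degrees) of a saccade vector (dx, dy) measured from the
    image horizon, which may itself be rotated by image_rotation_deg.
    Values near 0 or +/-180 indicate a horizon-parallel saccade."""
    angle = math.degrees(math.atan2(dy, dx)) - image_rotation_deg
    return ((angle + 180.0) % 360.0) - 180.0  # wrap to (-180, 180]
```

Under this convention, the bias reported above would show up as a concentration of saccade angles near 0° and ±180°, shifting with image rotation for landscape images.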


2021 ◽  
Author(s):  
Steve Blandino ◽  
Tanguy Ropitault ◽  
Raied Caromi ◽  
Jacob Chakareski ◽  
Mahmudur Khan ◽  
...  
