Turning the (virtual) world around: patterns in saccade direction vary with picture orientation and shape in virtual reality

2020
Author(s): Nicola C. Anderson, Walter F. Bischof, Tom Foulsham, Alan Kingstone

Research investigating gaze in natural scenes has identified a number of spatial biases in where people look, but it is unclear whether these are partly due to constrained testing environments (e.g., a participant with their head restrained and looking at a landscape image framed within a computer monitor). We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements in virtual reality (VR). Both the eyes and head were tracked while observers looked at natural scenes in a virtual environment. In line with previous work, we found a bias for saccade directions parallel to the image horizon, regardless of image shape or content. We found that, when allowed to do so, observers move both their eyes and head to explore images. Head rotation, however, was idiosyncratic; some observers rotated a lot, while others did not. Interestingly, the head rotated in line with the rotation of landscape images, but not fractal images. That head rotation and gaze direction respond differently to image content suggests that they may be under different control systems. We discuss our findings in relation to current theories of head and eye movement control, and how insights from VR might inform more traditional eye-tracking studies.
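
A minimal sketch of how such a saccade-direction bias can be quantified, assuming fixation coordinates have already been extracted from the gaze stream; the function name, the 30-degree window, and the input format are illustrative, not the study's exact analysis:

```python
import numpy as np

def saccade_direction_bias(fix_x, fix_y, image_rotation_deg=0.0):
    """Estimate the horizontal saccade bias from consecutive fixations.

    Saccade direction is the angle of the vector between consecutive
    fixations; the bias is the fraction of saccades within +/-30 degrees
    of the (possibly rotated) image horizon.
    """
    dx = np.diff(np.asarray(fix_x, dtype=float))
    dy = np.diff(np.asarray(fix_y, dtype=float))
    angles = np.degrees(np.arctan2(dy, dx))                # -180..180
    # Express directions relative to the image horizon.
    rel = (angles - image_rotation_deg + 180.0) % 360.0 - 180.0
    # Fold so leftward and rightward saccades both count as horizontal.
    folded = np.minimum(np.abs(rel), 180.0 - np.abs(rel))
    return np.mean(folded <= 30.0)
```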

2019
Vol 12 (7)
Author(s): Nicola C. Anderson, Walter F. Bischof

Video stream: https://vimeo.com/356859979 (production and publication of the video stream was sponsored by SCIANS Ltd, http://www.scians.ch/). We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements. Both the eyes and head were tracked while observers looked at natural scenes in a virtual reality (VR) environment. In line with previous work, we found a horizontal bias in saccade directions, but this was affected by both the image shape and its content. Interestingly, when viewing landscapes (but not fractals), observers rotated their head in line with the image rotation, presumably to make saccades in cardinal, rather than oblique, directions. We discuss our findings in relation to current theories of eye movement control, and how insights from VR might inform traditional eye-tracking studies. Part 2: Observers looked at panoramic, 360-degree scenes using VR goggles while eye and head movements were tracked. Fixations were determined using IDT (Salvucci & Goldberg, 2000) adapted to a spherical coordinate system. We then analyzed (a) the spatial distribution of fixations and the distribution of saccade directions, (b) the spatial distribution of head positions and the distribution of head movements, and (c) the relation between gaze and head movements. We found that, for landscape scenes, gaze and head best fit the allocentric frame defined by the scene horizon, especially when taking head tilt (i.e., head rotation around the view axis) into account. For fractal scenes, which are isotropic on average, the bias toward a body-centric frame is weak for gaze and strong for the head. Furthermore, our data show that eye and head movements are closely linked in space and time in stereotypical ways, with volitional eye movements predominantly leading the head. We discuss our results in terms of models of visual exploratory behavior in panoramic scenes, both in virtual and real environments.
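
IDT groups consecutive gaze samples into a fixation while their dispersion stays below a threshold for a minimum duration; on a sphere, dispersion is naturally measured as the angular distance between unit gaze vectors and their mean direction. A minimal sketch of that adaptation (the thresholds and the `gaze_dirs` input format are illustrative assumptions, not the study's exact parameters):

```python
import numpy as np

def idt_spherical(gaze_dirs, min_samples=10, max_dispersion_deg=1.0):
    """Dispersion-threshold (IDT) fixation detection on unit gaze vectors.

    gaze_dirs: (N, 3) array of unit vectors, one per gaze sample.
    A window is a fixation if the largest angular distance between any
    sample and the window's mean direction stays below the threshold.
    Returns a list of half-open (start, end) sample-index intervals.
    """
    fixations, start, n = [], 0, len(gaze_dirs)
    while start + min_samples <= n:
        end = start + min_samples
        while end <= n:
            window = gaze_dirs[start:end]
            mean = window.mean(axis=0)
            mean /= np.linalg.norm(mean)
            disp = np.degrees(np.arccos(np.clip(window @ mean, -1.0, 1.0))).max()
            if disp > max_dispersion_deg:
                break
            end += 1                     # grow the window while it stays tight
        if end - 1 - start >= min_samples:
            fixations.append((start, end - 1))
            start = end - 1              # resume after the fixation
        else:
            start += 1                   # slide the window by one sample
    return fixations
```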


2021
Author(s): Jacob Thomas Thorn, Naig Aurelia Ludmilla Chenais, Sandrine Hinrichs, Marion Chatelain, Diego Ghezzi

Objective: Temporal resolution is a key challenge in artificial vision. Several prosthetic approaches are limited by the perceptual fading of evoked phosphenes upon repeated stimulation from the same electrode. Therefore, implanted patients are forced to perform active scanning, via head movements, to refresh the visual field viewed by the camera. However, active scanning is a draining task, and it is crucial to find compensatory strategies to reduce it. Approach: To address this question, we implemented perceptual fading in simulated prosthetic vision using virtual reality. Then, we quantified the effect of fading on two indicators: the time to complete a reading task and the head rotation during the task. We also tested whether stimulation strategies previously proposed to increase the persistence of responses in retinal ganglion cells to electrical stimulation could improve these indicators. Main results: This study shows that stimulation strategies based on interrupted pulse trains and randomisation of the pulse duration allow a significant reduction of both the time to complete the task and the head rotation during the task. Significance: The stimulation strategy used in retinal implants is crucial to counteract perceptual fading and to reduce active head scanning during prosthetic vision. In turn, less active scanning might improve the patient's comfort in artificial vision.
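
A toy sketch of the two stimulation strategies named above, interrupted pulse trains and randomised pulse duration; all rates, widths, and durations are illustrative placeholders rather than the parameters used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse_train(duration_s=1.0, rate_hz=40, fs=10_000,
                base_pulse_ms=1.0, randomize_width=False, gap_every=None):
    """Build a binary stimulation waveform (1 = stimulating, 0 = off).

    randomize_width: jitter each pulse duration (randomised pulse width).
    gap_every: if set, silence every Nth pulse (interrupted pulse train).
    """
    n = int(duration_s * fs)
    wave = np.zeros(n)
    period = int(fs / rate_hz)
    for i, onset in enumerate(range(0, n, period)):
        if gap_every and (i + 1) % gap_every == 0:
            continue                     # interruption: skip this pulse
        width_ms = base_pulse_ms * (rng.uniform(0.5, 1.5) if randomize_width else 1.0)
        wave[onset:onset + int(width_ms * fs / 1000)] = 1.0
    return wave

# Example: a train with every 5th pulse dropped and jittered pulse widths.
stim = pulse_train(randomize_width=True, gap_every=5)
```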


2020
Author(s): Walter F. Bischof, Nicola C. Anderson, Michael T. Doswell, Alan Kingstone

How do we explore the visual environment around us, and how are head and eye movements coordinated during our exploration? To investigate this question, we had observers look at omni-directional panoramic scenes, composed of both landscape and fractal images, using a virtual-reality (VR) viewer while their eye and head movements were tracked. We analyzed the spatial distribution of eye fixations and the distribution of saccade directions, the spatial distribution of head positions and the distribution of head shifts, as well as the relation between eye and head movements. The results show that, for landscape scenes, eye and head behaviour best fit the allocentric frame defined by the scene horizon, especially when head tilt (i.e., head rotation around the view axis) is considered. For fractal scenes, which have an isotropic texture, eye and head movements were executed primarily along the cardinal directions in world coordinates. The results also show that eye and head movements are closely linked in space and time in a complementary way, with stimulus-driven eye movements predominantly leading the head movements. Our study is the first to systematically examine eye and head movements in a panoramic VR environment, and the results demonstrate that a VR environment constitutes a powerful and informative research alternative to traditional methods for investigating looking behaviour.
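
One way to compare candidate reference frames is to express saccade directions in each frame and score the strength of the cardinal-direction bias; the better-fitting frame yields the stronger bias. A sketch under assumed conventions (the angle definitions, the 15-degree window, and the variable names are illustrative):

```python
import numpy as np

def cardinal_bias(saccade_angles_deg, head_roll_deg=None, horizon_deg=0.0):
    """Fraction of saccades within 15 degrees of a cardinal axis.

    Pass head_roll_deg to evaluate a head-centric frame (directions are
    counter-rotated by the head tilt); pass horizon_deg to evaluate an
    allocentric frame defined by the scene horizon.
    """
    a = np.asarray(saccade_angles_deg, dtype=float) - horizon_deg
    if head_roll_deg is not None:
        a = a - np.asarray(head_roll_deg, dtype=float)
    # Distance of each direction to the nearest multiple of 90 degrees.
    folded = np.abs((a + 45.0) % 90.0 - 45.0)
    return np.mean(folded <= 15.0)

# The frame with the higher score fits better:
#   cardinal_bias(angles, horizon_deg=scene_horizon)   vs.
#   cardinal_bias(angles, head_roll_deg=head_tilt_per_saccade)
```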


2021
Author(s): Valentin Holzwarth, Johannes Schneider, Joshua Handali, Joy Gisler, Christian Hirt, ...

Inferring users' perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users' affective states before and after exposure to a VE, based on standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration, i.e., a user's affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users' affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing from nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users' affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, for assessing a user's affective state by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user's affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Further, manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods for assessing user perception of VEs.
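
A sketch of the kind of analysis described, assuming per-participant head-yaw traces and matching questionnaire scores; the choice of total yaw rotation as the summary feature is an illustrative assumption, not necessarily the paper's measure:

```python
import numpy as np
from scipy.stats import pearsonr

def yaw_feature(yaw_deg):
    """Summarise a head-yaw trace, here as total accumulated yaw rotation."""
    return np.sum(np.abs(np.diff(np.unwrap(np.radians(yaw_deg)))))

def correlate_with_questionnaire(yaw_traces, scores):
    """yaw_traces: list of per-participant yaw traces (degrees);
    scores: matching self-report scores (e.g. presence, mental demand).
    Returns the Pearson correlation and its p-value.
    """
    features = [yaw_feature(t) for t in yaw_traces]
    r, p = pearsonr(features, scores)
    return r, p
```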


2021
pp. 1-9
Author(s): Chiheon Kwon, Yunseo Ku, Shinhye Seo, Eunsook Jang, Hyoun-Joong Kong, ...

BACKGROUND: The low success rate and high recurrence of benign paroxysmal positional vertigo (BPPV) after home-based, self-treated Epley and barbecue (BBQ) roll maneuvers are important issues. OBJECTIVE: To quantify the causes of the low success rate of self-treated Epley and BBQ roll maneuvers and provide a clinically acceptable criterion to guide self-treatment head rotations. METHODS: Twenty-five participants without active BPPV wore a custom head-mounted rotation-monitoring device for objective measurements. Self-treated and specialist-assisted maneuvers were compared for head rotation accuracy. Absolute differences between the head rotation evaluation criteria (American Academy of Otolaryngology guidelines) and the measured rotation angles were taken as errors. Errors in self-treated and specialist-treated maneuvers were compared, and between-trial variations and age effects were evaluated. RESULTS: A significantly large error and between-trial variation occurred in step 4 of the self-treated Epley maneuver, with a considerable error in the second trial. The cumulative error across all steps of the self-treated BBQ roll maneuver was significantly large. An age effect occurred only in the self-treated BBQ roll maneuver. Errors in specialist-treated maneuvers ranged from 10 to 20 degrees. CONCLUSIONS: Real-time feedback of head movements during simultaneous head-body rotations could increase the success rate of self-treatment. Specialist-treated maneuvers can be used as permissible rotation margin criteria.
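
The error measure is simple arithmetic: the absolute difference between each step's guideline angle and the measured rotation. A sketch with placeholder target angles (the actual per-step criteria come from the American Academy of Otolaryngology guidelines, not the values below):

```python
import numpy as np

# Hypothetical per-step target rotations (degrees) for the Epley maneuver;
# the real criteria are taken from the AAO guidelines.
EPLEY_TARGETS_DEG = [45, 90, 90, 90]

def step_errors(measured_deg, targets_deg=EPLEY_TARGETS_DEG):
    """Absolute per-step error between measured and guideline rotations."""
    return np.abs(np.asarray(measured_deg) - np.asarray(targets_deg))

def within_margin(measured_deg, margin_deg=20):
    """Flag steps within the specialist-derived margin (specialist errors
    of 10-20 degrees were observed, suggesting a permissible margin)."""
    return step_errors(measured_deg) <= margin_deg
```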


1997
Vol 7 (4)
pp. 303-310
Author(s): James R. Lackner, Paul DiZio

The reafference model has frequently been used to explain spatial constancy during eye and head movements. We have found that its basic concepts also form part of the information processing necessary for the control and recalibration of reaching movements. Reaching was studied in a novel force environment: a rotating room that creates centripetal forces of the type that could someday substitute for gravity in space flight, and Coriolis forces, which are side effects of rotation. We found that inertial, noncontacting Coriolis forces deviate the path and endpoint of reaching movements, a finding that shows the inadequacy of equilibrium-position models of movement control. Repeated movements in the rotating room quickly lead to normal movement patterns and to a failure to perceive the perturbing forces. The first movements made after rotation stops, without Coriolis forces present, show mirror-image deviations and evoke the perception of a perturbing force even though none is present. These patterns of sensorimotor control and adaptation can largely be explained on the basis of comparisons of efference copy, reafferent muscle spindle, and cutaneous mechanoreceptor signals. We also describe experiments on human locomotion using an apparatus similar to that which Mittelstaedt used to study the optomotor response of the Eristalis fly. These results show that the reafference principle relates as well to the perception of the forces acting on and exerted by the body during voluntary locomotion.
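
The perturbing force follows directly from the room's rotation: the Coriolis force on a limb of mass m moving with velocity v in a frame rotating at angular velocity ω is F = -2m(ω × v). A worked sketch with illustrative numbers:

```python
import numpy as np

def coriolis_force(omega_rad_s, velocity_m_s, mass_kg):
    """F = -2 m (omega x v): the inertial force on a limb moving with
    velocity v in a room rotating with angular velocity omega."""
    return -2.0 * mass_kg * np.cross(omega_rad_s, velocity_m_s)

# Illustrative values: room rotating at 10 rpm about the vertical axis,
# a 2 kg arm segment reaching forward at 1 m/s.
omega = np.array([0.0, 0.0, 10 * 2 * np.pi / 60])   # rad/s, z = up
v = np.array([1.0, 0.0, 0.0])                        # reach along x
print(coriolis_force(omega, v, mass_kg=2.0))         # ~[0, -4.19, 0] N, sideways
```

The sideways direction of the result is why reaches deviate laterally during rotation, and why mirror-image deviations appear once rotation stops and the adapted motor commands persist.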


2020
Author(s): Nguyen Nguyen, Kyu-Sung Kim, Gyutae Kim

Background: Due to the paired structure of the two labyrinths, their neural communication is conducted through the interconnected commissural pathway. Through this tight link, the response characteristics of neurons in the vestibular nucleus (VN) are formed, and these responses are initially generated by the mechanical movement of the hair cells in the semicircular canals and otoliths. Although the mechanism describing neuronal responses to head movements is well established, little direct experimental data have been provided, especially on the directional preference of otolith-related neurons, one of the critical responses for elucidating the function of VN neurons. Experimental Approach: The directional preference of otolith-related neurons was investigated in the VN. In addition, a chemically induced unilateral labyrinthectomy (UL) was performed to identify the origin of the directional preference. For model evaluation, static and dynamic behavioral tests were performed. Following the evaluation, extracellular neural activity was recorded in response to horizontal head rotation and linear head translation. Results: Seventy-seven neuronal activities were recorded from thirty SD rats (270-450 g, male), and the total population was divided into three groups: left UL (20), sham (35), and right UL (22). Based on directional preference, two sub-groups were classified: contra- and ipsi-preferred neurons. The sizes of these sub-groups did not differ significantly in the sham group (contra-: 15/35, 43%; ipsi-: 20/35, 57%; p = 0.155). However, more ipsi-preferred neurons (19/22, 86%) were observed after right UL (p = 6.056×10⁻⁵), while left UL produced more contra-preferred neurons (13/20, 65%) (p = 0.058). In particular, convergent neurons mainly drove this biased distribution (ipsi-: 100% after right UL; contra-: 89% after left UL) (p < 0.002). Conclusion: The directional preference was evenly distributed under normal vestibular function, and unilateral loss biased the directional preference of the neurons depending on the side of the lesion. Moreover, the dominance of directional preference was mainly driven by convergent neurons, which carry neural information related to both head rotation and linear translation.
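
The population comparisons reported above are proportion tests on the contra/ipsi split. A sketch using a binomial test against an even 50/50 split (the counts are from the abstract; the preference criterion and the exact test used in the study are assumptions, so these p-values need not match those reported):

```python
from scipy.stats import binomtest

def preference(ipsi_gain, contra_gain):
    """Classify a neuron by which rotation direction drives it more
    (illustrative criterion)."""
    return "ipsi" if ipsi_gain > contra_gain else "contra"

# Counts reported in the abstract: sham 20/35 ipsi-preferred,
# right UL 19/22 ipsi-preferred, left UL 13/20 contra-preferred.
for label, k, n in [("sham", 20, 35), ("right UL", 19, 22), ("left UL", 13, 20)]:
    p = binomtest(k, n, p=0.5).pvalue
    print(f"{label}: {k}/{n} preferred, p = {p:.4f}")
```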


Jurnal INFORM
2020
Vol 5 (1)
pp. 32-38
Author(s): Heri Suharyadi, Dian Ahkam Sani, Mohammad Zoqi Sarwani

Virtual reality was implemented in an application in the form of a platformer game, a genre that is popular among game enthusiasts on the Android platform. The hardware used to create virtual reality in this platformer game consists of an Android device with a gyroscope sensor, a virtual reality cardboard headset, and a computer. The software comprises Unity, Blender, and Microsoft Visual Studio as the code editor. The game uses the gyroscope sensor as the sole control for character movement, minimizing the hardware needed to play virtual reality games on Android and making them easier and more practical to play. The test results show that virtual reality can be implemented in Android games and that the gyroscope sensor functions well as the main movement control in the game.
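
The control scheme described reduces to mapping gyroscope orientation onto character movement, so no external controller is needed. The game itself is built in Unity (C#); the platform-agnostic sketch below, with illustrative names and a look-down-to-walk convention, only illustrates the idea:

```python
import math

def move_vector(yaw_deg, pitch_deg, speed=1.0, pitch_deadzone_deg=10.0):
    """Map head orientation from the gyroscope to a movement command.

    Yaw steers the character; pitching the head down past the deadzone
    starts forward movement (a common gaze-walk scheme in cardboard VR).
    Returns an (x, z) ground-plane velocity.
    """
    moving = pitch_deg < -pitch_deadzone_deg      # look down to walk
    if not moving:
        return (0.0, 0.0)
    yaw = math.radians(yaw_deg)
    return (speed * math.sin(yaw), speed * math.cos(yaw))

# Example: head turned 90 degrees right and tilted down -> walk along +x.
print(move_vector(yaw_deg=90.0, pitch_deg=-20.0))
```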


Electronics
2020
Vol 9 (9)
pp. 1448
Author(s): Youngwon Ryan Kim, Hyeonah Choi, Minwook Chang, Gerard J. Kim

Recently, a new breed of mobile virtual reality (dubbed “EasyVR” in this work) has appeared, in which non-isolating magnifying lenses are conveniently clipped onto the smartphone while still offering a level of immersion comparable to an isolating headset. Furthermore, such a form factor allows the fingers to touch the screen and select objects quite accurately, despite the finger(s) appearing out of focus through the lenses. Many navigation techniques exist, both for casual smartphone 3D applications using the touchscreen and for immersive VR environments using various controllers/sensors. However, no research has focused on the proper navigation interaction technique for a platform like EasyVR, which necessitates using the touchscreen while holding the display device to the head and looking through the magnifying lenses. To design and propose the most fitting navigation method(s) for EasyVR, we mixed and matched conventional touchscreen-based and headset-oriented navigation methods to come up with six viable navigation techniques, specifically for selecting the travel direction and invoking the movement itself, including head rotation, on-screen keypads/buttons, one-touch teleport, drag-to-target, and finger gestures. These methods were experimentally compared for their basic usability and the level of immersion in navigating 3D space with six degrees of freedom. The results provide a valuable guideline for designing/choosing the proper navigation method under the different navigational needs of a given VR application.
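
The six techniques can be read as a direction-selection method crossed with a movement-invocation method. A sketch of that decomposition (the enum names and example pairings are illustrative, not the paper's exact conditions):

```python
from enum import Enum, auto

class Direction(Enum):
    HEAD_ROTATION = auto()    # travel where the head points
    ONSCREEN_KEYPAD = auto()  # touch buttons pick the direction
    DRAG_TO_TARGET = auto()   # drag a marker onto the destination

class Invocation(Enum):
    HOLD_BUTTON = auto()        # move while a screen button is held
    ONE_TOUCH_TELEPORT = auto() # a single tap jumps to the target
    FINGER_GESTURE = auto()     # e.g. a swipe starts/stops movement

def navigation_technique(direction: Direction, invocation: Invocation):
    """A technique pairs a direction-selection with a movement-invocation."""
    return (direction, invocation)

# Two illustrative pairings out of the six compared in the study:
techniques = [
    navigation_technique(Direction.HEAD_ROTATION, Invocation.HOLD_BUTTON),
    navigation_technique(Direction.DRAG_TO_TARGET, Invocation.ONE_TOUCH_TELEPORT),
]
```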

