A Comparison of Eye Tracking Latencies Among Several Commercial Head-Mounted Displays

i-Perception ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 204166952098333 ◽  
Author(s):  
Niklas Stein ◽  
Diederick C. Niehorster ◽  
Tamara Watson ◽  
Frank Steinicke ◽  
Katharina Rifai ◽  
...  

A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20° of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms and latencies from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and best suited for gaze-contingent rendering.
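The measurement principle lends itself to a simple offline computation. Below is a minimal sketch, assuming synchronized EOG and eye-tracker position traces sampled at a common rate `fs`; the velocity thresholds and the onset-matching rule are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def saccade_onsets(signal, fs, vel_thresh):
    """Return sample indices where the signal's velocity first exceeds
    vel_thresh (units/s), i.e. candidate saccade onsets."""
    velocity = np.abs(np.gradient(signal) * fs)
    above = velocity > vel_thresh
    # Onset = first sample of each run of supra-threshold velocity.
    return np.flatnonzero(above & ~np.roll(above, 1))

def mean_delay_ms(eog, tracker, fs, eog_thresh=30.0, trk_thresh=30.0):
    """Estimate eye-tracker delay as the mean time between matched
    saccade onsets in the EOG and eye-tracker traces. Assumes each
    EOG onset is followed by the corresponding tracker onset."""
    eog_on = saccade_onsets(np.asarray(eog, float), fs, eog_thresh)
    trk_on = saccade_onsets(np.asarray(tracker, float), fs, trk_thresh)
    delays = []
    for e in eog_on:
        later = trk_on[trk_on >= e]
        if later.size:
            delays.append((later[0] - e) / fs * 1000.0)
    return float(np.mean(delays)) if delays else float("nan")
```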

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

An increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are currently equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
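The reported accuracy and precision figures follow standard definitions: accuracy as the mean angular offset between gaze and target directions, precision as the RMS of angular distances between successive samples. A minimal NumPy sketch of these computations (the paper's actual analysis lives in its R package; the function names here are illustrative):

```python
import numpy as np

def angle_deg(u, v):
    """Angle in degrees between two 3D direction vectors."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def accuracy_deg(gaze_dirs, target_dir):
    """Accuracy: mean angular offset between gaze rays and the target ray."""
    return float(np.mean([angle_deg(g, target_dir) for g in gaze_dirs]))

def precision_rms_deg(gaze_dirs):
    """Precision: RMS of angular distances between successive samples."""
    diffs = [angle_deg(gaze_dirs[i], gaze_dirs[i + 1])
             for i in range(len(gaze_dirs) - 1)]
    return float(np.sqrt(np.mean(np.square(diffs))))
```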


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4956
Author(s):  
Jose Llanes-Jurado ◽  
Javier Marín-Morales ◽  
Jaime Guixeres ◽  
Mariano Alcañiz

Fixation identification is an essential task in the extraction of relevant information from gaze patterns, and various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject’s head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points that belong to a fixation. The results show that distance-dispersion thresholds between 1° and 1.6° and time windows between 0.25 and 0.4 s are the acceptable parameter ranges, with 1° and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye tracking integrated into head-mounted displays, along with guidelines for calibrating fixation identification algorithms.
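The dispersion-threshold identification (I-DT) algorithm the study calibrates can be sketched compactly. The version below uses the reported optimum of 1° and 0.25 s as defaults and computes dispersion as (max − min) in x plus (max − min) in y, one common convention; the paper's exact dispersion metric and implementation details may differ:

```python
import numpy as np

def idt_fixations(x_deg, y_deg, fs, disp_thresh=1.0, min_dur=0.25):
    """Standard I-DT: slide a window of at least min_dur seconds over
    the gaze trace; while the dispersion stays under disp_thresh
    degrees, grow the window into a single fixation."""
    x_deg, y_deg = np.asarray(x_deg, float), np.asarray(y_deg, float)

    def disp(a, b):
        return (x_deg[a:b].max() - x_deg[a:b].min()
                + y_deg[a:b].max() - y_deg[a:b].min())

    win = int(round(min_dur * fs))
    fixations, i, n = [], 0, len(x_deg)
    while i + win <= n:
        j = i + win
        if disp(i, j) <= disp_thresh:
            # Extend the window until dispersion exceeds the threshold.
            while j < n and disp(i, j + 1) <= disp_thresh:
                j += 1
            fixations.append((i / fs, j / fs))  # (start s, end s)
            i = j
        else:
            i += 1
    return fixations
```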


2017 ◽  
Vol 10 (5) ◽  
Author(s):  
Thorsten Roth ◽  
Martin Weier ◽  
André Hinkenjann ◽  
Yongmin Li ◽  
Philipp Slusallek

This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user’s gaze, exploiting the human visual system’s limitations to increase rendering performance. Foveated rendering has particularly great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), where a high level of immersion, which can only be achieved with high rendering performance and which also helps to reduce nausea, is an important factor. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. This data stems from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. These fixation tasks combined various scenes and fixation modes: besides static fixation targets, moving targets on randomized paths as well as a free focus mode were tested. Using this data, we estimate the precision of the utilized eye tracker and analyze the participants’ accuracy in focusing on the displayed fixation targets, also taking eccentricity-dependent quality ratings into account. Comparing this information with the users’ quality ratings for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy, and quality ratings.
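As an illustration of the kind of accuracy analysis described, the sketch below bins per-sample angular gaze-to-target error by target eccentricity under a small-angle approximation; the bin edges, array layout, and function name are assumptions, not the study's actual pipeline:

```python
import numpy as np

def error_by_eccentricity(gaze_deg, target_deg, ecc_edges=(5, 10, 15, 20)):
    """Mean angular gaze-to-target error, binned by target eccentricity.
    gaze_deg, target_deg: (N, 2) arrays of horizontal/vertical angles in
    degrees; the small-angle approximation treats angular differences as
    Euclidean distances in this 2D angle space."""
    gaze_deg = np.asarray(gaze_deg, float)
    target_deg = np.asarray(target_deg, float)
    error = np.linalg.norm(gaze_deg - target_deg, axis=1)
    ecc = np.linalg.norm(target_deg, axis=1)  # eccentricity from centre
    bins = np.digitize(ecc, ecc_edges)
    lower = (0,) + tuple(ecc_edges)           # lower bound of each bin
    return {f">= {lower[b]} deg": float(error[bins == b].mean())
            for b in np.unique(bins)}
```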


Author(s):  
Seok Lee ◽  
Juyong Park ◽  
Dongkyung Nam

In this article, the authors present an image processing method to reduce three-dimensional (3D) crosstalk in eye-tracking-based 3D displays. Specifically, they considered 3D pixel crosstalk and offset crosstalk and applied different approaches based on their characteristics. For 3D pixel crosstalk, which depends on the viewer’s relative location, they proposed a scheme that weights output pixel values based on the viewer’s eye position; for offset crosstalk, they subtracted the luminance of the crosstalk components according to the display crosstalk level measured in advance. Through simulations and experiments using 3D display prototypes, the authors evaluated the effectiveness of the proposed method.
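A toy sketch of the two corrections just described, assuming a single measured crosstalk ratio and a per-pixel, eye-position-dependent visibility weight; both inputs are illustrative placeholders for quantities the authors derive from eye tracking and display measurements:

```python
import numpy as np

def reduce_crosstalk(left, right, offset_ratio, view_weight):
    """Toy version of the two corrections described above.
    left, right: float images in [0, 1] for the two stereo views.
    offset_ratio: measured fraction of the other view's luminance that
        leaks in regardless of eye position (offset crosstalk).
    view_weight: per-pixel weight in [0, 1] derived from the tracked
        eye position (3D pixel crosstalk); 1 = pixel fully visible."""
    # Subtract the pre-measured offset-crosstalk component.
    left_c = np.clip(left - offset_ratio * right, 0.0, 1.0)
    right_c = np.clip(right - offset_ratio * left, 0.0, 1.0)
    # Weight output pixel values by the viewer-dependent visibility.
    return view_weight * left_c, view_weight * right_c
```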


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


2021 ◽  
Author(s):  
Polona Caserman ◽  
Augusto Garcia-Agundez ◽  
Alvar Gámez Zerban ◽  
Stefan Göbel

Cybersickness (CS) is a term used to refer to symptoms, such as nausea, headache, and dizziness, that users experience during or after virtual reality immersion. CS was initially discovered in flight simulators, but commercial virtual reality (VR) head-mounted displays (HMDs) of the current generation also seem to cause it, albeit with a different profile and severity. The goal of this work is to summarize recent literature on CS with modern HMDs, to determine the specificities and profile of immersive-VR-caused CS, and to provide an outlook for future research areas. A systematic review was performed on the IEEE Xplore, PubMed, ACM, and Scopus databases covering 2013 to 2019, and 49 publications were selected. The review summarizes how different VR HMDs impact CS, how the nature of movement in VR contributes to CS, and how biosensors can be used to detect CS. The results of the meta-analysis show that although current-generation VR HMDs cause significantly less CS (p < 0.001), some symptoms remain just as intense. Further results show that the nature of movement, and in particular sensory mismatch as well as perceived motion, has been the leading cause of CS. We suggest an outlook on future research, including the use of galvanic skin response to evaluate CS in combination with the gold standard (the Simulator Sickness Questionnaire, SSQ), as well as an update of the subjective evaluation scores of the SSQ.
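For reference, the SSQ mentioned as the gold standard is conventionally scored with the subscale conversion weights from Kennedy et al. (1993). A minimal sketch, assuming the raw 0-3 symptom ratings have already been summed per subscale:

```python
def ssq_scores(nausea_sum, oculomotor_sum, disorientation_sum):
    """Simulator Sickness Questionnaire scoring (Kennedy et al., 1993).
    Inputs are the raw sums of the 0-3 symptom ratings assigned to each
    subscale; the standard conversion weights are then applied."""
    return {
        "nausea": nausea_sum * 9.54,
        "oculomotor": oculomotor_sum * 7.58,
        "disorientation": disorientation_sum * 13.92,
        "total": (nausea_sum + oculomotor_sum + disorientation_sum) * 3.74,
    }
```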


2021 ◽  
pp. 1-16
Author(s):  
Leigha A. MacNeill ◽  
Xiaoxue Fu ◽  
Kristin A. Buss ◽  
Koraly Pérez-Edgar

Temperamental behavioral inhibition (BI) is a robust endophenotype for anxiety characterized by increased sensitivity to novelty. Controlling parenting can reinforce children's wariness by rewarding signs of distress. Fine-grained, dynamic measures are needed to better understand both how children perceive their parents' behaviors and the mechanisms supporting the evident relations between parenting and socioemotional functioning. The current study examined dyadic attractor patterns (average mean durations) with state space grids, using children's attention patterns (captured via mobile eye tracking) and parental behavior (positive reinforcement, teaching, directives, intrusion), as functions of child BI and parent anxiety. Forty 5- to 7-year-old children and their primary caregivers completed a set of challenging puzzles, during which each child wore a head-mounted eye tracker. Child BI was positively correlated with the proportion of the parent's time spent teaching. Child age was negatively related, and parent anxiety level positively related, to the strength of the parent-focused/controlling-parenting attractor, and there was a significant interaction between parent anxiety level and child age in predicting that strength. This study is a first step toward examining the co-occurrence of parenting behavior and child attention in the context of child BI and parental anxiety levels.
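The attractor measure used here (average mean duration) has a simple operationalization on a state space grid: total time the dyad spends in a grid cell divided by the number of distinct visits to it. A minimal sketch, with the event-stream format assumed for illustration (GridWare-style tools compute this from coded observational data):

```python
from itertools import groupby

def mean_visit_duration(events, cell):
    """Attractor strength as average mean duration: for a dyadic event
    stream of (child_state, parent_state, duration_s) tuples, return
    the total time spent in `cell` divided by the number of distinct
    visits (consecutive runs of events landing in that cell)."""
    visits, total = 0, 0.0
    for in_cell, run in groupby(events, key=lambda e: (e[0], e[1]) == cell):
        if in_cell:
            visits += 1
            total += sum(e[2] for e in run)
    return total / visits if visits else 0.0
```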


Author(s):  
Ana Guerberof Arenas ◽  
Joss Moorkens ◽  
Sharon O’Brien

This paper presents results on the effect of different translation modalities on users working with the Microsoft Word user interface. An experimental study was set up with 84 Japanese, German, Spanish, and English native speakers working with Microsoft Word in three modalities: the published translated version, a machine-translated (MT) version (with unedited MT strings incorporated into the MS Word interface), and the published English version. Cognitive load and usability were measured with an eye tracker according to the ISO/TR 16982 guidelines (effectiveness, efficiency, and satisfaction), followed by a retrospective think-aloud protocol. The results show that the users’ effectiveness (number of tasks completed) does not differ significantly across translation modalities. However, their efficiency (time for task completion) and self-reported satisfaction are significantly higher when working with the released product as opposed to the unedited MT version, especially for less experienced participants. The eye-tracking results show that users experience a higher cognitive load when working with the MT and the human-translated versions than with the English original. The results suggest that language and translation modality play a significant role in the usability of software products, whether or not users complete the given tasks and even if they are unaware that MT was used to translate the interface.
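The three usability measures map onto a straightforward per-modality summary of a task log. A minimal sketch, with the record layout and field names assumed for illustration:

```python
from statistics import mean

def usability_summary(trials):
    """Summarize the three ISO/TR 16982-style measures per modality.
    `trials`: list of dicts with illustrative keys 'modality',
    'completed' (bool), 'seconds' (float), 'satisfaction' (1-5)."""
    by_modality = {}
    for t in trials:
        by_modality.setdefault(t["modality"], []).append(t)
    summary = {}
    for modality, ts in by_modality.items():
        done = [t for t in ts if t["completed"]]
        summary[modality] = {
            "effectiveness": len(done) / len(ts),  # task completion rate
            "efficiency_s": mean(t["seconds"] for t in done) if done else None,
            "satisfaction": mean(t["satisfaction"] for t in ts),
        }
    return summary
```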

