The Suggestion for the Design of Eye Tracker to Promote the Study on the Gaze Tracking Interface

2017 ◽  
Vol 50 (1) ◽  
pp. 145-152
Author(s):  
Eun Sun Seo

Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1896
Author(s):  
Jeong-Sik Kim ◽  
Won-Been Jeong ◽  
Byeong Hun An ◽  
Seung-Woo Lee

Here, we study a low-power technique for gaze-tracking-based displays called peripheral dimming. In this work, we investigate the threshold levels of the lightness reduction ratio (LRR) at which people notice differences in brightness, as a function of gaze position and image brightness. A psychophysical experiment with five gaze positions and three image brightness conditions is performed, and the threshold levels are estimated. To assess the significance of the differences between threshold levels, the overlap method and Bayesian estimation (BEST) analysis are performed. The results show that the differences between threshold levels across conditions are insignificant. Thus, the proposed technique can operate with a constant LRR level, regardless of gaze position or image brightness, while maintaining perceptual image quality. In addition, the proposed technique reduces the power consumption of virtual reality (VR) displays by 12–14% on average. We believe that peripheral dimming can help reduce the power consumption of the self-luminous displays used in VR headsets with integrated eye trackers.
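The mechanism above can be sketched in a few lines: every pixel outside a foveal window around the gaze point is dimmed by a constant LRR. A minimal illustration, assuming a circular foveal region and a hard dimming boundary (the function name, window shape, and the example ratio are assumptions for the sketch, not values from the paper):

```python
import numpy as np

def peripheral_dimming(lightness, gaze_xy, fovea_radius=60, lrr=0.2):
    """Dim pixels outside the foveal region by a constant lightness
    reduction ratio (LRR). `lightness` is an HxW array in [0, 1];
    `gaze_xy` is the (x, y) gaze position in pixels."""
    h, w = lightness.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # Constant LRR in the periphery, no dimming inside the fovea.
    scale = np.where(dist <= fovea_radius, 1.0, 1.0 - lrr)
    return lightness * scale
```

Because the study finds the noticeability threshold insensitive to gaze position and image brightness, the same `lrr` can be used everywhere, which is what makes the scheme cheap to run.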


Author(s):  
Dan Witzner Hansen ◽  
Fiona Mulvey ◽  
Diako Mardanbegi

Eye and gaze tracking have a long history but there is still plenty of room for further development. In this concluding chapter for Section 6, we consider future perspectives for the development of eye and gaze tracking.


2020 ◽  
Vol 2020 (11) ◽  
pp. 129-1-129-10
Author(s):  
William Andrew Blakey ◽  
Stamos Katsigiannis ◽  
Navid Hajimirza ◽  
Naeem Ramzan

This work examines the different terminology used to define gaze tracking technology and explores the different methodologies used to describe its accuracy. Through a comparative study of gaze tracking technologies, such as infrared- and webcam-based systems, and a variety of accuracy metrics, this work shows how reported accuracy can be misleading. The absence of intersection points between the gaze vectors of the two eyes (also known as convergence points) from these definitions has a large impact on accuracy measures and directly affects the robustness of any accuracy-measuring methodology. Different accuracy metrics and tracking definitions have been collected and tabulated to demonstrate the divide in definitions more formally.
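One metric commonly reported by the systems surveyed here is the angular error between the estimated gaze direction and the true eye-to-target vector. A minimal sketch of that metric (the function name is ours, and the paper's point is precisely that such definitions vary between vendors):

```python
import numpy as np

def angular_error_deg(gaze_vec, target_vec):
    """Angular accuracy in degrees between an estimated gaze direction
    and the vector from the eye to the true target."""
    g = np.asarray(gaze_vec, float)
    t = np.asarray(target_vec, float)
    cos = np.dot(g, t) / (np.linalg.norm(g) * np.linalg.norm(t))
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Note that this definition needs a single eye origin; once the two eyes' rays fail to intersect at a convergence point, the choice of origin (left eye, right eye, cyclopean midpoint) changes the reported number, which is one source of the inconsistency the paper documents.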


10.2196/13810 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13810 ◽  
Author(s):  
Anish Nag ◽  
Nick Haber ◽  
Catalin Voss ◽  
Serena Tamura ◽  
Jena Daniels ◽  
...  

Background Several studies have shown that facial attention differs in children with autism. Measuring eye gaze and emotion recognition in children with autism is challenging, as standard clinical assessments must be delivered in clinical settings by a trained clinician. Wearable technologies may be able to bring eye gaze and emotion recognition into natural social interactions and settings. Objective This study aimed to test: (1) the feasibility of tracking gaze using wearable smart glasses during a facial expression recognition task and (2) the ability of these gaze-tracking data, together with facial expression recognition responses, to distinguish children with autism from neurotypical controls (NCs). Methods We compared the eye gaze and emotion recognition patterns of 16 children with autism spectrum disorder (ASD) and 17 children without ASD via wearable smart glasses fitted with a custom eye tracker. Children identified static facial expressions of images presented on a computer screen along with nonsocial distractors while wearing Google Glass and the eye tracker. Faces were presented in three trials, during one of which children received feedback in the form of the correct classification. We employed hybrid human-labeling and computer vision–enabled methods for pupil tracking and world–gaze translation calibration. We analyzed the impact of gaze and emotion recognition features in a prediction task aiming to distinguish children with ASD from NC participants. Results Gaze and emotion recognition patterns enabled the training of a classifier that distinguished ASD and NC groups. However, it was unable to significantly outperform other classifiers that used only age and gender features, suggesting that further work is necessary to disentangle these effects. 
Conclusions Although wearable smart glasses show promise in identifying subtle differences in gaze tracking and emotion recognition patterns in children with and without ASD, the present form factor and data do not allow for these differences to be reliably exploited by machine learning systems. Resolving these challenges will be an important step toward continuous tracking of the ASD phenotype.
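The prediction task above comes down to comparing a classifier trained on gaze and emotion-recognition features against one trained only on age and gender, under identical evaluation. As a rough, hypothetical stand-in for the paper's classifier, a nearest-centroid rule makes that comparison concrete (the real study's features and model are richer than this):

```python
import numpy as np

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test):
    """Fit a nearest-centroid rule and report held-out accuracy, so a
    gaze+emotion feature set can be compared against an
    age/gender-only baseline on the same train/test split."""
    centroids = {c: X_train[y_train == c].mean(axis=0)
                 for c in np.unique(y_train)}
    classes = sorted(centroids)
    preds = [min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
             for x in X_test]
    return float(np.mean(np.array(preds) == y_test))
```

Running the same function on both feature sets isolates the question the authors raise: whether the gaze features add predictive signal beyond demographics.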


2022 ◽  
Vol 132 ◽  
pp. 01017
Author(s):  
Sangjip Ha ◽  
Eun-ju Yi ◽  
In-jin Yoo ◽  
Do-Hyung Park

This study applies eye tracking to the appearance of a robot, in line with a current trend in social robot design research. We propose a research model covering the entire path from consumers' gaze responses to their perceived beliefs and, further, their attitudes toward social robots. Specifically, the eye-tracking indicators used in this study are Fixation, First Visit, Total Viewed Stay Time, and Number of Revisits, and the areas of interest are the face, eyes, lips, and full body of a social robot. In the first relationship, we examine which elements of the social robot design the consumer's gaze rests on, and how the gaze on each element affects consumer beliefs, considered here as the social robot's emotional expression, humanness, and facial prominence. Second, we explore whether consumer attitudes form through two major channels: one path in which the beliefs formed through gaze influence attitude, and another in which the gaze response directly influences attitude. This study makes a theoretical contribution by analysing the paths of consumer attitude formation from multiple angles, linking gaze-tracking responses with consumer perception. It is also expected to make a practical contribution by suggesting specific design insights that can serve as a reference for designing social robots.
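The four gaze indicators named above can be computed directly from a fixation sequence for each area of interest. A minimal sketch, assuming fixations arrive as (x, y, start_ms, duration_ms) tuples in temporal order (the data layout and function name are illustrative, not from the study):

```python
def aoi_metrics(fixations, aoi):
    """Compute fixation count, time of first visit, total dwell time,
    and number of revisits for one area of interest (AOI).
    `aoi` is (x_min, y_min, x_max, y_max) in the same pixel space."""
    x0, y0, x1, y1 = aoi
    inside = [(x0 <= x <= x1 and y0 <= y <= y1, start, dur)
              for x, y, start, dur in fixations]
    count = sum(1 for hit, _, _ in inside if hit)
    first_visit = next((s for hit, s, _ in inside if hit), None)
    dwell = sum(d for hit, _, d in inside if hit)
    # A visit begins whenever the gaze enters the AOI after being outside;
    # every visit after the first counts as a revisit.
    visits, prev = 0, False
    for hit, _, _ in inside:
        if hit and not prev:
            visits += 1
        prev = hit
    return {"fixation_count": count, "first_visit_ms": first_visit,
            "total_dwell_ms": dwell, "revisits": max(0, visits - 1)}
```

Applying this once per AOI (face, eyes, lips, full body) yields the gaze-response variables that feed the first stage of the research model.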


2019 ◽  
Vol 2 (2) ◽  
pp. 111-119 ◽  
Author(s):  
Ryo Asaoka ◽  
Yuri Fujino ◽  
Shuichiro Aoki ◽  
Masato Matsuura ◽  
Hiroshi Murata

2020 ◽  
Vol 19 (1) ◽  
Author(s):  
Andrzej Czyżewski ◽  
Adam Kurowski ◽  
Piotr Odya ◽  
Piotr Szczuko

Abstract Background A lack of communication with people suffering from acquired brain injuries may lead to erroneous conclusions regarding the diagnosis or therapy of patients. Information technology and neuroscience make it possible to enhance the diagnostic and rehabilitation process of patients with traumatic brain injury or post-hypoxia. In this paper, we present a new method for evaluating the possibility of communication with such patients and for assessing their state, employing future-generation computers extended with advanced human–machine interfaces. Methods First, the hearing abilities of 33 participants in a state of coma were evaluated using auditory brainstem response (ABR) measurements. Next, a series of interactive computer-based exercise sessions were performed with the therapist's assistance. Participants' actions were monitored with an eye-gaze tracking (EGT) device and an electroencephalography (EEG) monitoring headset. The gathered data were processed using data clustering techniques. Results Analysis showed that the gathered data and the computer-based methods developed for their processing are suitable for evaluating the participants' responses to stimuli. Parameters obtained from EEG signals and eye-tracker data were correlated with Glasgow Coma Scale (GCS) scores and enabled separation between GCS-related classes. The results show that specific consciousness-related states are discoverable in the EEG and eye-tracker signals; we observe them as outliers in the decision space generated by the autoencoder. Accordingly, the numerical variable that separates groups of participants with the same GCS score is the variance of the distances of points from the cluster center generated by the autoencoder. In most cases, the higher the GCS score, the greater the variance. The results proved to be statistically significant in this context. 
Conclusions The results indicate that the proposed method may help to assess the consciousness state of participants in an objective manner.
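The separating statistic described above, the variance of point-to-center distances in the latent space, is simple to compute once the autoencoder has produced the points. A minimal sketch of just that statistic (the autoencoder itself is omitted; the inputs here stand in for latent-space coordinates):

```python
import numpy as np

def center_distance_variance(points):
    """Variance of each point's distance from the cluster center:
    the statistic used to separate GCS-related groups in the
    autoencoder's latent space."""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)
    dists = np.linalg.norm(pts - center, axis=1)
    return float(dists.var())
```

Under the paper's finding, sessions from participants with higher GCS scores would mostly produce point clouds with larger values of this statistic, driven by the outlying consciousness-related states.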


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2894 ◽  
Author(s):  
Zhuo Ma ◽  
Xinglong Wang ◽  
Ruijie Ma ◽  
Zhuzhu Wang ◽  
Jianfeng Ma

We introduce a two-stream model that uses reflexive eye movements for smart mobile device authentication. Our model is based on two pre-trained neural networks, iTracker and PredNet, targeting two independent tasks: (i) gaze tracking and (ii) future frame prediction. We design a procedure that randomly generates a visual stimulus on the screen of the mobile device while the front camera simultaneously captures the user's head motions as they watch it. iTracker then calculates the gaze-coordinate error, which is treated as a static feature. To compensate for the imprecise gaze coordinates caused by the low resolution of the front camera, we further use PredNet to extract dynamic features between consecutive frames. To resist traditional attacks (shoulder surfing and impersonation) during mobile device authentication, we combine the static and dynamic features to train a two-class support vector machine (SVM) classifier. Experimental results show that the classifier achieves 98.6% accuracy in authenticating the identity of mobile device users.
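The fusion step, concatenating the static gaze-error feature with dynamic inter-frame features before the SVM, can be sketched as follows. The feature summaries and names here are illustrative; the paper derives the static part from iTracker and the dynamic part from PredNet outputs:

```python
import numpy as np

def fuse_features(gaze_error, frame_feats):
    """Concatenate a static feature (gaze-coordinate error) with
    dynamic features summarizing change between consecutive frames,
    producing one vector for a downstream two-class SVM."""
    static = np.atleast_1d(np.asarray(gaze_error, float))
    # Per-dimension differences between consecutive frame features.
    diffs = np.diff(np.asarray(frame_feats, float), axis=0)
    # Summarize inter-frame dynamics by mean and std of those differences.
    dynamic = np.concatenate([diffs.mean(axis=0), diffs.std(axis=0)])
    return np.concatenate([static, dynamic])
```

The intuition carried over from the paper is that the dynamic half captures how the eyes move in response to the random stimulus, which a shoulder-surfing or impersonation attacker cannot replay.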


2006 ◽  
Vol 5 (3) ◽  
pp. 41-45 ◽  
Author(s):  
Yong-Moo Kwon ◽  
Kyeong-Won Jeon ◽  
Jeongseok Ki ◽  
Qonita M. Shahab ◽  
Sangwoo Jo ◽  
...  

Several studies have addressed 2D gaze tracking on 2D screens for human–computer interaction. However, gaze-based interaction with stereo images or 3D content has not been reported. Stereo display techniques are now emerging for immersive services, and 3D interaction techniques are needed in 3D content service environments. This paper presents a 3D gaze estimation technique and its application to gaze-based interaction on a parallax barrier stereo display.
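Estimating a 3D point of regard from two eyes typically reduces to triangulating the convergence point of the left and right gaze rays. A minimal sketch of that geometry (midpoint of closest approach between two rays; a generic construction, not the paper's parallax-barrier-specific calibration):

```python
import numpy as np

def gaze_convergence_point(o_l, d_l, o_r, d_r):
    """Estimate the 3D point of regard as the midpoint of closest
    approach between the left and right gaze rays, given each eye's
    origin `o` and direction `d` (directions need not be unit length)."""
    o_l, d_l, o_r, d_r = (np.asarray(v, float) for v in (o_l, d_l, o_r, d_r))
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        # Near-parallel rays: no well-defined convergence point;
        # fall back to projecting the left origin onto the right ray.
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    # Midpoint between the closest points on the two rays.
    return (o_l + s * d_l + o_r + t * d_r) / 2.0
```

When eye-tracker noise keeps the two rays from intersecting exactly, the midpoint gives a stable depth estimate, which is what makes gaze a usable 3D pointing cue on a stereo display.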

