gaze tracking
Recently Published Documents


TOTAL DOCUMENTS

673
(FIVE YEARS 173)

H-INDEX

32
(FIVE YEARS 4)

2022 ◽  
Vol 24 (3) ◽  
pp. 1-18
Author(s):  
Neeru Dubey ◽  
Amit Arjun Verma ◽  
Simran Setia ◽  
S. R. S. Iyengar

The size of Wikipedia grows exponentially every year, due to which users face the problem of information overload. We propose a remedy to this problem by developing a recommendation system for Wikipedia articles. The proposed technique automatically generates a personalized synopsis of the article that a user intends to read next. We develop a tool, called PerSummRe, which learns the reading preferences of a user through a vision-based analysis of his/her past reads. We use an ensemble non-invasive eye gaze tracking technique to analyze the user's reading pattern. The tool performs user profiling and generates a recommended personalized summary of a yet-unread Wikipedia article for the user. Experimental results showcase the efficiency of the recommendation technique.


2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Jan Cimbalnik ◽  
Jaromir Dolezal ◽  
Çağdaş Topçu ◽  
Michal Lech ◽  
Victoria S. Marks ◽  
...  

Abstract
Data comprise intracranial EEG (iEEG) brain activity represented by stereo EEG (sEEG) signals, recorded from over 100 electrode channels implanted in any one patient across various brain regions. The iEEG signals were recorded in epilepsy patients (N = 10) undergoing invasive monitoring and localization of seizures while they performed a battery of four memory tasks lasting approximately 1 hour in total. Gaze position on the task computer screen, together with estimated pupil size, was also recorded along with behavioral performance. Each dataset comes from one patient, with anatomical localization of each electrode contact. Metadata contain labels for the recording channels, with behavioral events marked from all tasks, including the timing of correct and incorrect vocalization of the remembered stimuli. The iEEG and pupillometric signals are saved in the BIDS data structure to facilitate efficient data sharing and analysis.
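As a rough illustration of what the abstract's BIDS organization implies (the specific subject, session, and task labels below are assumptions for illustration, not taken from the paper), an iEEG dataset of this kind is typically laid out as:

```
sub-01/
  ses-01/
    ieeg/
      sub-01_ses-01_task-memory_run-01_ieeg.edf        # sEEG recordings
      sub-01_ses-01_task-memory_run-01_channels.tsv    # recording-channel labels
      sub-01_ses-01_task-memory_run-01_events.tsv      # behavioral events, vocalization timing
      sub-01_ses-01_electrodes.tsv                     # anatomical contact positions
```

This fixed directory and filename convention is what makes such datasets easy to share and load with standard BIDS tooling.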


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 545
Author(s):  
Bor-Jiunn Hwang ◽  
Hui-Hui Chen ◽  
Chaur-Heh Hsieh ◽  
Deng-Yu Huang

Based on experimental observations, there is a correlation between time and consecutive gaze positions in visual behaviors. Previous studies on gaze point estimation usually use images as the input for model trainings without taking into account the sequence relationship between image data. In addition to the spatial features, the temporal features are considered to improve the accuracy in this paper by using videos instead of images as the input data. To be able to capture spatial and temporal features at the same time, the convolutional neural network (CNN) and long short-term memory (LSTM) network are introduced to build a training model. In this way, CNN is used to extract the spatial features, and LSTM correlates temporal features. This paper presents a CNN Concatenating LSTM network (CCLN) that concatenates spatial and temporal features to improve the performance of gaze estimation in the case of time-series videos as the input training data. In addition, the proposed model can be optimized by exploring the numbers of LSTM layers, the influence of batch normalization (BN) and global average pooling layer (GAP) on CCLN. It is generally believed that larger amounts of training data will lead to better models. To provide data for training and prediction, we propose a method for constructing datasets of video for gaze point estimation. The issues are studied, including the effectiveness of different commonly used general models and the impact of transfer learning. Through exhaustive evaluation, it has been proved that the proposed method achieves a better prediction accuracy than the existing CNN-based methods. Finally, 93.1% of the best model and 92.6% of the general model MobileNet are obtained.
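The CNN-then-LSTM data flow described above can be sketched in miniature. The following is a toy numpy illustration of the idea (per-frame spatial features extracted by convolution and global average pooling, then integrated over time by an LSTM cell), not the authors' actual CCLN architecture; all layer sizes and random weights are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_gap(frame, kernels):
    """Toy spatial stage: valid 3x3 convolution per kernel, then
    global average pooling, yielding one feature per kernel."""
    H, W = frame.shape
    feats = []
    for k in kernels:  # k has shape (3, 3)
        out = sum(k[i, j] * frame[i:H - 2 + i, j:W - 2 + j]
                  for i in range(3) for j in range(3))
        feats.append(out.mean())  # global average pooling
    return np.array(feats)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gates stacked as [input, forget, output, cell]."""
    d = h.size
    z = W @ x + U @ h + b
    i, f, o = (1 / (1 + np.exp(-z[k * d:(k + 1) * d])) for k in range(3))
    g = np.tanh(z[3 * d:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

T, H, W_img, n_feat, d = 5, 16, 16, 4, 8  # frames, height, width, kernels, hidden size
video = rng.standard_normal((T, H, W_img))
kernels = rng.standard_normal((n_feat, 3, 3))
Wx = rng.standard_normal((4 * d, n_feat)) * 0.1
Uh = rng.standard_normal((4 * d, d)) * 0.1
b = np.zeros(4 * d)
W_out = rng.standard_normal((2, d)) * 0.1  # regression head -> (x, y) gaze point

h, c = np.zeros(d), np.zeros(d)
for t in range(T):
    x = conv2d_gap(video[t], kernels)     # spatial features for frame t
    h, c = lstm_step(x, h, c, Wx, Uh, b)  # temporal integration across frames
gaze_xy = W_out @ h                        # predicted 2D gaze point
print(gaze_xy.shape)  # (2,)
```

The key design point the abstract makes is visible here: the CNN stage sees one frame at a time, while the LSTM state carries information between consecutive frames.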


2022 ◽  
Vol 34 (1) ◽  
pp. 36-39
Author(s):  
Risa Suzuki ◽  
Yasunari Kurita

2022 ◽  
Vol 132 ◽  
pp. 01017
Author(s):  
Sangjip Ha ◽  
Eun-ju Yi ◽  
In-jin Yoo ◽  
Do-Hyung Park

This study applies eye tracking to the appearance of a robot, one of the trends in social robot design research. We suggest a research model covering the entire path from the consumer's gaze response to perceived consumer beliefs and, further, attitudes toward social robots. Specifically, the eye tracking indicators used in this study are Fixation, First Visit, Total Viewed Stay Time, and Number of Revisits, and the Areas of Interest selected are the face, eyes, lips, and full body of a social robot. In the first relationship, we examine which element of the social robot design the consumer's gaze stays on, and how the gaze on each element affects consumer beliefs. The consumer beliefs considered are the social robot's emotional expression, humanness, and facial prominence. Second, we explore whether consumer attitudes can form through two major channels: one path in which the consumer beliefs formed through the gaze influence attitude, and another in which the gaze response directly influences attitude. This study makes a theoretical contribution by analyzing the formation of consumer attitudes from multiple angles, linking the gaze tracking response and consumer perception. In addition, it is expected to make practical contributions by suggesting specific design insights that can serve as a reference for designing social robots.
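The gaze indicators named above (dwell time, first visit, revisits) are standard per-AOI metrics and can be computed directly from a fixation log. A minimal sketch, assuming rectangular AOIs and a simple list of timed fixations (both made up here for illustration):

```python
# Compute per-AOI metrics from a fixation log: total dwell time,
# time of first visit, and number of revisits (re-entries after leaving).
aois = {                     # rectangular AOIs: (x0, y0, x1, y1) in pixels
    "face": (100, 50, 300, 250),
    "lips": (170, 180, 230, 220),
}
# fixations: (timestamp_ms, x, y, duration_ms)
fixations = [
    (0,   150, 100, 200),   # inside "face"
    (200, 200, 200, 150),   # inside "face" and "lips"
    (350, 500, 400, 100),   # outside both
    (450, 160, 120, 250),   # back inside "face" -> one revisit
]

def aoi_metrics(fixations, aois):
    metrics = {name: {"dwell_ms": 0, "first_visit_ms": None, "revisits": 0}
               for name in aois}
    prev_inside = {name: False for name in aois}
    for t, x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            inside = x0 <= x <= x1 and y0 <= y <= y1
            m = metrics[name]
            if inside:
                m["dwell_ms"] += dur
                if m["first_visit_ms"] is None:
                    m["first_visit_ms"] = t          # first visit
                elif not prev_inside[name]:
                    m["revisits"] += 1               # re-entry after leaving
            prev_inside[name] = inside
    return metrics

m = aoi_metrics(fixations, aois)
print(m["face"])  # dwell 600 ms, first visit at 0 ms, 1 revisit
```

Real eye-tracking software computes these from raw samples after fixation detection, but the per-AOI bookkeeping is essentially this.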


2021 ◽  
Vol 30 (6) ◽  
pp. 829-836
Author(s):  
Jong-Geun Kim ◽  
Jun-Young Ok ◽  
Jae-Ho Han ◽  
Seok-Jae Lee
Keyword(s):  

Author(s):  
Sinh Huynh ◽  
Rajesh Krishna Balan ◽  
JeongGil Ko

Gaze tracking is a key building block used in many mobile applications, including entertainment, personal productivity, accessibility, medical diagnosis, and visual attention monitoring. In this paper, we present iMon, an appearance-based gaze tracking system that is both designed for use on mobile phones and significantly more accurate than prior state-of-the-art solutions. iMon achieves this by comprehensively considering the gaze estimation pipeline and overcoming three different sources of error. First, instead of assuming that the user's gaze is fixed to a single 2D coordinate, we construct each gaze label as a probabilistic 2D heatmap, to overcome errors caused by microsaccade eye motions that make the exact gaze point uncertain. Second, we design an image enhancement model to refine visual details and remove motion blur from input eye images. Finally, we apply a calibration scheme to correct for differences between the perceived and actual gaze points caused by individual differences in the Kappa angle. With all these improvements, iMon achieves a person-independent per-frame tracking error of 1.49 cm (on smartphones) and 1.94 cm (on tablets) when tested with the GazeCapture dataset, and 2.01 cm with the TabletGaze dataset. This outperforms the previous state-of-the-art solutions by ~22% to 28%. By averaging multiple per-frame estimations that belong to the same fixation point and applying personal calibration, the tracking error is further reduced to 1.11 cm (smartphones) and 1.59 cm (tablets). Finally, we built implementations that run on an iPhone 12 Pro and show that our mobile implementation of iMon can run at up to 60 frames per second, making gaze-based control of applications possible.
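A probabilistic 2D heatmap label of the kind described in the first step is commonly built by placing an isotropic Gaussian at the nominal gaze point. A minimal numpy sketch (the grid size and sigma here are illustrative assumptions, not iMon's actual parameters):

```python
import numpy as np

def gaze_heatmap(gx, gy, height, width, sigma=2.0):
    """Probabilistic 2D gaze label: a normalized Gaussian centered on
    the nominal gaze point (gx, gy), expressing microsaccade uncertainty
    instead of a single hard coordinate."""
    ys, xs = np.mgrid[0:height, 0:width]
    hm = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    return hm / hm.sum()  # sums to 1, i.e. a probability map

hm = gaze_heatmap(12.0, 5.0, height=16, width=24, sigma=2.0)
peak = tuple(int(i) for i in np.unravel_index(hm.argmax(), hm.shape))
print(peak)  # (5, 12) -- the peak sits at the labeled gaze point (row, col)
```

Training against such a soft target penalizes predictions near the true point less than distant ones, which is the stated motivation for replacing the single 2D coordinate.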


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3165
Author(s):  
Ibrahim Shehi Shehu ◽  
Yafei Wang ◽  
Athuman Mohamed Athuman ◽  
Xianping Fu

Several decades of eye-related research have shown how valuable eye gaze data are for applications that are essential to human daily life. Eye gaze data in a broad sense have been used in research and systems for eye movements, eye tracking, and eye gaze tracking. Since the early 2000s, eye gaze tracking systems have emerged as interactive gaze-based systems that can be remotely deployed and operated, known as remote eye gaze tracking (REGT) systems. Estimating the drop point of visual attention, known as the point of gaze (PoG), and the direction of visual attention, known as the line of sight (LoS), are the central tasks of REGT systems. In this paper, we present a comparative evaluation of REGT systems intended for the PoG and LoS estimation tasks, covering past to recent progress. Our literature evaluation presents promising insights on key concepts and changes recorded over time in the hardware setup, software process, application, and deployment of REGT systems. In addition, we present current issues in REGT research for future attempts.
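The relation between the two estimation targets is geometric: the PoG is where the line of sight intersects the display. A small sketch under simplifying assumptions (a flat screen lying in the plane z = 0, with eye position and gaze direction expressed in the same coordinate frame, in cm):

```python
import numpy as np

def point_of_gaze(eye_pos, gaze_dir):
    """Intersect the line of sight (eye_pos + t * gaze_dir) with the
    screen plane z = 0 and return the 2D point of gaze (x, y)."""
    eye_pos = np.asarray(eye_pos, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    if abs(gaze_dir[2]) < 1e-12:
        raise ValueError("line of sight is parallel to the screen plane")
    t = -eye_pos[2] / gaze_dir[2]  # solve eye_z + t * dir_z = 0
    if t < 0:
        raise ValueError("screen is behind the eye")
    return (eye_pos + t * gaze_dir)[:2]

# Eye 60 cm in front of the screen, looking slightly right and down:
pog = point_of_gaze(eye_pos=[0.0, 0.0, 60.0], gaze_dir=[0.1, -0.2, -1.0])
print(pog)  # approximately (6, -12)
```

Real REGT systems must additionally estimate the eye position and LoS from camera images and calibrate the screen pose, but the final PoG computation reduces to this ray-plane intersection.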


2021 ◽  
Vol 2120 (1) ◽  
pp. 012030
Author(s):  
J K Tan ◽  
W J Chew ◽  
S K Phang

Abstract
The field of Human-Computer Interaction (HCI) has developed tremendously over the past decade. Smartphones and modern computers, which use touch, voice, and typing as means of input, are already the norm in society. To further increase the variety of interaction, the human eyes are a good candidate for another form of HCI. The amount of information the human eyes carry is extremely useful; hence, various methods and algorithms for eye gaze tracking have been implemented in multiple sectors. However, some eye-tracking methods require infrared rays to be projected into the eye of the user, which could potentially cause enzyme denaturation when the eye is subjected to those rays under extreme exposure. Therefore, to avoid the potential harm of eye-tracking methods that use infrared rays, this paper proposes an image-based eye tracking system using the Viola-Jones algorithm and the Circular Hough Transform (CHT) algorithm. The proposed method uses visible light instead of infrared rays to control the mouse pointer with the eye gaze of the user. This research aims to enable people with hand disabilities to interact with computers using their eye gaze.
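The CHT stage of such a pipeline can be illustrated in miniature: edge pixels vote for candidate circle centers at a given radius, and the accumulator peak gives the pupil center. A toy numpy sketch on a synthetic ring of edge points (fixed, known radius for brevity; a real detector would also scan over a range of radii and take its edge points from an edge map of the eye region):

```python
import numpy as np

def hough_circle_center(edge_points, shape, radius, n_angles=48):
    """Circular Hough Transform with a known radius: every edge pixel
    votes for all centers lying `radius` away from it; the accumulator
    peak is the best-supported circle center (cx, cy)."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (0 <= cx) & (cx < shape[1]) & (0 <= cy) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # cast votes
    cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return int(cx), int(cy)

# Synthetic "pupil": edge points on a circle of radius 10 centered at (30, 20).
angles = np.linspace(0, 2 * np.pi, 48, endpoint=False)
edges = [(30 + 10 * np.cos(a), 20 + 10 * np.sin(a)) for a in angles]
center = hough_circle_center(edges, shape=(50, 60), radius=10)
print(center)  # (30, 20)
```

In the paper's pipeline, Viola-Jones would first localize the eye region so that the CHT only has to search a small window, which keeps this voting step cheap.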

