eye movement tracking
Recently Published Documents

TOTAL DOCUMENTS: 72 (FIVE YEARS: 21)
H-INDEX: 9 (FIVE YEARS: 1)

2021 · Vol 3 (4) · pp. 336-346
Author(s): Judy Simon

Human-Computer Interface (HCI) requires proper coordination and definition of the features that serve as input to the system. Here, the parameters of saccadic and smooth eye movement tracking are observed and compared for HCI. The methodology is implemented with Pupil, OpenCV and Microsoft Visual Studio for image processing, to identify the position of the pupil and observe the direction of pupil movement in real time. Once the direction is identified, the cursor position moving towards the target can be determined accurately. To quantify the differences between the step-change tracking of saccadic eye movement and the incremental tracking of smooth eye movement, tests were conducted on two users. With incremental tracking of smooth eye movement, an accuracy of 90% is achieved. Incremental tracking requires an average time of 7.21 s, whereas step-change tracking takes just 2.82 s. Based on these observations, smooth eye movement tracking is over four times more accurate than saccadic eye movement tracking. Smooth eye tracking was therefore found to be more accurate, precise, reliable, and predictable for driving the mouse cursor than saccadic eye movement tracking.
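The core image-processing step described above (locating the pupil and reading off the direction of its movement between frames) can be sketched in plain Python. This is a hypothetical illustration only: the paper's actual pipeline uses Pupil, OpenCV and Microsoft Visual Studio, and the threshold value and toy frames below are invented.

```python
# Hypothetical sketch: locate the pupil as the centroid of dark pixels
# in a grayscale frame, then classify the direction of pupil movement
# between two frames. The threshold and toy frames are invented.

def pupil_center(frame, threshold=50):
    """Return the (row, col) centroid of pixels darker than `threshold`."""
    rows, cols, n = 0.0, 0.0, 0
    for r, line in enumerate(frame):
        for c, value in enumerate(line):
            if value < threshold:
                rows += r
                cols += c
                n += 1
    if n == 0:
        return None  # no pupil candidate found
    return rows / n, cols / n

def movement_direction(prev, curr):
    """Classify the dominant direction of pupil movement between frames."""
    dr, dc = curr[0] - prev[0], curr[1] - prev[1]
    if abs(dc) >= abs(dr):
        return "right" if dc > 0 else "left"
    return "down" if dr > 0 else "up"

# Toy 5x5 "frames": 0 = dark pupil pixel, 255 = bright background.
frame_a = [[255] * 5 for _ in range(5)]
frame_a[2][1] = 0
frame_b = [[255] * 5 for _ in range(5)]
frame_b[2][3] = 0

a, b = pupil_center(frame_a), pupil_center(frame_b)
print(movement_direction(a, b))  # pupil moved right
```

In a real system the frames would come from a camera stream, and the thresholding and centroid step would be replaced by OpenCV routines operating on each captured frame.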


2021 · Vol 2021 · pp. 1-9
Author(s): Yushou Tang, Jianhuan Su

This paper uses adaptive BP (backpropagation) neural networks to conduct an in-depth examination of eye movements during reading and to predict reading effects. An important component of any visual tracking system is the correct detection of eye movements from actual data or real-world datasets. We propose an adaptive BP neural network-based recognition algorithm that identifies three typical types of eye movement, namely gaze, leap, and smooth navigation. The study assesses the BP neural network algorithm using eye movement tracking sensors: four types of eye movement signals were acquired from 10 subjects and given preliminary processing. The experimental results demonstrate that the recognition rate of the proposed algorithm reaches up to 97%, which is superior to the commonly used CNN algorithm.
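A BP network of the kind named above is a feed-forward network trained by backpropagating the output error. As a hedged sketch (not the paper's adaptive variant, and with invented toy features and labels), a minimal one-hidden-layer BP network separating fixation-like from saccade-like segments might look like this:

```python
import math
import random

# Hypothetical sketch: a tiny backpropagation (BP) network classifying
# an eye-movement segment as fixation (0) or saccade (1) from two
# invented toy features (mean velocity, dispersion). The paper's
# adaptive BP network and its real sensor features are not reproduced.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBPNet:
    """One hidden layer, sigmoid activations, plain gradient descent."""

    def __init__(self, n_in=2, n_hidden=3, lr=0.5):
        self.lr = lr
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.y

    def train(self, x, target):
        y = self.forward(x)
        d_y = (y - target) * y * (1 - y)      # output-layer delta
        for j, h in enumerate(self.h):        # backpropagate to hidden layer
            d_h = d_y * self.w2[j] * h * (1 - h)
            self.w2[j] -= self.lr * d_y * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * d_h * xi
            self.b1[j] -= self.lr * d_h
        self.b2 -= self.lr * d_y

# Invented data: [mean velocity, dispersion] -> 0 = fixation, 1 = saccade.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1)]
net = TinyBPNet()
for _ in range(3000):
    for x, t in data:
        net.train(x, t)
# After training, the output is low for fixation-like input and high
# for saccade-like input.
```

The paper's adaptive variant additionally adjusts training parameters (such as the learning rate) during training; this fixed-rate version only shows the basic BP mechanism.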


Author(s): Xiaowei Wang, Xiaoxu Geng, Jinke Wang, Shinichi Tamura

Eye movement analysis provides a new way of screening, quantifying, and assessing disease. In order to track and analyze eye movement scanpaths under different conditions, this paper proposes a Gaussian mixture Hidden Markov Model (G-HMM) of the eye movement scanpath during saccades, combined with a Time-Shifting Segmentation (TSS) method for model optimization; Linear Discriminant Analysis (LDA) is then used to perform the recognition and evaluation tasks on the multi-dimensional features. In the experiments, a dataset of eye-movement sequences over 800 real scene images was used. The results show that the G-HMM method has high specificity for free-search tasks and high sensitivity for prompted object-search tasks, while TSS strengthens the differences between eye movement characteristics, which benefits eye movement pattern recognition, especially for search tasks.
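The modeling idea (an HMM whose hidden states are eye-movement regimes and whose emissions are Gaussian mixtures over observed features) can be illustrated with the forward algorithm. All parameter values below are invented for illustration; the paper's G-HMM is trained on real scanpath features and uses LDA on top.

```python
import math

# Hypothetical sketch: log-likelihood of a 1-D gaze-velocity sequence
# under a 2-state HMM ("fixation" / "saccade") with Gaussian-mixture
# emissions, computed with the scaled forward algorithm. All numbers
# (transition, start, and mixture parameters) are invented.

def gmm_pdf(x, mixture):
    """Density of x under a mixture given as a list of (weight, mean, std)."""
    return sum(w * math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
               for w, m, s in mixture)

start = [0.9, 0.1]           # P(first state): mostly fixation
trans = [[0.95, 0.05],       # fixation -> fixation / saccade
         [0.30, 0.70]]       # saccade  -> fixation / saccade
emit = [                     # one Gaussian mixture per state (velocity, deg/s)
    [(0.7, 5.0, 3.0), (0.3, 15.0, 8.0)],       # fixation: slow, some drift
    [(0.6, 150.0, 50.0), (0.4, 300.0, 80.0)],  # saccade: fast
]

def forward_loglik(obs):
    """Forward algorithm with per-step rescaling; returns log P(obs | model)."""
    alpha = [start[i] * gmm_pdf(obs[0], emit[i]) for i in range(2)]
    s = sum(alpha)
    loglik = math.log(s)
    alpha = [a / s for a in alpha]
    for x in obs[1:]:
        alpha = [gmm_pdf(x, emit[j]) * sum(alpha[i] * trans[i][j] for i in range(2))
                 for j in range(2)]
        s = sum(alpha)
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
    return loglik
```

A slow, fixation-like velocity sequence scores a higher log-likelihood under this model than an all-saccade sequence, which is the kind of discrimination the trained G-HMM exploits for scanpath classification.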


2021 · Vol 1802 (4) · pp. 042066
Author(s): Zhaowei Li, Peiyuan Guo, Chen Song

2021 · Vol 58 (2) · pp. 103415
Author(s): Jia-Qiong Xie, Detlef H. Rost, Fu-Xing Wang, Jin-Liang Wang, Rebecca L. Monk

2021
Author(s): Isabelle Bülthoff, Mintao Zhao

Many face recognition studies use average faces as a theoretical concept (e.g., the face norm) and/or as a research tool (e.g., for face morphing). Nonetheless, how the averaging process (using an increasing number of faces to create an average face) changes the resulting averaged faces, and how our visual system perceives these faces, remain unclear. Here we aimed to address these questions by combining 3D face averaging, eye movement tracking, and the computation of image-based face similarity. Our results show that average faces created from an increasing number of "parent" faces become increasingly similar to each other. Participants' ability to discriminate between two average faces dropped from near-ceiling level (when comparing two average faces created from two parent faces each) to chance level (when the faces to compare were created from 80 faces each). The non-linear relation between face similarity and participants' face discrimination performance was captured nearly perfectly by an exponential function, suggesting that the relationship between physical and perceived face similarity follows a Fechner law. Eye tracking revealed that participants made more fixations onto the faces as the comparison task became more challenging. Nonetheless, the distribution of fixations across core facial features (eyes, nose, mouth, and the center area of a face) remained unchanged, irrespective of task difficulty. These results not only provide a long-needed benchmark for the theoretical characterization and empirical use of average faces, but also set new constraints on the understanding of how faces are encoded, stored, categorized, and identified using a modernized face space metaphor.
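The exponential relation reported above (discrimination accuracy falling from near ceiling with two parent faces to chance with 80) can be sketched as a simple model fit. The functional form, the chance/ceiling constants, and the data points below are all invented for illustration; they are not the study's measurements.

```python
import math

# Hypothetical sketch of the reported exponential relation between the
# number of "parent" faces and discrimination accuracy:
#   acc(n) = chance + (ceiling - chance) * exp(-k * (n - 2))
# Constants and observations below are invented, not the study's data.

CHANCE, CEILING = 0.5, 0.98

def predicted_accuracy(n_parents, k):
    return CHANCE + (CEILING - CHANCE) * math.exp(-k * (n_parents - 2))

def fit_k(data):
    """Grid-search the decay rate k minimizing squared error
    over (n_parents, accuracy) pairs."""
    best_k, best_err = None, float("inf")
    for i in range(1, 500):
        k = i / 1000.0
        err = sum((predicted_accuracy(n, k) - a) ** 2 for n, a in data)
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# Invented observations: accuracy vs. number of parent faces.
observed = [(2, 0.97), (4, 0.88), (8, 0.72), (16, 0.58), (40, 0.51), (80, 0.50)]
k = fit_k(observed)
```

The fitted curve starts at ceiling for two parent faces and decays toward chance, mirroring the drop in discrimination performance the study describes as face averages converge.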


2020 · pp. 1-9
Author(s): Aleks Stolicyn, J. Douglas Steele, Peggy Seriès

Abstract

Background: Depression is a challenge to diagnose reliably, and the current gold standard for trials under DSM-5 has been agreement between two or more medical specialists. Research studies aiming to objectively predict depression have typically used brain scanning. Less expensive methods from cognitive neuroscience may allow quicker and more reliable diagnoses, and contribute to reducing the costs of managing the condition. In the current study we aimed to develop a novel inexpensive system for detecting elevated symptoms of depression based on tracking face and eye movements during the performance of cognitive tasks.

Methods: In total, 75 participants performed two novel cognitive tasks with verbal affective distraction elements while their face and eye movements were recorded using inexpensive cameras. Data from 48 participants (mean age 25.5 years, standard deviation 6.1 years, 25 with elevated symptoms of depression) passed quality control and were included in a case-control classification analysis with machine learning.

Results: Classification accuracy using cross-validation (within-study replication) reached 79% (sensitivity 76%, specificity 82%) when face and eye movement measures were combined. Symptomatic participants were characterised by less intense mouth and eyelid movements during different stages of the two tasks, and by differences in the frequencies and durations of fixations on affectively salient distraction words.

Conclusions: Elevated symptoms of depression can be detected with face and eye movement tracking during cognitive task performance, with close to clinically relevant accuracy (~80%). Future studies should validate these results in larger samples and in clinical populations.
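The evaluation scheme described in the Methods and Results (case-control classification scored by cross-validation) can be sketched in a few lines. The classifier, features, and data below are invented stand-ins: a nearest-centroid rule on two toy features, scored with leave-one-out cross-validation, only illustrating the evaluation protocol, not the study's model.

```python
# Hypothetical sketch of case-control classification with
# cross-validation. A nearest-centroid classifier (a stand-in for the
# study's machine-learning model) is scored with leave-one-out CV on
# invented two-feature samples (e.g., mouth-movement intensity and
# fixation duration on distraction words).

def nearest_centroid_predict(train, x):
    """train: list of (features, label); return label of nearest class centroid."""
    by_class = {}
    for feats, label in train:
        by_class.setdefault(label, []).append(feats)
    best_label, best_dist = None, float("inf")
    for label, rows in by_class.items():
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        dist = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def loo_accuracy(samples):
    """Leave-one-out cross-validation: hold out each sample in turn."""
    hits = 0
    for i, (feats, label) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        hits += nearest_centroid_predict(train, feats) == label
    return hits / len(samples)

# Invented data: 0 = control, 1 = elevated depression symptoms.
samples = [([0.9, 0.2], 0), ([0.8, 0.3], 0), ([0.85, 0.25], 0),
           ([0.3, 0.8], 1), ([0.2, 0.9], 1), ([0.25, 0.85], 1)]
acc = loo_accuracy(samples)
```

Holding out each participant before scoring, as here, is what makes the reported 79% a within-study replication estimate rather than a training-set accuracy.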

