User Ranking by Monitoring Eye Gaze Using Eye Tracker

Author(s):  
Chandan Singh ◽  
Dhananjay Yadav
Keyword(s):  
Eye Gaze

Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1051
Author(s):  
Si Jung Kim ◽  
Teemu H. Laine ◽  
Hae Jung Suk

Presence refers to the emotional state of users in which their motivation for thinking and acting arises from their perception of the entities in a virtual world. Users' immersion levels can vary as they interact with different media content, which may result in different levels of presence, especially in a virtual reality (VR) environment. This study investigates how user characteristics, such as gender, immersion level, and emotional valence in VR, are related to three elements of presence effects: attention, enjoyment, and memory. A VR story was created and used as an immersive stimulus in an experiment, presented through a head-mounted display (HMD) equipped with an eye tracker that collected the participants' eye gaze data during the experiment. A total of 53 university students (26 females, 27 males), aged 20 to 29 years (mean 23.8), participated in the experiment. A set of pre- and post-questionnaires served as a subjective measure to support the evidence of relationships among the presence effects and user characteristics. The results showed that user characteristics such as gender, immersion level, and emotional valence affected the participants' level of presence; however, there was no evidence that attention is associated with enjoyment or memory.
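The study above uses HMD eye-gaze data as an objective attention measure. As a minimal sketch of how attention might be quantified from such data, the following computes dwell time inside areas of interest (AOIs); the AOI layout, field names, and 120 Hz sampling rate are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: visual attention as dwell time inside areas of interest
# (AOIs), assuming gaze samples are (timestamp, x, y) tuples in normalized
# [0, 1] display coordinates sampled at a fixed rate. The AOIs and the
# sampling rate are illustrative, not taken from the study.
from dataclasses import dataclass

@dataclass
class AOI:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def dwell_times(samples, aois, dt=1 / 120):
    """Accumulate per-AOI dwell time; each sample counts toward the
    first AOI (in list order) that contains it."""
    totals = {a.name: 0.0 for a in aois}
    for _, x, y in samples:
        for a in aois:
            if a.contains(x, y):
                totals[a.name] += dt
                break
    return totals

# Example: a story-relevant AOI plus a catch-all, and a 2 s synthetic trace.
aois = [AOI("character", 0.3, 0.3, 0.6, 0.8), AOI("elsewhere", 0.0, 0.0, 1.0, 1.0)]
samples = [(i / 120, 0.45, 0.5) for i in range(240)]  # fixating the character
print(dwell_times(samples, aois))
```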


2020 ◽  
Vol 10 (5) ◽  
pp. 1668 ◽  
Author(s):  
Pavan Kumar B. N. ◽  
Adithya Balasubramanyam ◽  
Ashok Kumar Patil ◽  
Chethana B. ◽  
Young Ho Chai

Over the years, the gaze input modality has been an easy and in-demand human–computer interaction (HCI) method for various applications. Research on gaze-based interactive applications has advanced considerably, as HCIs are no longer constrained to traditional input devices. In this paper, we propose GazeGuide, a novel immersive eye-gaze-guided camera that can seamlessly control the movements of a camera mounted on an unmanned aerial vehicle (UAV) from the eye gaze of a remote user. The video stream captured by the camera is fed into a head-mounted display (HMD) with a binocular eye tracker. The user's eye gaze is the sole input modality for maneuvering the camera. A user study considering static and moving targets of interest in three-dimensional (3D) space was conducted to evaluate the proposed framework. GazeGuide was compared with a state-of-the-art input modality, a remote controller. The qualitative and quantitative results showed that the proposed GazeGuide performed significantly better than the remote controller.
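As a minimal sketch of the kind of mapping an eye-gaze-guided camera needs, the following converts normalized gaze coordinates into pan/tilt rate commands for a UAV-mounted gimbal. The dead zone, gain, and rate limit are illustrative assumptions, not values from the GazeGuide paper.

```python
# Minimal sketch of gaze-to-camera mapping: gaze near the view center
# holds the camera still; gaze toward an edge pans/tilts proportionally
# in that direction. All constants are illustrative assumptions.
def gaze_to_gimbal_rates(gx, gy, dead_zone=0.1, max_rate_deg_s=30.0):
    """Map gaze in [-1, 1] view coordinates (0 = screen center) to
    pan/tilt rates in degrees per second."""
    def axis_rate(v):
        if abs(v) < dead_zone:
            return 0.0
        # Rescale the region outside the dead zone to [0, 1].
        scaled = (abs(v) - dead_zone) / (1.0 - dead_zone)
        return max_rate_deg_s * scaled * (1 if v > 0 else -1)
    return axis_rate(gx), axis_rate(gy)

# Looking slightly right of center and well above center:
pan, tilt = gaze_to_gimbal_rates(0.2, 0.7)
print(f"pan {pan:+.1f} deg/s, tilt {tilt:+.1f} deg/s")
```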


Open Physics ◽  
2019 ◽  
Vol 17 (1) ◽  
pp. 512-518
Author(s):  
Anna Rogalska ◽  
Filip Rynkiewicz ◽  
Marcin Daszuta ◽  
Krzysztof Guzek ◽  
Piotr Napieralski

Abstract The aim of this paper is to present methods for human eye blink recognition. The main function of blinking is to spread tears across the eye and remove irritants from the surface of the cornea and conjunctiva. Blinking can be associated with internal memory processing, fatigue, or activation of the central nervous system. There are currently many methods for automatic blink detection. The most reliable rely on EOG or EEG signals; these methods, however, reduce the comfort of the examined person. This paper presents a method for detecting blinks with an eye-tracker device. There are currently many blink detection methods for these devices; two popular eye trackers were tested in this paper. In addition, a method for improving detection efficiency is proposed.
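A common eye-tracker-based approach of the kind evaluated above treats a blink as a short run of samples where the tracker loses the pupil. Below is a minimal sketch of that idea; the duration thresholds (in samples) are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of eye-tracker blink detection: find gaps in the
# pupil-validity signal whose length is plausible for a blink, filtering
# out single-sample dropouts and long tracking losses.
import numpy as np

def detect_blinks(pupil_valid, min_len=3, max_len=40):
    """Return (start, end) sample indices of validity gaps whose length
    lies within blink-plausible bounds."""
    blinks, gap_start = [], None
    for i, valid in enumerate(pupil_valid):
        if not valid and gap_start is None:
            gap_start = i
        elif valid and gap_start is not None:
            if min_len <= i - gap_start <= max_len:
                blinks.append((gap_start, i))
            gap_start = None
    return blinks

signal = np.ones(100, dtype=bool)
signal[20:28] = False   # an 8-sample gap: counted as a blink
signal[60] = False      # a 1-sample dropout: ignored
print(detect_blinks(signal))  # [(20, 28)]
```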


10.2196/13810 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13810 ◽  
Author(s):  
Anish Nag ◽  
Nick Haber ◽  
Catalin Voss ◽  
Serena Tamura ◽  
Jena Daniels ◽  
...  

Background Several studies have shown that facial attention differs in children with autism. Measuring eye gaze and emotion recognition in children with autism is challenging, as standard clinical assessments must be delivered in clinical settings by a trained clinician. Wearable technologies may be able to bring eye gaze and emotion recognition into natural social interactions and settings. Objective This study aimed to test: (1) the feasibility of tracking gaze using wearable smart glasses during a facial expression recognition task and (2) the ability of these gaze-tracking data, together with facial expression recognition responses, to distinguish children with autism from neurotypical controls (NCs). Methods We compared the eye gaze and emotion recognition patterns of 16 children with autism spectrum disorder (ASD) and 17 children without ASD via wearable smart glasses fitted with a custom eye tracker. Children identified static facial expressions of images presented on a computer screen along with nonsocial distractors while wearing Google Glass and the eye tracker. Faces were presented in three trials, during one of which children received feedback in the form of the correct classification. We employed hybrid human-labeling and computer vision–enabled methods for pupil tracking and world–gaze translation calibration. We analyzed the impact of gaze and emotion recognition features in a prediction task aiming to distinguish children with ASD from NC participants. Results Gaze and emotion recognition patterns enabled the training of a classifier that distinguished ASD and NC groups. However, it was unable to significantly outperform other classifiers that used only age and gender features, suggesting that further work is necessary to disentangle these effects. Conclusions Although wearable smart glasses show promise in identifying subtle differences in gaze tracking and emotion recognition patterns in children with and without ASD, the present form factor and data do not allow for these differences to be reliably exploited by machine learning systems. Resolving these challenges will be an important step toward continuous tracking of the ASD phenotype.
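The prediction task above compares gaze/emotion-recognition features against an age-and-gender baseline. The following is a minimal sketch of that comparison using cross-validated logistic regression; the features and data are synthetic placeholders, not the study's dataset or model.

```python
# Minimal sketch: compare a classifier trained on gaze/emotion features
# with one trained on age and gender alone. All data here are synthetic
# placeholders standing in for the study's features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 33  # 16 ASD + 17 NC, as in the study
y = np.array([1] * 16 + [0] * 17)

# Hypothetical per-child features: e.g., mean fixation time on faces and
# emotion-recognition accuracy, versus age and gender.
gaze_emotion = rng.normal(size=(n, 2))
age_gender = np.column_stack([rng.integers(6, 13, n), rng.integers(0, 2, n)])

for name, X in [("gaze+emotion", gaze_emotion), ("age+gender", age_gender)]:
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```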


2019 ◽  
Vol 11 (7) ◽  
pp. 143
Author(s):  
Tanaka ◽  
Takenouchi ◽  
Ogawa ◽  
Yoshikawa ◽  
Nishio ◽  
...  

In semi-autonomous robot conferencing, not only does the operator control the robot, but the robot itself also moves autonomously. Thus, it can modify the operator's movement (e.g., by adding social behaviors). However, the sense of agency, that is, the degree to which the operator feels that the robot's movement is their own, would decrease if the operator became conscious of the discrepancy between the teleoperation and the autonomous behavior. In this study, we developed an interface that controls the robot head by using an eye tracker. When the robot autonomously moves its eye-gaze position, the interface guides the operator's eye movement toward this autonomous movement. The experiment showed that our interface can maintain the sense of agency because it provides the illusion that the autonomous behavior of the robot is directed by the operator's eye movement. This study reports the conditions under which this illusion can be provided in semi-autonomous robot conferencing.
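One way such guidance could work is to ease an on-screen gaze cursor toward the robot's autonomous target so the operator's eyes naturally follow it, keeping teleoperation and autonomous motion aligned. The sketch below illustrates this blending; the easing factor and 2D-point representation are assumptions, not the paper's implementation.

```python
# Minimal sketch of gaze guidance by cursor blending: blend=0 is pure
# teleoperation, blend=1 is fully autonomous. Values are illustrative.
def guided_cursor(operator_gaze, autonomous_target, blend=0.3):
    """Move the displayed cursor part of the way from where the operator
    looks toward where the robot autonomously looks."""
    ox, oy = operator_gaze
    ax, ay = autonomous_target
    return (ox + blend * (ax - ox), oy + blend * (ay - oy))

cursor = (0.5, 0.5)   # operator looking at screen center
target = (0.8, 0.4)   # robot's autonomous gaze shift
for step in range(3):
    cursor = guided_cursor(cursor, target)  # cursor drifts toward target,
    print(f"step {step}: cursor at ({cursor[0]:.2f}, {cursor[1]:.2f})")
```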


2020 ◽  
Vol 19 (1) ◽  
Author(s):  
Andrzej Czyżewski ◽  
Adam Kurowski ◽  
Piotr Odya ◽  
Piotr Szczuko

Abstract Background A lack of communication with people suffering from acquired brain injuries may lead to erroneous conclusions regarding patients' diagnosis or therapy. Information technology and neuroscience make it possible to enhance the diagnostic and rehabilitation process of patients with traumatic brain injury or post-hypoxia. In this paper, we present a new method for evaluating the possibility of communication and for assessing the state of such patients, employing future-generation computers extended with advanced human–machine interfaces. Methods First, the hearing abilities of 33 participants in a coma were evaluated using auditory brainstem response (ABR) measurements. Next, a series of interactive computer-based exercise sessions were performed with the therapist's assistance. Participants' actions were monitored with an eye-gaze tracking (EGT) device and an electroencephalography (EEG) monitoring headset. The gathered data were processed using data clustering techniques. Results The analysis showed that the gathered data and the computer-based methods developed for processing them are suitable for evaluating participants' responses to stimuli. Parameters obtained from EEG signals and eye-tracker data were correlated with Glasgow Coma Scale (GCS) scores and enabled separation between GCS-related classes. The results show that specific consciousness-related states are discoverable in the EEG and eye-tracker signals; we observe them as outliers in the decision space generated by the autoencoder. For this reason, the numerical variable that separates groups of people with the same GCS score is the variance of the distances of points from the cluster center generated by the autoencoder: in most cases, the higher the GCS score, the greater the variance. The results proved to be statistically significant in this context. Conclusions The results indicate that the proposed method may help to assess the consciousness state of participants in an objective manner.
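The separation measure described above is the variance of each participant's latent-space distances from the cluster center. Below is a minimal sketch of that computation; the autoencoder itself is replaced by placeholder 2D embeddings, so only the variance statistic follows the description.

```python
# Minimal sketch: variance of point-to-centroid distances in a latent
# space, the statistic the abstract describes. The 2D "latent" points
# here are synthetic stand-ins for autoencoder embeddings.
import numpy as np

def distance_variance(latent_points):
    """Variance of distances from each latent point to the centroid."""
    center = latent_points.mean(axis=0)
    dists = np.linalg.norm(latent_points - center, axis=1)
    return dists.var()

rng = np.random.default_rng(1)
low_gcs = rng.normal(0, 0.2, size=(50, 2))   # tight cluster, few outliers
high_gcs = rng.normal(0, 1.0, size=(50, 2))  # more spread, more outliers
print(f"low-GCS-like spread:  {distance_variance(low_gcs):.3f}")
print(f"high-GCS-like spread: {distance_variance(high_gcs):.3f}")
```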


2013 ◽  
Vol 1 (1) ◽  
pp. T45-T55 ◽  
Author(s):  
Yathunanthan Sivarajah ◽  
Eun-Jung Holden ◽  
Roberto Togneri ◽  
Michael Dentith

Geoscientific data interpretation is a highly subjective and complex task because human intuition and biases play a significant role. Based on these interpretations, however, the mining and petroleum industries make decisions with paramount financial and environmental implications. To improve the accuracy and efficacy of these interpretations, it is important to better understand the interpretation process and the impact of different interpretation techniques, including data processing and display methods. As a first step toward this goal, we aim to quantitatively analyze the variability in geophysical data interpretation between and within individuals. We carried out an experiment to analyze how individuals interact with magnetic data while identifying prescribed targets. Participants performed two target-spotting exercises in which the same magnetic image was presented at different orientations; the task was to identify the magnetic response of porphyry-style intrusive systems. The experiment analyzed the data observation pattern during interpretation using an eye-tracker system that captures the interpreter's eye gaze motion, together with target-spotting performance; the time at which targets were identified was also recorded. Fourteen participants with varying degrees of experience and expertise took part in this study. The results show inconsistencies in target-spotting performance within and between interpreters, and a correlation between a systematic data observation pattern and target-spotting performance. Improved target-spotting performance was obtained when the magnetic image was observed from multiple orientations. These findings will help to identify and quantify effective interpretation practices, which can provide a roadmap for training geoscientific data interpreters and contribute to understanding the uncertainties in the data interpretation process.
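One way to quantify a "systematic data observation pattern" is to divide the image into a grid and score a scanpath by how often gaze transitions stay local (raster-like scanning scores high; erratic jumping scores low). The sketch below illustrates this; the grid size and metric are assumptions, not the study's analysis.

```python
# Minimal sketch of a scanpath-systematicity score: the fraction of gaze
# transitions that land in the same or an adjacent grid cell. Constants
# and the metric itself are illustrative assumptions.
def systematicity(scanpath, grid=4):
    """scanpath: list of (x, y) gaze points in normalized [0, 1) coords."""
    cells = [(int(x * grid), int(y * grid)) for x, y in scanpath]
    local = sum(
        abs(a[0] - b[0]) <= 1 and abs(a[1] - b[1]) <= 1
        for a, b in zip(cells, cells[1:])
    )
    return local / max(len(cells) - 1, 1)

raster = [(i / 16 % 1.0, (i // 16) / 16) for i in range(64)]   # left-to-right sweep
erratic = [((i * 7) % 16 / 16, (i * 5) % 16 / 16) for i in range(64)]  # jumps
print(f"raster scan:  {systematicity(raster):.2f}")
print(f"erratic scan: {systematicity(erratic):.2f}")
```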


2018 ◽  
Vol 11 (6) ◽  
Author(s):  
Damla Topalli ◽  
Nergiz Ercil Cagiltay

Endoscopic surgery procedures require specific skills, such as eye-hand coordination, to be developed. Current education programs face problems in providing appropriate skill-improvement and assessment methods in this field. This study aims to propose objective metrics for hand-movement skills and to assess eye-hand coordination. An experimental study was conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, two-handed endoscopic surgery practice scenarios were developed in a simulation environment to gather the participants' eye-gaze data with an eye tracker and the related hand-movement data through haptic interfaces. Additionally, the participants' eye-hand coordination skills were analyzed. The results indicate higher correlations between the intermediates' eye and hand movements compared to the novices'. An increase in the intermediates' visual concentration leads to smoother hand movements, whereas the novices' hand movements tend to remain at a standstill. After the first round of practice, all participants' eye-hand coordination skills improved on the specific task targeted in this study. According to these results, the proposed metrics can potentially provide additional insight into trainees' eye-hand coordination skills and help instructional system designers better address training requirements.
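As a minimal sketch of an eye-hand coordination metric in the spirit of this study, the following correlates a gaze trajectory with a (lagged) haptic tool-tip trajectory; high correlation suggests the hand follows the eyes, as reported for intermediates. The signal shapes and the fixed lag are illustrative assumptions.

```python
# Minimal sketch: Pearson correlation between gaze and hand trajectories,
# with the hand shifted by a few samples since it typically trails the
# eyes. Signals below are synthetic stand-ins for recorded data.
import numpy as np

def eye_hand_correlation(gaze_x, hand_x, lag=5):
    """Correlate gaze against the hand trajectory delayed by `lag` samples."""
    g, h = np.asarray(gaze_x[:-lag]), np.asarray(hand_x[lag:])
    return np.corrcoef(g, h)[0, 1]

t = np.linspace(0, 10, 500)
gaze = np.sin(t)
coordinated = np.sin(t - 0.1) + np.random.default_rng(2).normal(0, 0.05, t.size)
standstill = np.random.default_rng(3).normal(0, 0.05, t.size)  # hand barely moves
print(f"intermediate-like: {eye_hand_correlation(gaze, coordinated):.2f}")
print(f"novice-like:       {eye_hand_correlation(gaze, standstill):.2f}")
```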


Author(s):  
Tahira N. Reid ◽  
Erin F. MacDonald ◽  
Ping Du

Researchers often use simplified product form representations, such as silhouettes, sketches, and other two-dimensional representations of products, to examine customer preferences. While these simplified representations make the analysis procedure tractable, for example, by linking certain design manipulations to certain preferences, in reality people evaluate more sophisticated product representations during purchase decisions. This paper presents the results of a study in which two groups of people were shown either computer sketches and front/side-view (FSV) silhouettes, or simplified renderings and realistic renderings, of cars and coffee carafes. The human judgments measured included opinions, objective evaluations, and inferences. The results show a variety of phenomena, including preference inconsistencies and ordering effects. Data collected from an eye tracker help to elucidate these findings.

