Tracking eye-gaze in smart home systems (SHS): first insights from eye-tracking and self-report measures

Author(s):  
Federico Cassioli ◽  
Laura Angioletti ◽  
Michela Balconi

Abstract Human–computer interaction (HCI) is particularly interesting because full-immersive technology may be approached differently by users, depending on the complexity of the interaction, users' personality traits, and their motivational system inclinations. This study therefore investigated the relationship between psychological factors and attention during specific tech-interactions in a smart home system (SHS). The relation between personal psychological traits and eye-tracking metrics was investigated through self-report measures [locus of control (LoC), user experience (UX), behavioral inhibition system (BIS), and behavioral activation system (BAS)] and a wearable, wireless, near-infrared-illumination-based eye-tracking system applied to an Italian sample (n = 19). Participants were asked to activate and interact with five tech-interaction areas of differing complexity (entrance, kitchen, living room, bathroom, and bedroom) in the SHS while their eye-gaze behavior was recorded. Data showed significant differences between a simpler interaction (entrance) and a more complex one (living room) in terms of number of fixations. Moreover, a slower time to first fixation was found in a multifaceted interaction (bathroom) compared to simpler ones (kitchen and living room). Additionally, in two interaction conditions (living room and bathroom), negative correlations were found between external LoC and fixation count, and between BAS reward responsiveness scores and fixation duration. These findings point to a two-way process in which both the complexity of the tech-interaction and the user's personality traits shape visual exploration behavior. This research contributes to understanding user responsiveness, adding first insights that may help create more human-centered technology.
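The gaze metrics analyzed above (fixation count, time to first fixation, and fixation duration) can be derived from raw gaze samples with a dispersion-threshold algorithm. The following is a minimal Python sketch of that derivation; the sample format, screen size, and thresholds are illustrative assumptions, not details from the paper.

import numpy as np

def detect_fixations(t, x, y, max_disp=30.0, min_dur=0.1):
    # Dispersion-threshold (I-DT) fixation detection.
    # t: timestamps in seconds; x, y: gaze position in pixels.
    # max_disp (pixels) and min_dur (seconds) are assumed values.
    fixations, start = [], 0
    for end in range(1, len(t)):
        wx, wy = x[start:end + 1], y[start:end + 1]
        dispersion = (wx.max() - wx.min()) + (wy.max() - wy.min())
        if dispersion > max_disp:
            if t[end - 1] - t[start] >= min_dur:
                fixations.append((t[start], t[end - 1] - t[start]))
            start = end
    if t[-1] - t[start] >= min_dur:
        fixations.append((t[start], t[-1] - t[start]))
    return fixations  # list of (onset, duration) pairs

# Synthetic 60 Hz gaze trace over a 1920x1080 display.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / 60)
x = np.repeat(rng.uniform(0, 1920, 10), len(t) // 10) + rng.normal(0, 3, len(t))
y = np.repeat(rng.uniform(0, 1080, 10), len(t) // 10) + rng.normal(0, 3, len(t))

fix = detect_fixations(t, x, y)
print("fixation count:", len(fix))
print("time to first fixation (s):", round(fix[0][0], 3))
print("mean fixation duration (s):", round(float(np.mean([d for _, d in fix])), 3))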

Author(s):  
Alexander L. Anwyl-Irvine ◽  
Thomas Armstrong ◽  
Edwin S. Dalmaijer

Abstract Psychological research is increasingly moving online, where web-based studies allow for data collection at scale. Behavioural researchers are well supported by existing tools for participant recruitment and for building and running experiments with decent timing. However, not all techniques are portable to the Internet: while eye tracking works in tightly controlled lab conditions, webcam-based eye tracking suffers from high attrition and poorer quality due to basic limitations like webcam availability, poor image quality, and reflections on glasses and the cornea. Here we present MouseView.js, an alternative to eye tracking that can be employed in web-based research. Inspired by the visual system, MouseView.js blurs the display to mimic peripheral vision, but allows participants to move a sharp aperture that is roughly the size of the fovea. Like eye gaze, the aperture can be directed to fixate on stimuli of interest. We validated MouseView.js in an online replication (N = 165) of an established free-viewing task (N = 83 existing eye-tracking datasets), and in an in-lab direct comparison with eye tracking in the same participants (N = 50). MouseView.js proved as reliable as gaze, and produced the same pattern of dwell-time results. In addition, dwell-time differences from MouseView.js and from eye tracking correlated highly, and related to self-report measures in similar ways. The tool is open source, implemented in JavaScript, and usable as a standalone library or within Gorilla, jsPsych, and PsychoJS. In sum, MouseView.js is a freely available attention-tracking instrument that is both reliable and valid, and that can replace eye tracking in certain web-based psychological experiments.
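The core mechanic described here (a blurred display with a sharp, fovea-sized aperture that follows the pointer) is easy to picture with a short sketch. MouseView.js itself is JavaScript; the Python sketch below only illustrates the rendering idea, and the stimulus file name, aperture radius, and blur strength are assumptions.

from PIL import Image, ImageDraw, ImageFilter

def mouseview_frame(img, cx, cy, radius=50, blur=12):
    # Render one MouseView-style frame: the scene is blurred to mimic
    # peripheral vision, except inside a sharp circular aperture at the
    # pointer position (cx, cy). radius and blur are illustrative values.
    blurred = img.filter(ImageFilter.GaussianBlur(blur))
    mask = Image.new("L", img.size, 0)
    ImageDraw.Draw(mask).ellipse(
        (cx - radius, cy - radius, cx + radius, cy + radius), fill=255
    )
    return Image.composite(img, blurred, mask)  # sharp only inside the aperture

scene = Image.open("stimulus.png")   # hypothetical free-viewing stimulus
frame = mouseview_frame(scene, cx=320, cy=240)
frame.save("frame.png")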


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Alexandros Karargyris ◽  
Satyananda Kashyap ◽  
Ismini Lourentzou ◽  
Joy T. Wu ◽  
Arjun Sharma ◽  
...  

Abstract We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence research. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data: CXR image, transcribed radiology report text, radiologist's dictation audio, and eye-gaze coordinates. We hope this dataset can contribute to various areas of research, particularly explainable and multimodal deep learning/machine learning methods. Furthermore, investigators in disease classification and localization, automated radiology report generation, and human-machine interaction can benefit from these data. We report deep learning experiments that utilize the attention maps produced from the eye-gaze data to show the potential utility of this dataset.
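One straightforward way to turn such gaze recordings into the attention maps mentioned above is to accumulate duration-weighted gaze points onto the image grid and smooth them. A minimal Python sketch follows; the coordinate format, image size, and smoothing width are assumptions rather than dataset specifics.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_attention_map(gaze_xy, durations, shape, sigma=25.0):
    # Accumulate duration-weighted gaze points into a normalized
    # attention map matching the CXR image resolution.
    # sigma: Gaussian smoothing in pixels (assumed, roughly foveal extent).
    heat = np.zeros(shape, dtype=np.float64)
    for (x, y), d in zip(gaze_xy, durations):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            heat[yi, xi] += d
    heat = gaussian_filter(heat, sigma)
    return heat / heat.max() if heat.max() > 0 else heat

# Usage with hypothetical coordinates for one 1024x1024 CXR image.
attention = gaze_attention_map(
    gaze_xy=[(512, 300), (530, 310), (700, 640)],
    durations=[0.25, 0.40, 0.18],
    shape=(1024, 1024),
)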


2021 ◽  
Vol 2120 (1) ◽  
pp. 012030
Author(s):  
J K Tan ◽  
W J Chew ◽  
S K Phang

Abstract The field of Human-Computer Interaction (HCI) has developed tremendously over the past decade. Smartphones and modern computers, which rely on touch, voice, and typing as means of input, are already the norm in society. To further widen the variety of interaction, the human eyes are a good candidate for another form of HCI. The information that the human eyes convey is extremely useful, and various methods and algorithms for eye-gaze tracking have accordingly been implemented across multiple sectors. However, some eye-tracking methods require infrared rays to be projected into the eye of the user, which could potentially cause enzyme denaturation under extreme exposure. Therefore, to avoid the potential harm of eye-tracking methods that rely on infrared rays, this paper proposes an image-based eye-tracking system using the Viola-Jones algorithm and the Circular Hough Transform (CHT) algorithm. The proposed method uses visible light instead of infrared rays to control the mouse pointer through the user's eye gaze. This research aims to implement the proposed algorithm so that people with hand disabilities can interact with computers using their eye gaze.
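Both building blocks named here map onto standard OpenCV primitives: a Haar-cascade (Viola-Jones) detector to find the eye region, then cv2.HoughCircles to localize the iris within it. The sketch below is one plausible arrangement, not the authors' implementation; the input frame and all detector parameters are assumptions.

import cv2

# Viola-Jones eye detector shipped with OpenCV (path may vary by install).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("webcam_frame.png")          # hypothetical visible-light image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    roi = gray[y:y + h, x:x + w]
    roi = cv2.medianBlur(roi, 5)                # suppress noise before CHT
    # Circular Hough Transform to localize the iris within the eye region.
    circles = cv2.HoughCircles(
        roi, cv2.HOUGH_GRADIENT, dp=1, minDist=w,
        param1=100, param2=20,                  # assumed thresholds
        minRadius=w // 8, maxRadius=w // 3,
    )
    if circles is not None:
        cx, cy, r = circles[0][0]
        print("iris center:", x + cx, y + cy, "radius:", r)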


2020 ◽  
Vol 12 (2) ◽  
pp. 43
Author(s):  
Mateusz Pomianek ◽  
Marek Piszczek ◽  
Marcin Maciejewski ◽  
Piotr Krukowski

This paper describes research on the stability of MEMS mirrors for use in eye-tracking systems. MEMS mirrors are the main element in scanning methods of eye tracking: by changing the mirror pitch, the system scans the area of the eye with a laser and collects the reflected signal. However, this method rests on the assumption that the pitch is constant in each scan period, and any instability in the pitch introduces errors. The aim of this work is to examine the error level caused by pitch instability at different operating points.
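The error mechanism under study (reconstruction that assumes a constant pitch per scan period while the actual pitch fluctuates) can be illustrated with a toy Monte Carlo simulation. The jitter magnitude and scan amplitude below are arbitrary assumptions, not measured values from the paper.

import numpy as np

rng = np.random.default_rng(42)

periods = 1000
nominal_pitch_deg = 10.0          # assumed mechanical scan amplitude
jitter_sd_deg = 0.05              # assumed period-to-period pitch instability

# Actual per-period pitch fluctuates around the nominal value.
actual_pitch = nominal_pitch_deg + rng.normal(0, jitter_sd_deg, periods)

# Reconstruction assumes the nominal pitch, so the angular error at the
# scan extremes is the optical (2x mechanical) pitch deviation.
angular_error = 2 * (actual_pitch - nominal_pitch_deg)
print(f"RMS angular error: {np.sqrt(np.mean(angular_error**2)):.3f} deg")
print(f"max angular error: {np.abs(angular_error).max():.3f} deg")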


2021 ◽  
pp. 112972982098736
Author(s):  
Kaji Tatsuru ◽  
Yano Keisuke ◽  
Onishi Shun ◽  
Matsui Mayu ◽  
Nagano Ayaka ◽  
...  

Purpose: Real-time ultrasound (RTUS)-guided central venipuncture using the short-axis approach is complicated and likely to result in losing sight of the needle tip. We therefore used an eye-tracking system to evaluate differences in eye gaze between medical students and experienced participants. Methods: Ten medical students (MS group), five residents (R group), and six pediatric surgeon fellows (F group) performed short-axis RTUS-guided venipuncture simulation using a modified vessel training system. The eye gaze was captured and recorded by a tracking system (Tobii Eye Tracker 4C). The evaluation endpoints were the task completion time, the total time and number of occurrences of the eye-tracking marker outside the US monitor, and the success rate of venipuncture. Results: There were no significant differences in the task completion time or the total time of the tracking marker outside the US monitor. The number of occurrences of the eye-tracking marker outside the US monitor was significantly higher in the MS group than in the F group (MS group: 9.5 ± 3.4, R group: 6.0 ± 2.9, F group: 5.2 ± 1.6; p = 0.04). The success rate of venipuncture in the R group tended to be better than in the F group. Conclusion: More experienced operators let their eyes fall outside the US monitor fewer times than less experienced ones, and eye gaze was associated with the success rate of RTUS-guided venipuncture. Repeated training that takes eye gaze into account seems pivotal for mastering RTUS-guided venipuncture.
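The key endpoint here (the number of occurrences of the eye-tracking marker outside the US monitor) reduces to counting excursions of the gaze coordinates out of a rectangular region. A minimal Python sketch, with a hypothetical gaze trace and monitor geometry:

import numpy as np

def count_excursions(x, y, monitor):
    # Count entries of the gaze marker into the region outside the US
    # monitor, plus the fraction of samples spent there (uniform sample
    # rate assumed). monitor: (left, top, right, bottom) in screen pixels.
    left, top, right, bottom = monitor
    outside = ~((x >= left) & (x <= right) & (y >= top) & (y <= bottom))
    entries = np.count_nonzero(np.diff(outside.astype(int)) == 1)
    entries += int(outside[0])  # count a trial that starts outside
    return entries, outside.mean()

# Hypothetical gaze trace (pixels) and monitor region.
x = np.array([500.0, 510, 1700, 1720, 520, 530, 1800])
y = np.array([400.0, 410, 900, 910, 420, 430, 950])
n_out, frac_out = count_excursions(x, y, monitor=(100, 100, 1600, 800))
print(n_out, f"{frac_out:.2f}")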


2018 ◽  
Vol 61 (12) ◽  
pp. 2917-2933 ◽  
Author(s):  
Matthew James Valleau ◽  
Haruka Konishi ◽  
Roberta Michnick Golinkoff ◽  
Kathy Hirsh-Pasek ◽  
Sudha Arunachalam

Purpose We examined receptive verb knowledge in 22- to 24-month-old toddlers with a dynamic video eye-tracking test. The primary goal of the study was to examine the utility of eye-gaze measures that are commonly used to study noun knowledge for studying verb knowledge. Method Forty typically developing toddlers participated. They viewed 2 videos side by side (e.g., girl clapping, same girl stretching) and were asked to find one of them (e.g., "Where is she clapping?"). Their eye gaze, recorded by a Tobii T60XL eye-tracking system, was analyzed as a measure of their knowledge of the verb meanings. Noun trials were included as controls. We examined correlations between eye-gaze measures and scores on the MacArthur–Bates Communicative Development Inventories (CDI; Fenson et al., 1994), a standard parent-report measure of expressive vocabulary, to see how well various eye-gaze measures predicted CDI score. Results A common measure of knowledge (a 15% increase in looking time to the target video from a baseline phase to the test phase) did correlate with CDI score, but had to be operationalized differently for verbs than for nouns. A 2nd common measure, latency of 1st look to the target, correlated with CDI score for nouns, as in previous work, but did not for verbs. A 3rd measure, fixation density, correlated for both nouns and verbs, although the correlations went in different directions. Conclusions The dynamic nature of videos depicting verb knowledge produces eye-gaze patterns that differ from those elicited by static images depicting nouns. An eye-tracking assessment of verb knowledge is worthwhile to develop, but the dependent measures used may need to differ from those used for static images and nouns.
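The three dependent measures compared in the study can all be computed from per-trial gaze samples. The sketch below is one plausible operationalization in Python; the data layout is assumed, and the fixation-density definition is a simplified stand-in for the paper's measure.

import numpy as np

def trial_measures(target_looks, t, baseline_end):
    # target_looks: boolean array, True when gaze is on the target video.
    # t: sample timestamps in seconds; baseline_end: phase boundary.
    base = target_looks[t < baseline_end].mean()    # baseline proportion
    test = target_looks[t >= baseline_end].mean()   # test proportion
    increase = test - base                          # e.g. the 15% criterion

    # Latency of first look to the target during the test phase.
    on_target = np.flatnonzero(target_looks & (t >= baseline_end))
    latency = t[on_target[0]] - baseline_end if on_target.size else np.nan

    # Fixation density here: discrete looks to the target per second of
    # the test phase (a simplified stand-in for the paper's measure).
    starts = np.count_nonzero(
        np.diff(target_looks[t >= baseline_end].astype(int)) == 1
    )
    density = starts / (t[-1] - baseline_end)
    return increase, latency, density

# Toy 30 Hz trial: alternating looks, 2 s baseline then 4 s test phase.
t = np.arange(0, 6, 1 / 30)
looks = t % 2 < 1
print(trial_measures(looks, t, baseline_end=2.0))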


2017 ◽  
Vol 29 (2) ◽  
pp. 262-269 ◽  
Author(s):  
Timothy Stapleton ◽  
Helen Sumin Koo

Purpose The purpose of this paper is to investigate the effectiveness of biomotion visibility aids for nighttime bicyclists compared with other configurations via 3D eye-tracking technology in a blind between-subjects experiment. Design/methodology/approach A total of 40 participants were randomly assigned one of four visibility-aid conditions in the form of videos: biomotion (retroreflective knee and ankle bands), non-biomotion (retroreflective vest configuration), pseudo-biomotion (vertical retroreflective stripes on the back of the legs), and control (all-black clothing). Gaze fixations on a screen were measured with a 3D eye-tracking system; coordinate data for each condition were analyzed via one-way ANOVA and Tukey's post-hoc analyses with supplementary heatmaps. Post-experimental questionnaires addressed participants' qualitative assessments. Findings Significant differences in eye-gaze location were found between the four reflective clothing design conditions in X-coordinate values (p<0.01) and Y-coordinate values (p<0.05). Practical implications This research has the potential to further inform clothing designers and manufacturers on how to incorporate biomotion to increase bicyclist visibility and safety. Social implications This research has the potential to benefit both drivers and nighttime bicyclists through a better understanding of how biomotion can increase visibility and safety. Originality/value There is a lack of literature addressing how the commonly administered experimental task of recognizing bicyclists may bias participants' attention away from a natural driving state. Eye tracking has the potential to measure attention and visibility implicitly, without such biases. A new retroreflective visibility-aid design, pseudo-biomotion, was also introduced in this experiment.
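The coordinate analysis described (one-way ANOVA with Tukey's post-hoc over four conditions) is a few lines in Python with SciPy and statsmodels; the numbers below are synthetic placeholders, not the study's data.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
conditions = ["biomotion", "non-biomotion", "pseudo-biomotion", "control"]

# Hypothetical mean gaze X-coordinates, 10 participants per condition.
data = {c: rng.normal(loc, 40, 10)
        for c, loc in zip(conditions, [960, 900, 930, 880])}

F, p = f_oneway(*data.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.3f}")

# Tukey's post-hoc pairwise comparisons across the four conditions.
endog = np.concatenate(list(data.values()))
groups = np.repeat(conditions, 10)
print(pairwise_tukeyhsd(endog, groups))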


2015 ◽  
Vol 8 (5) ◽  
Author(s):  
Antonio Lanata ◽  
Gaetano Valenza ◽  
Alberto Greco ◽  
Enzo Pasquale Scilingo

In this work, a new head-mounted eye-tracking system is presented. Based on computer vision techniques, the system integrates eye images and head movement in real time, performing robust gaze-point tracking. Nystagmus movements due to the vestibulo-ocular reflex are monitored and integrated. The system proposed here is a strongly improved version of a previous platform called HATCAM, which was robust against changes in illumination conditions. The new version, called HAT-Move, is equipped with an accurate inertial measurement unit to detect head movement, enabling eye-gaze estimation even in dynamic conditions. HAT-Move performance was investigated in a group of healthy subjects in both static and dynamic conditions, i.e. with the head kept still or free to move. Evaluation was performed in terms of the angular error between the real coordinates of the fixated points and those computed by the system, in two experimental setups: laboratory settings and a 3D virtual reality (VR) scenario. The results showed that HAT-Move achieves an eye-gaze angular error of about 1 degree along both the horizontal and vertical directions.
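The angular-error evaluation reported here compares the real coordinates of a fixated point against the system's gaze estimate. A minimal Python sketch of that computation follows, with screen geometry (viewing distance, pixel pitch) as illustrative assumptions:

import numpy as np

def gaze_angular_error(true_xy, est_xy, distance_mm, px_per_mm):
    # Angular error (degrees) between a fixated point and the gaze
    # estimate, for a viewer at distance_mm from the target plane.
    # Returns separate horizontal and vertical errors, as in the paper's
    # per-direction report. Geometry values are assumptions.
    d_px = np.asarray(est_xy, float) - np.asarray(true_xy, float)
    d_mm = d_px / px_per_mm
    return np.degrees(np.arctan2(d_mm, distance_mm))

# Example: a 15 x 8 px offset viewed from 600 mm on a 3.5 px/mm display.
err_h, err_v = gaze_angular_error((960, 540), (975, 548), 600.0, 3.5)
print(f"horizontal: {err_h:.2f} deg, vertical: {err_v:.2f} deg")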


2020 ◽  
Vol 1 ◽  
pp. 113-131
Author(s):  
Rhonda McEwen ◽  
Asiya Atcha ◽  
Michelle Lui ◽  
Roula Shimaly ◽  
Amrita Maharaj ◽  
...  

This study analyzes the role of the machine as a communicative partner for children with complex communication needs as they use eye-tracking technology to communicate. We ask: to what extent do eye-tracking devices serve as functional communication systems for children with complex communication needs? We followed 12 children with profound physical disabilities in a special education classroom over 3 months. An eye-tracking system was used to collect data from software that assisted the children in facial recognition, task identification, and vocabulary building. Results show that eye gaze served as a functional communication system for the majority of the children. We found voice affect to be a strong determinant of communicative success between the students and both of their communicative partners: the teachers (humans) and the technologies (machines).

