Exploring the Potential of Online Webcam-based Eye Tracking in Decision-Making Research and Influence Factors on Data Quality

2021 ◽  
Author(s):  
Tim Schneegans ◽  
Matthew D. Bachman ◽  
Scott A. Huettel ◽  
Hauke Heekeren

Recent developments in open-source online eye-tracking algorithms suggest that they may be ready for use in online studies, thereby overcoming the limitations of in-lab eye-tracking studies. However, to date there have been limited tests of the efficacy of online eye-tracking for decision-making and cognitive psychology. In this online study, we explore the potential and the limitations of online eye-tracking tools for decision-making research using the webcam-based open-source library WebGazer (Papoutsaki et al., 2016). Our study had two aims. For our first aim, we assessed different variables that might affect the quality of eye-tracking data. In our experiment (N = 210), we measured a within-subjects variable (adding a provisional chin rest) and a between-subjects variable (corrected vs. uncorrected vision). Contrary to our hypotheses, we found that the chin rest had a negative effect on data quality. In accordance with our hypotheses, we found lower-quality data in participants who wore glasses. Other influencing factors, such as frame rate, are discussed. For our second aim (N = 44), we attempted to replicate a decision-making paradigm in which eye-tracking data had been acquired offline (Amasino et al., 2019). We found some relations between choice behavior and eye-tracking measures, such as the last fixation and the distribution of gaze points at the moment right before the choice. However, several effects could not be reproduced, such as the overall distribution of gaze points or dynamic search strategies. Our hypotheses therefore received only partial support. This study offers practical insights into the feasibility of online eye-tracking for decision-making research, as well as for researchers from other disciplines.
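
To make the data quality measures discussed above concrete, the Python sketch below illustrates three quantities commonly reported for webcam-based eye tracking: spatial accuracy, spatial precision, and effective frame rate. The sample format and numbers are illustrative assumptions, not the study's WebGazer pipeline.

```python
# Minimal sketch (not from the paper) of common webcam gaze data quality metrics:
# spatial accuracy, spatial precision, and effective frame rate. Assumes a
# hypothetical recording of validation samples (on-screen gaze estimates plus
# the known position of the fixation target), all in pixels.
import numpy as np

def gaze_quality(t_ms, gaze_x, gaze_y, target_x, target_y):
    """Return accuracy (px), precision (px), and frame rate (Hz) for one validation point."""
    gaze = np.column_stack([gaze_x, gaze_y]).astype(float)
    target = np.array([target_x, target_y], dtype=float)

    # Accuracy: mean Euclidean offset between gaze estimates and the target.
    accuracy = np.mean(np.linalg.norm(gaze - target, axis=1))

    # Precision: root-mean-square of sample-to-sample distances (dispersion).
    precision = np.sqrt(np.mean(np.sum(np.diff(gaze, axis=0) ** 2, axis=1)))

    # Effective frame rate: samples per second over the recording window.
    duration_s = (t_ms[-1] - t_ms[0]) / 1000.0
    frame_rate = (len(t_ms) - 1) / duration_s if duration_s > 0 else float("nan")

    return accuracy, precision, frame_rate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 2000, 33)                # ~30 Hz webcam sampling over 2 s
    gx = 960 + rng.normal(0, 40, len(t))      # noisy gaze around a centred target
    gy = 540 + rng.normal(0, 40, len(t))
    print(gaze_quality(t, gx, gy, 960, 540))
```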

2022 ◽  
Vol 12 ◽  
Author(s):  
Anna Bánki ◽  
Martina de Eccher ◽  
Lilith Falschlehner ◽  
Stefanie Hoehl ◽  
Gabriela Markova

Online data collection with infants raises special opportunities and challenges for developmental research. One of the most prevalent methods in infancy research is eye-tracking, which has been widely applied in laboratory settings to assess cognitive development. Technological advances now allow conducting eye-tracking online with various populations, including infants. However, the accuracy and reliability of online infant eye-tracking remain to be comprehensively evaluated. No research to date has directly compared webcam-based and in-lab eye-tracking data from infants, as has been done with data from adults. The present study provides a direct comparison of in-lab and webcam-based eye-tracking data from infants who completed an identical looking-time paradigm in two different settings (in the laboratory or online at home). We assessed 4- to 6-month-old infants (n = 38) in an eye-tracking task that measured the detection of audio-visual asynchrony. Webcam-based and in-lab eye-tracking data were compared on eye-tracking and video data quality, infants’ viewing behavior, and experimental effects. Results revealed no differences between the in-lab and online settings in the frequency of technical issues or participant attrition rates. Video data quality was comparable between settings in terms of completeness and brightness, despite lower frame rate and resolution online. Eye-tracking data quality was higher in the laboratory than online, except for relative sample loss. Gaze data quantity recorded by eye-tracking was significantly lower than by video in both settings. In valid trials, eye-tracking and video data captured infants’ viewing behavior uniformly, irrespective of setting. Despite the challenges common to infant eye-tracking across experimental settings, our results point to the need to further improve the precision of online eye-tracking with infants. Taken together, online eye-tracking is a promising tool to assess infants’ gaze behavior but requires careful data quality control. The demographic composition of both samples differed from the general population in caregiver education: our samples comprised caregivers with higher-than-average education levels, challenging the notion that online studies will per se reach more diverse populations.
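
As an illustration of one quality metric mentioned above, the Python sketch below computes per-trial relative sample loss from counts of recorded versus expected gaze samples and compares the two settings. The trial counts and data format are assumptions for illustration, not the study's data or analysis code.

```python
# Illustrative sketch: comparing per-trial relative sample loss between an
# in-lab and an online (webcam) setting, given hypothetical counts of recorded
# vs. expected gaze samples per trial.
import numpy as np
from scipy import stats

def relative_sample_loss(recorded, expected):
    """Proportion of expected gaze samples that were not recorded, per trial."""
    recorded = np.asarray(recorded, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return 1.0 - recorded / expected

# Hypothetical trial-level sample counts for the two settings.
lab_loss = relative_sample_loss([480, 455, 470], [500, 500, 500])
online_loss = relative_sample_loss([250, 270, 240], [300, 300, 300])

# Welch's t-test comparing relative sample loss between settings.
t_stat, p_val = stats.ttest_ind(lab_loss, online_loss, equal_var=False)
print(f"lab loss {lab_loss.mean():.2%}, online loss {online_loss.mean():.2%}, p = {p_val:.3f}")
```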


2014 ◽  
Vol 668-669 ◽  
pp. 1374-1377 ◽  
Author(s):  
Wei Jun Wen

ETL refers to the process of extracting, transforming, and loading data, and is a critical step in ensuring the quality, specification, and standardization of marine environmental data. Marine data, owing to their complexity, field diversity, and huge volume, remain decentralized, multi-source, and heterogeneous, with inconsistent semantics, and hence fall far short of providing effective data sources for decision making. ETL enables the construction of a marine environmental data warehouse through the cleaning, transformation, integration, loading, and periodic updating of basic marine data. This paper presents research on rules for the cleaning, transformation, and integration of marine data, on the basis of which an ETL system for the marine environmental data warehouse is designed and developed. The system further guarantees data quality and correctness in future analysis and decision making based on marine environmental data.
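
A schematic Python sketch of such an extract-transform-load flow is given below. The column names, cleaning rules, and file formats are illustrative assumptions, not the system described in the paper.

```python
# Schematic ETL sketch: extract raw marine observation records, clean and
# standardize them against simple rules, and load the result into a warehouse
# table. Columns and thresholds are hypothetical placeholders.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Cleaning rules: drop duplicates and records outside plausible ranges.
    df = df.drop_duplicates(subset=["station_id", "timestamp"])
    df = df[df["sea_temp_c"].between(-2, 40) & df["salinity_psu"].between(0, 45)]
    # Transformation rules: unify timestamps and unit conventions.
    df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
    df["depth_m"] = df["depth_m"].round(1)
    return df

def load(df: pd.DataFrame, warehouse_path: str) -> None:
    # Write the cleaned batch to the warehouse table (or a database insert).
    df.to_parquet(warehouse_path, index=False)

if __name__ == "__main__":
    load(transform(extract("raw_marine_observations.csv")), "marine_warehouse.parquet")
```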


2021 ◽  
Vol 27 (3) ◽  
pp. 146045822110431
Author(s):  
Tajebew Z Gonete ◽  
Lake Yazachew ◽  
Berhanu F Endehabtu

Quality data for evidence-based decision making has become a growing concern globally. Available information needs to be disseminated on time and used for decision making. Therefore, an effective Health Management Information System (HMIS) is essential for making evidence-based decisions. This study aimed to measure the change in data quality and information utilization before and after an intervention. A facility-based pre-post interventional study was conducted at Metema hospital from September 2016 to December 30, 2018. A total of 384 individual medical records, HMIS registration books, and reports were reviewed. Training, supportive supervision, and feedback were the intervention packages. About 309 (80.5%) of the charts were from the outpatient department. Data recording completeness increased from 69.0% to 96.0%, data consistency increased from 84.0% to 99.5%, and report timeliness improved from 66.0% to 100%. There was a statistically significant difference in data recording completeness between the pre- and post-intervention results, with a mean difference of −0.246 (−0.412, −0.081). Also, after the intervention, gap-filling feedback and supportive supervision were given to all departments. In addition, four quality improvement projects were developed in the post-intervention phase. The level of data quality and use improved after the intervention. Designing and implementing intervention strategies based on root causes will therefore help to improve data quality and use.
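
The Python sketch below illustrates the kind of pre/post comparison of data recording completeness described above, using hypothetical chart-level counts; it is not the authors' analysis code.

```python
# Hedged sketch of a pre/post data completeness comparison. Assumes per-chart
# completeness scores (proportion of required data elements filled in)
# collected before and after the intervention; all numbers are hypothetical.
import numpy as np
from scipy import stats

def completeness(filled, required: int) -> np.ndarray:
    """Proportion of required data elements recorded on each chart."""
    return np.asarray(filled, dtype=float) / required

pre = completeness([14, 12, 15, 13, 11], required=20)    # hypothetical pre-intervention charts
post = completeness([19, 20, 18, 19, 20], required=20)   # hypothetical post-intervention charts

t_stat, p_val = stats.ttest_ind(pre, post, equal_var=False)
print(f"mean difference = {pre.mean() - post.mean():+.3f}, p = {p_val:.4f}")
```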


2018 ◽  
Vol 38 (6) ◽  
pp. 658-672 ◽  
Author(s):  
Caroline Vass ◽  
Dan Rigby ◽  
Kelly Tate ◽  
Andrew Stewart ◽  
Katherine Payne

Background. Discrete choice experiments (DCEs) are increasingly used to elicit preferences for benefit-risk tradeoffs. The primary aim of this study was to explore how eye-tracking methods can be used to understand DCE respondents’ decision-making strategies. A secondary aim was to explore whether the presentation and communication of risk affected respondents’ choices. Method. Two versions of a DCE, varying in how risk attributes were presented, were designed to understand the preferences of female members of the public for breast screening. Risk was communicated as either 1) percentages or 2) icon arrays and percentages. Eye-tracking equipment recorded eye movements 1000 times a second. A debriefing survey collected sociodemographics and self-reported attribute nonattendance (ANA) data. A heteroskedastic conditional logit model analyzed the DCE data. Eye-tracking data on pupil size, direction of motion, and total visual attention (dwell time) to predefined areas of interest were analyzed using ordinary least squares regressions. Results. Forty women completed the DCE with eye-tracking. There was no statistically significant difference in attention (fixations) to attributes between the risk communication formats. Respondents completing either version of the DCE, with the alternatives presented in columns, made more horizontal (left-right) saccades than vertical (up-down) ones. Eye-tracking data confirmed self-reported ANA to the risk attributes, with a 40% reduction in mean dwell time to the “probability of detecting a cancer” attribute (P = 0.001) and a 25% reduction to the “risk of unnecessary follow-up” attribute (P = 0.008). Conclusion. This study is one of the first to show how eye-tracking can be used to understand responses to a health care DCE, and it highlighted the potential impact of risk communication on respondents’ decision-making strategies. The results suggested that self-reported ANA to cost attributes may not be reliable.
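
For readers unfamiliar with these eye-tracking measures, the Python sketch below computes total dwell time to rectangular areas of interest (AOIs) and counts horizontal versus vertical saccades. The AOI names, coordinates, and fixation data are illustrative assumptions, not the study's materials.

```python
# Illustrative sketch of two gaze measures: total dwell time per predefined
# AOI and the split of inter-fixation movements into horizontal vs. vertical.
import numpy as np

def dwell_time_per_aoi(fix_x, fix_y, durations_ms, aois):
    """Sum fixation durations falling inside each rectangular AOI (x0, y0, x1, y1)."""
    totals = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = (fix_x >= x0) & (fix_x <= x1) & (fix_y >= y0) & (fix_y <= y1)
        totals[name] = float(np.sum(np.asarray(durations_ms)[inside]))
    return totals

def saccade_directions(fix_x, fix_y):
    """Classify each inter-fixation movement as horizontal or vertical."""
    dx = np.abs(np.diff(fix_x))
    dy = np.abs(np.diff(fix_y))
    return int(np.sum(dx > dy)), int(np.sum(dy > dx))   # (horizontal, vertical)

if __name__ == "__main__":
    x = np.array([200, 600, 210, 620])      # hypothetical fixation centres (px)
    y = np.array([300, 310, 500, 505])
    dur = np.array([250, 180, 300, 220])    # fixation durations (ms)
    aois = {"probability_of_detection": (100, 250, 400, 400),
            "risk_of_follow_up": (500, 250, 800, 400)}
    print(dwell_time_per_aoi(x, y, dur, aois))
    print(saccade_directions(x, y))
```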


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254867
Author(s):  
Jennifer Kee ◽  
Melinda Knuth ◽  
Joanna N. Lahey ◽  
Marco A. Palma

Eye-tracking is becoming an increasingly popular tool for understanding the underlying behavior driving human decisions. However, an important unanswered methodological question is whether the use of an eye-tracking device itself induces changes in participants’ behavior. We study this question using eight popular games in experimental economics, chosen for their varying levels of theorized susceptibility to social desirability bias. We implement a simple between-subjects design in which participants are randomly assigned to either a control or an eye-tracking treatment. In seven of the eight games, eye-tracking did not produce different outcomes. In the Holt and Laury risk assessment (HL), subjects who required multiple calibration attempts demonstrated more risk-averse behavior under eye-tracking. However, this effect appeared only during the first five (of ten) rounds. Because calibration difficulty is correlated with eye-tracking data quality, the standard practice of removing participants with low-quality eye-tracking data resulted in no difference between the treatment and control groups in HL. Our results suggest that experiments may incorporate eye-tracking equipment without inducing changes in the economic behavior of participants, particularly after observations with low-quality eye-tracking data are removed.
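
The analysis logic of filtering by eye-tracking data quality before comparing treatment and control groups can be sketched as follows; the data, column names, and values are hypothetical, not the experiment's code.

```python
# Minimal sketch: compare an outcome between eye-tracking and control groups,
# before and after excluding participants with low eye-tracking data quality.
import pandas as pd
from scipy import stats

def group_difference(df: pd.DataFrame) -> float:
    """P-value of a Welch t-test on the outcome between treatment and control."""
    treat = df.loc[df["eye_tracked"], "risk_aversion"]
    control = df.loc[~df["eye_tracked"], "risk_aversion"]
    return stats.ttest_ind(treat, control, equal_var=False).pvalue

df = pd.DataFrame({
    "eye_tracked":    [True, True, True, True, False, False, False, False],
    "risk_aversion":  [6.2, 5.9, 7.5, 7.8, 5.8, 6.0, 5.7, 6.1],   # e.g. safe choices in HL
    "calibration_ok": [True, True, False, False, True, True, True, True],
})

p_all = group_difference(df)
p_filtered = group_difference(df[df["calibration_ok"]])
print(f"all participants: p = {p_all:.3f}; after quality filter: p = {p_filtered:.3f}")
```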


2016 ◽  
Vol 9 (4) ◽  
Author(s):  
Francesco Di Nocera ◽  
Claudio Capobianco ◽  
Simon Mastrangelo

This short paper describes an update of A Simple Tool For Examining Fixations (ASTEF), developed to facilitate the examination of eye-tracking data and to compute a spatial statistics algorithm that has been validated as a measure of mental workload (namely, the Nearest Neighbor Index, NNI). The code is based on Matlab® 2013a and is currently distributed on the web as an open-source project. This implementation of ASTEF drops many functionalities included in the previous version that are no longer needed, given the wide availability of commercial and open-source software solutions for eye-tracking. This makes it easy to compute the NNI on eye-tracking data without having to learn complicated tools. The software also features an export function for creating a time series of the NNI values computed on each minute of the recording. This feature is crucial, given that the spatial distribution of fixations must be used to test hypotheses about the time course of mental workload.
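
For reference, the Nearest Neighbor Index can be computed from the standard Clark-Evans formula. The Python sketch below (a plain re-implementation under that assumption, not ASTEF's Matlab code) computes the NNI and a per-minute NNI time series from fixation coordinates.

```python
# Minimal NNI sketch: ratio of observed to expected mean nearest-neighbor
# distance for fixations on a display of known area, plus a per-minute series.
import numpy as np
from scipy.spatial import cKDTree

def nni(points: np.ndarray, area: float) -> float:
    """Values < 1 indicate clustering, ~1 randomness, > 1 dispersion."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)          # k=2: nearest neighbor other than the point itself
    observed = d[:, 1].mean()
    expected = 0.5 * np.sqrt(area / len(points))
    return observed / expected

def nni_per_minute(t_ms, fix_x, fix_y, area):
    """NNI computed on the fixations recorded within each minute."""
    minutes = (np.asarray(t_ms) // 60000).astype(int)
    pts = np.column_stack([fix_x, fix_y])
    return {m: nni(pts[minutes == m], area)
            for m in np.unique(minutes) if np.sum(minutes == m) > 2}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 180000, 300)                   # 3 minutes of fixation timestamps
    xy = rng.uniform([0, 0], [1920, 1080], (300, 2))  # fixations on a 1920x1080 screen
    print(nni_per_minute(t, xy[:, 0], xy[:, 1], area=1920 * 1080))
```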


Vision ◽  
2019 ◽  
Vol 3 (4) ◽  
pp. 55
Author(s):  
Kar ◽  
Corcoran

This paper presents a range of open-source tools, datasets, and software developed for quantitative, in-depth evaluation of eye gaze data quality. Eye tracking systems in contemporary vision research and applications face major challenges due to variable operating conditions such as user distance, head pose, and movements of the eye tracker platform. However, there is a lack of open-source tools and datasets that could be used for quantitatively evaluating an eye tracker’s data quality, comparing the performance of multiple trackers, or studying the impact of various operating conditions on a tracker’s accuracy. To address these issues, an open-source code repository named GazeVisual-Lib has been developed that contains a number of algorithms, visualizations, and software tools for detailed and quantitative analysis of an eye tracker’s performance and data quality. In addition, a new labelled eye gaze dataset, collected from multiple user platforms and operating conditions, is presented in an open data repository for benchmark comparison of gaze data from different eye tracking systems. The paper presents the concept, development, and organization of these two repositories, which are envisioned to improve the performance analysis and reliability of eye tracking systems.
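
As an example of the kind of metric such tools report, the Python sketch below derives gaze accuracy in degrees of visual angle from pixel offsets, screen geometry, and viewing distance. The function and its parameters are illustrative assumptions, not GazeVisual-Lib's API.

```python
# Hedged sketch: gaze accuracy in degrees of visual angle, derived from the
# pixel offset between estimated gaze and a known target, the display's pixel
# pitch, and the user's viewing distance.
import numpy as np

def angular_accuracy_deg(gaze_px, target_px, px_per_mm, distance_mm):
    """Mean angular error (degrees) between gaze estimates and a fixation target."""
    offsets_px = np.asarray(gaze_px, dtype=float) - np.asarray(target_px, dtype=float)
    offset_mm = np.linalg.norm(offsets_px, axis=1) / px_per_mm
    return float(np.degrees(np.arctan2(offset_mm, distance_mm)).mean())

if __name__ == "__main__":
    gaze = [[970, 548], [955, 535], [962, 545]]      # estimated gaze samples (px)
    target = [960, 540]                              # known target position (px)
    # ~0.28 mm per pixel on a 24" 1920x1080 display; user seated at 600 mm.
    print(angular_accuracy_deg(gaze, target, px_per_mm=1 / 0.28, distance_mm=600))
```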

