The Effects of Calibration Target, Screen Location, and Movement Type on Infant Eye‐Tracking Data Quality

Infancy ◽  
2019 ◽  
Vol 24 (4) ◽  
pp. 636-662 ◽  
Author(s):  
Karola Schlegelmilch ◽  
Annie E. Wertz


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254867
Author(s):  
Jennifer Kee ◽  
Melinda Knuth ◽  
Joanna N. Lahey ◽  
Marco A. Palma

Eye-tracking is becoming an increasingly popular tool for understanding the behavior underlying human decisions. However, an important unanswered methodological question is whether the use of an eye-tracking device itself induces changes in participants’ behavior. We study this question using eight popular games from experimental economics, chosen for their varying levels of theorized susceptibility to social desirability bias. We implement a simple between-subjects design in which participants are randomly assigned to either a control or an eye-tracking treatment. In seven of the eight games, eye-tracking did not produce different outcomes. In the Holt and Laury risk assessment (HL), subjects who required multiple calibration attempts demonstrated more risk-averse behavior under eye-tracking. However, this effect appeared only during the first five (of ten) rounds. Because calibration difficulty is correlated with eye-tracking data quality, the standard practice of removing participants with low eye-tracking data quality resulted in no difference between the treatment and control groups in HL. Our results suggest that experiments may incorporate eye-tracking equipment without inducing changes in participants’ economic behavior, particularly after observations with low-quality eye-tracking data are removed.


2021 ◽  
Author(s):  
Tim Schneegans ◽  
Matthew D. Bachman ◽  
Scott A. Huettel ◽  
Hauke Heekeren

Recent developments in open-source online eye-tracking algorithms suggest that they may be ready for use in online studies, thereby overcoming the limitations of in-lab eye-tracking studies. However, to date there have been limited tests of the efficacy of online eye-tracking for decision-making research and cognitive psychology. In this online study, we explore the potential and the limitations of online eye-tracking tools for decision-making research using the webcam-based open-source library Webgazer (Papoutsaki et al., 2016). Our study had two aims. For the first aim, we assessed different variables that might affect the quality of eye-tracking data. In our experiment (N = 210), we manipulated a within-subjects variable (adding a provisional chin rest) and a between-subjects variable (corrected vs. uncorrected vision). Contrary to our hypotheses, we found that the chin rest had a negative effect on data quality. In accordance with our hypotheses, we found lower-quality data from participants who wore glasses. Other influencing factors, such as frame rate, are also discussed. For the second aim (N = 44), we attempted to replicate a decision-making paradigm in which the eye-tracking data had originally been acquired offline in the laboratory (Amasino et al., 2019). We found some relations between choice behavior and eye-tracking measures, such as the last fixation and the distribution of gaze points immediately before the choice. However, several effects could not be reproduced, such as the overall distribution of gaze points or dynamic search strategies. Our hypotheses therefore received only partial support. This study offers practical insights into the feasibility of online eye-tracking for decision-making research, as well as for researchers from other disciplines.


Infancy ◽  
2015 ◽  
Vol 20 (6) ◽  
pp. 601-633 ◽  
Author(s):  
Roy S. Hessels ◽  
Richard Andersson ◽  
Ignace T. C. Hooge ◽  
Marcus Nyström ◽  
Chantal Kemner

2020 ◽  
Vol 52 (6) ◽  
pp. 2515-2534 ◽  
Author(s):  
Diederick C. Niehorster ◽  
Raimondas Zemblys ◽  
Tanya Beelders ◽  
Kenneth Holmqvist

The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
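
The two precision measures named above have standard formulations that can be computed directly from a segment of gaze samples recorded during fixation. The following minimal Python sketch (NumPy only; function names and sample values are illustrative, not taken from the paper) computes RMS-S2S as the root mean square of sample-to-sample displacements and STD as the combined standard deviation of horizontal and vertical gaze position:

import numpy as np

def rms_s2s(x, y):
    # Root mean square of sample-to-sample displacements:
    # sqrt(mean((x[i+1]-x[i])^2 + (y[i+1]-y[i])^2))
    dx, dy = np.diff(x), np.diff(y)
    return np.sqrt(np.mean(dx**2 + dy**2))

def std_combined(x, y):
    # Combined standard deviation of gaze position around its mean:
    # sqrt(var(x) + var(y))
    return np.sqrt(np.var(x) + np.var(y))

# Illustrative fixation-period samples in degrees of visual angle
x = np.array([0.02, 0.03, 0.01, 0.02, 0.04, 0.03])
y = np.array([-0.01, 0.00, 0.01, -0.02, 0.00, 0.01])
print(rms_s2s(x, y), std_combined(x, y))

The ratio of RMS-S2S to STD is one simple way to summarize the temporal character of the noise; the signal-type and magnitude measures actually proposed in the paper may be defined differently.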


2022 ◽  
Vol 12 ◽  
Author(s):  
Anna Bánki ◽  
Martina de Eccher ◽  
Lilith Falschlehner ◽  
Stefanie Hoehl ◽  
Gabriela Markova

Online data collection with infants presents special opportunities and challenges for developmental research. One of the most prevalent methods in infancy research is eye-tracking, which has been widely applied in laboratory settings to assess cognitive development. Technological advances now allow eye-tracking to be conducted online with various populations, including infants. However, the accuracy and reliability of online infant eye-tracking remain to be comprehensively evaluated. No research to date has directly compared webcam-based and in-lab eye-tracking data from infants, as has been done with data from adults. The present study provides a direct comparison of in-lab and webcam-based eye-tracking data from infants who completed an identical looking-time paradigm in two different settings (in the laboratory or online at home). We assessed 4- to 6-month-old infants (n = 38) in an eye-tracking task that measured the detection of audio-visual asynchrony. Webcam-based and in-lab eye-tracking data were compared on eye-tracking and video data quality, infants’ viewing behavior, and experimental effects. Results revealed no differences between the in-lab and online settings in the frequency of technical issues or in participant attrition rates. Video data quality was comparable between settings in terms of completeness and brightness, despite lower frame rate and resolution online. Eye-tracking data quality was higher in the laboratory than online, except in the case of relative sample loss. The quantity of gaze data recorded by eye-tracking was significantly lower than that recorded by video in both settings. In valid trials, eye-tracking and video data captured infants’ viewing behavior uniformly, irrespective of setting. Despite the common challenges of infant eye-tracking across experimental settings, our results point toward the need to further improve the precision of online eye-tracking with infants. Taken together, online eye-tracking is a promising tool for assessing infants’ gaze behavior but requires careful data quality control. The demographic composition of both samples differed from the general population in caregiver education: our samples comprised caregivers with higher-than-average education levels, challenging the notion that online studies will per se reach more diverse populations.
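
As a point of reference for one of the quality metrics compared across settings, relative sample loss can be computed along the following lines. This is a minimal sketch assuming a definition of loss as the shortfall of valid gaze samples relative to the number expected at the device's nominal sampling rate; the study's exact operationalization may differ, and all names and values here are illustrative:

import numpy as np

def relative_sample_loss(valid, trial_duration_s, nominal_rate_hz):
    # Proportion of expected gaze samples that are missing or invalid,
    # relative to the number a tracker running at its nominal rate
    # would deliver over the trial.
    expected = trial_duration_s * nominal_rate_hz
    return 1.0 - np.count_nonzero(valid) / expected

# Illustrative trial: 10 s at a nominal 30 Hz webcam rate, 240 valid samples
valid = np.concatenate([np.ones(240, dtype=bool), np.zeros(30, dtype=bool)])
print(relative_sample_loss(valid, trial_duration_s=10, nominal_rate_hz=30))  # 0.2, i.e., 20% loss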


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing. Eye-tracking data has previously been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word predictions, past studies have usually relied on a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a massive, natural, and coherent discourse as stimuli for collecting reading-time data. This study trains two state-of-the-art computational models (surprisal, and semantic (dis)similarity computed from word vectors by linear discriminative learning (LDL)), which together measure knowledge of both the syntagmatic and paradigmatic structure of language. We develop a 'dynamic approach' to computing semantic (dis)similarity. This is the first time that these two computational models have been combined. Models are evaluated using advanced statistical methods. In order to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is used for comparison with our 'dynamic' approach. The two computational models and fixed-effects statistical models can be used to cross-verify the findings, thus ensuring that the results are reliable. All results support the conclusion that surprisal and semantic similarity are opposed in predicting the reading times of words, although both make good predictions. Additionally, our 'dynamic' approach performs better than the popular cosine method. The findings of this study are therefore of significance for acquiring a better understanding of how humans process words in real-world contexts and how they make predictions in language cognition and processing.
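
Of the predictors discussed above, surprisal and the cosine method of computing semantic (dis)similarity have standard formulations; the study's LDL-based 'dynamic' approach is not reproduced here. A minimal Python sketch with placeholder probabilities and word vectors (not values from the study) illustrates the two baseline quantities:

import numpy as np

def surprisal(p_word_given_context):
    # Surprisal in bits: -log2 P(word | context); higher values mean
    # the word is less expected given its preceding context.
    return -np.log2(p_word_given_context)

def cosine_dissimilarity(v1, v2):
    # 1 - cosine similarity between two word vectors.
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return 1.0 - cos

# Placeholder values for illustration only
print(surprisal(0.05))  # an unexpected word -> high surprisal
v_word_a = np.array([0.2, 0.7, 0.1])
v_word_b = np.array([0.1, 0.6, 0.3])
print(cosine_dissimilarity(v_word_a, v_word_b))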


2015 ◽  
Vol 23 (9) ◽  
pp. 1508
Author(s):  
Qiandong WANG ◽  
Qinggong LI ◽  
Kaikai CHEN ◽  
Genyue FU

2019 ◽  
Vol 19 (2) ◽  
pp. 345-369 ◽  
Author(s):  
Constantina Ioannou ◽  
Indira Nurdiani ◽  
Andrea Burattin ◽  
Barbara Weber
