Fixation classification: how to merge and select fixation candidates

Author(s):  
Ignace T. C. Hooge
Diederick C. Niehorster
Marcus Nyström
Richard Andersson
Roy S. Hessels

Abstract
Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccades, blinks, fixations) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of the fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data of high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the choice of classification algorithm does not matter much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with durations longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
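As an illustration of what such merge-and-select rules can look like in practice, here is a minimal Python sketch. The candidate representation (onset/offset in ms, centroid in degrees) and the duration-weighted centroid used when merging are assumptions made for the example; the two thresholds are the values recommended in the abstract.

```python
# Minimal sketch of post-classification selection rules, assuming fixation
# candidates come from any upstream classifier as dicts with keys
# 'on', 'off' (ms) and 'x', 'y' (deg). Thresholds follow the abstract.
from math import hypot

MIN_SACCADE_AMP_DEG = 1.0   # merge candidates separated by smaller displacements
MIN_FIX_DUR_MS = 60.0       # discard candidates shorter than this after merging

def merge_and_select(candidates):
    """Merge fixation candidates split by tiny saccades, then drop short ones."""
    if not candidates:
        return []
    merged = [dict(candidates[0])]
    for cand in candidates[1:]:
        prev = merged[-1]
        # Amplitude of the intervening saccade: distance between centroids.
        amp = hypot(cand["x"] - prev["x"], cand["y"] - prev["y"])
        if amp < MIN_SACCADE_AMP_DEG:
            # Small saccade: merge the two candidates into one fixation,
            # weighting the new centroid by candidate duration.
            d1, d2 = prev["off"] - prev["on"], cand["off"] - cand["on"]
            total = d1 + d2
            prev["x"] = (prev["x"] * d1 + cand["x"] * d2) / total
            prev["y"] = (prev["y"] * d1 + cand["y"] * d2) / total
            prev["off"] = cand["off"]
        else:
            merged.append(dict(cand))
    # Selection: keep only fixations that are long enough after merging.
    return [f for f in merged if f["off"] - f["on"] >= MIN_FIX_DUR_MS]
```

Note that merging happens before the duration check, so two short candidates separated by a sub-threshold saccade can survive as one sufficiently long fixation.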

2019
Vol 12 (2)
Author(s):
Sangwon Lee
Yongha Hwang
Yan Jin
Sihyeong Ahn
Jaewan Park

Machine learning, particularly classification algorithms, constructs mathematical models from labeled data that can predict labels for new data. Using its capability to identify distinguishing patterns among multi-dimensional data, we investigated the impact of three factors on the observation of architectural scenes: individuality, education, and image stimuli. An analysis of the eye-tracking data revealed that (1) a velocity histogram was unique to individuals, (2) students of architecture and of other disciplines could be distinguished via endogenous parameters, and (3) the two groups were more distinct in their seeking of structural versus symbolic elements. Because classification algorithms learn automatically from data, reversing the usual direction of hypothesis-driven inference, we could identify relevant parameters and distinguishing eye-tracking patterns that have not been reported in previous studies.
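A hedged sketch of the kind of pipeline the abstract implies: compute a velocity histogram per recording and let an off-the-shelf classifier learn the labels. The bin edges, sampling rate, synthetic data, and choice of a random forest are illustrative assumptions, not the study's actual setup.

```python
# Velocity-histogram features for classifying eye-tracking recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def velocity_histogram(x_deg, y_deg, fs_hz, bins):
    """Histogram of point-to-point gaze speeds (deg/s) as a feature vector."""
    vx = np.diff(x_deg) * fs_hz
    vy = np.diff(y_deg) * fs_hz
    speed = np.hypot(vx, vy)
    hist, _ = np.histogram(speed, bins=bins, density=True)
    return hist

# Synthetic stand-in data: 20 recordings of 1000 gaze samples, two groups
# differing in sample-to-sample variability (a placeholder for real data).
rng = np.random.default_rng(0)
recordings = [(np.cumsum(rng.normal(0, s, 1000)), np.cumsum(rng.normal(0, s, 1000)))
              for s in [0.05] * 10 + [0.15] * 10]
labels = [0] * 10 + [1] * 10

bins = np.linspace(0, 500, 51)          # 50 speed bins from 0 to 500 deg/s
X = np.stack([velocity_histogram(x, y, 250.0, bins) for x, y in recordings])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```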


2018
Vol 95 (4)
pp. 948-970
Author(s):
Edmund W. J. Lee
Shirley S. Ho

This study examines the impact of photographic–textual and risk–benefit frames on visual attention, risk perception, and public support for nuclear energy and nanotechnology in Singapore. Using a 2 (photographic–textual vs. textual-only frames) × 2 (risk vs. benefit frames) × 2 (nuclear energy vs. nanotechnology) between-subjects design with eye-tracking data, the results showed that photographic–textual frames elicited more attention and had a partial amplification effect. However, this was observable only in the context of nuclear energy, where public support was lowest when participants were exposed to risk frames accompanied by photographs. Implications for theory and practice are discussed.


2013
Vol 55 (1)
pp. 105-130
Author(s):
Christian Purucker
Jan R. Landwehr
David E. Sprott
Andreas Herrmann

Analysis of eye-tracking data in marketing research has traditionally relied upon region-of-interest (ROI) methodology or the use of heatmaps. Both methods have clear disadvantages. Addressing this gap, the current research applies spatiotemporal scan statistics to the analysis and visualisation of eye-tracking data. Results of a sample experiment using anthropomorphic car faces demonstrate several advantages of the new method. In contrast to traditional approaches, scan statistics provide a means to scan eye-tracking data automatically in space and time for differing gaze clusters, with results that can be comprehensively visualised and statistically assessed.
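To make the idea concrete, the following sketch applies a Kulldorff-style Poisson scan over cylindrical space-time windows centered on each fixation. The uniform-baseline model and the parameter grid are assumptions for illustration; the real method additionally calibrates significance, typically via Monte Carlo permutation.

```python
# Simplified space-time scan statistic over fixation data.
import numpy as np

def scan_fixations(pts, radii, t_windows, area):
    """pts: (n, 3) array of fixation (x, y, t). area: size of the stimulus
    region, used for the uniform baseline. Returns the space-time cylinder
    (cx, cy, ct, r, w) with the highest log-likelihood ratio."""
    n = len(pts)
    t_total = pts[:, 2].max() - pts[:, 2].min()
    best = (None, -np.inf)
    for cx, cy, ct in pts:                            # candidate cylinder centers
        dist = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        for r in radii:
            in_space = dist <= r
            for w in t_windows:
                inside = in_space & (np.abs(pts[:, 2] - ct) <= w / 2)
                c = int(inside.sum())                 # observed fixations inside
                # Expected count if fixations were uniform in space and time.
                mu = n * (w / t_total) * (np.pi * r**2 / area)
                if 0 < mu < n and mu < c < n:         # Kulldorff-style Poisson LLR
                    llr = c * np.log(c / mu) + (n - c) * np.log((n - c) / (n - mu))
                    if llr > best[1]:
                        best = ((cx, cy, ct, r, w), llr)
    return best
```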


2011
Vol 40 (594)
Author(s):  
Susanne Bødker

Dual eye-tracking (DUET) is a promising methodology to study and support collaborative work. The method consists of simultaneously recording the gaze of two collaborators working on a common task. The main themes addressed in the workshop are eye-tracking methodology (how to translate gaze measures into descriptions of joint action, how to measure and model gaze alignment between collaborators, how to address the task specificity inherent to eye-tracking data) and, more generally, future applications of dual eye-tracking in CSCW. The DUET workshop will bring together scholars who currently develop the approach as well as a larger audience interested in applications of eye-tracking in collaborative situations. The workshop format will combine paper presentations and discussions. The papers are available online as PDF documents at http://www.dualeyetracking.org/DUET2011/.
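One common way to quantify gaze alignment between collaborators, offered here only as an illustrative sketch and not as any method prescribed by the workshop, is a lagged cross-recurrence measure: the fraction of synchronized samples on which the two gaze points fall within a chosen radius, computed across a range of time lags.

```python
# Cross-recurrence of two synchronized gaze streams on a shared display.
import numpy as np

def gaze_cross_recurrence(gaze_a, gaze_b, radius, max_lag):
    """gaze_a, gaze_b: (n, 2) arrays of synchronized gaze samples from two
    collaborators. Returns, for each lag in samples, the fraction of time
    the two gaze points are within `radius` of each other."""
    n = len(gaze_a)
    rec = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = gaze_a[lag:], gaze_b[:n - lag]   # a leads b by `lag` samples
        else:
            a, b = gaze_a[:n + lag], gaze_b[-lag:]  # b leads a
        d = np.hypot(a[:, 0] - b[:, 0], a[:, 1] - b[:, 1])
        rec[lag] = float(np.mean(d <= radius))
    return rec
```

The lag at which recurrence peaks is often read as who leads and who follows in the joint activity.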


2019
Vol 64 (2)
pp. 286-308
Author(s):  
El Mehdi Ibourk ◽  
Amer Al-Adwan

Abstract
Recent years have witnessed the emergence of new approaches to filmmaking, including virtual reality (VR), which is meant to achieve an immersive viewing experience through advanced electronic devices such as VR headsets. The VR industry is oriented toward developing content mainly in English and Japanese, leaving vast audiences unable to understand the original content or even enjoy this novel technology due to language barriers. This paper examines, through eye-tracking technology, the impact of subtitles on the viewing experience and behaviour of eight Arab participants in understanding the content in Arabic. It also provides insight into the mechanics of watching a 360-degree VR documentary and the factors that lead viewers to favour one subtitling mode over another in the spherical environment. To this end, a case study was designed to produce 120-degree subtitles and Follow Head Immediately subtitles, followed by the projection of the subtitled documentary through an eye-tracking VR headset. The analysis of the eye-tracking data is combined with post-viewing interviews in order to better understand the viewing experience of the Arab audience, their cognitive reception, and the reasons for favouring one type of subtitles over the other.
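For concreteness, the two subtitle modes can be sketched as placement rules in a simplified, yaw-only model. The three fixed anchors spaced 120 degrees apart and the nearest-copy rule are assumptions for illustration, not the study's implementation.

```python
# Schematic comparison of the two VR subtitle modes (yaw only, in degrees).
FIXED_ANCHORS_DEG = (0.0, 120.0, 240.0)   # three copies, 120 degrees apart

def visible_subtitle_yaw(head_yaw_deg, mode):
    """Return the world yaw at which the viewer sees the subtitle."""
    h = head_yaw_deg % 360.0
    if mode == "follow_head":
        # Follow Head Immediately: subtitle is locked to the camera,
        # so it is always centered in the current view.
        return h
    if mode == "fixed_120":
        # 120-degree mode: the viewer reads whichever fixed copy lies
        # closest to the current view direction (circular distance).
        return min(FIXED_ANCHORS_DEG,
                   key=lambda a: min(abs(h - a), 360.0 - abs(h - a)))
    raise ValueError(mode)
```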


2021
Vol 11 (1)
Author(s):
Jessica Dawson
Alan Kingstone
Tom Foulsham

Abstract
People are drawn to social, animate things more than inanimate objects. Previous research has also shown gaze following in humans, a process that has been linked to theory of mind (ToM). In three experiments, we investigated whether animacy and ToM are involved when making judgements about the location of a cursor in a scene. In Experiment 1, participants were told that this cursor represented the gaze of an observer and were asked to decide whether the observer was looking at a target object. This task is similar to that carried out by researchers manually coding eye-tracking data. The results showed that participants were biased to perceive the gaze cursor as directed towards animate objects (faces) compared to inanimate objects. In Experiments 2 and 3 we tested the role of ToM by presenting the same scenes to new participants, but now with the statement that the cursor was generated by a ‘random’ computer system or by a computer system designed to seek targets. The bias to report that the cursor was directed toward faces was abolished in Experiment 2 and minimised in Experiment 3. Together, the results indicate that people attach minds to the mere representation of an individual's gaze, and this attribution of mind influences what people believe an individual is looking at.


2019
Vol 40 (8)
pp. 850-861
Author(s):
Piotr Pietruski
Bartłomiej Noszczyk
Adriana M Paskal
Wiktor Paskal
Łukasz Paluch
...  

Abstract
Background
Little is known about breast cancer survivors’ perception of breast attractiveness. A better understanding of this subjective concept could contribute to the improvement of patient-reported outcomes after reconstructive surgeries and facilitate the development of new methods for assessing breast reconstruction outcomes.
Objectives
The aim of this eye-tracking (ET)-based study was to verify whether mastectomy altered women’s visual perception of breast aesthetics and symmetry.
Methods
A group of 30 women after unilateral mastectomy and 30 healthy controls evaluated the aesthetics and symmetry of various types of female breasts displayed as highly standardized digital images. Gaze patterns of women from the study groups were recorded using an ET system and subjected to a comparative analysis.
Results
Regardless of the study group, the longest fixation duration and the highest fixation number were found in the nipple-areola complex. This area was also the most common region of the initial fixation. Several significant between-group differences were identified; the gaze patterns of women after mastectomy were generally characterized by longer fixation times for the inframammary fold, lower pole, and upper half of the breast.
Conclusions
Mastectomy might affect women’s visual perception patterns during the evaluation of breast aesthetics and symmetry. ET data might improve our understanding of breast attractiveness and constitute the basis for a new reliable method for the evaluation of outcomes of reconstructive breast surgeries.
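The fixation metrics this abstract reports (fixation duration, fixation number, region of initial fixation) are straightforward to compute per area of interest (AOI). The rectangular AOIs and the input format in the sketch below are assumptions for illustration; the study's actual AOIs (e.g., nipple-areola complex, inframammary fold) would be defined on the stimulus images.

```python
# Per-AOI fixation metrics from a temporally ordered fixation list.
def aoi_metrics(fixations, aois):
    """fixations: list of (x, y, duration_ms), in temporal order.
    aois: dict name -> (x0, y0, x1, y1) rectangle. Returns per-AOI summaries."""
    stats = {name: {"fix_count": 0, "total_dur_ms": 0.0, "first_fix_index": None}
             for name in aois}
    for i, (x, y, dur) in enumerate(fixations):
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                s = stats[name]
                s["fix_count"] += 1
                s["total_dur_ms"] += dur
                if s["first_fix_index"] is None:
                    s["first_fix_index"] = i   # lower index = fixated earlier
    return stats
```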


2020
Vol 10 (13)
pp. 4508
Author(s):
Armel Quentin Tchanou
Pierre-Majorique Léger
Jared Boasen
Sylvain Senecal
Jad Adam Taher
...  

Gaze convergence of multiuser eye movements during simultaneous collaborative use of a shared system interface has been proposed as an important, albeit sparsely explored, construct in the human-computer interaction literature. Here, we propose a novel index for measuring the gaze convergence of user dyads and address its validity through two consecutive eye-tracking studies. Eye-tracking data of user dyads were recorded synchronously while they performed tasks simultaneously on shared system interfaces. Results indicate the validity of the proposed index for measuring the gaze convergence of dyads. Moreover, as expected, the index was positively associated with dyad task performance and negatively associated with dyad cognitive load. These results suggest the construct's theoretical and practical utility, for example in synchronized gaze-convergence displays in diverse settings. Further research, particularly into the construct's nomological network, is warranted.
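The abstract does not define the index itself, so the following is only a plausible stand-in rather than the authors' measure: a distance-based score that equals 1 when both members of the dyad always look at the same point and decreases toward 0 as their gaze points drift apart.

```python
# Illustrative gaze-convergence score for a dyad on a shared display.
import numpy as np

def convergence_index(gaze_a, gaze_b, screen_diag):
    """gaze_a, gaze_b: (n, 2) synchronized gaze samples of the two users.
    screen_diag: screen diagonal in the same units, used for normalization.
    Returns a score in [0, 1]; higher means more convergent gaze."""
    d = np.hypot(*(gaze_a - gaze_b).T)               # inter-gaze distances
    return float(1.0 - np.clip(d / screen_diag, 0.0, 1.0).mean())
```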


Author(s):  
Anine Riege
Amélie Gourdon-Kanhukamwe
Gaëlle Vallée-Tourangeau

Abstract
The present study introduces a covert eye-tracking procedure as an innovative approach to investigating the adequacy of research paradigms used in psychology. In light of the ongoing debate regarding ego depletion, the frequently used “attention-control video task” was chosen to illustrate the method. Most participants did not guess that their eyes had been monitored, but some participants had to be excluded due to a poor tracking ratio. The eye-tracking data revealed that the attention-control instructions had a significant impact on the number of fixations, revisits, fixation durations, and the proportion of long fixation durations on the areas of interest (AOIs; all BF10 > 18.2). However, the number of fixations and the proportion of long fixation durations did not mediate cognitive performance. The results illustrate the promise of covert eye-tracking methodology for assessing task compliance, as well as adding to the current discussion regarding whether the difficulties of replicating “ego depletion” may be partly due to poor task compliance in the video task.
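As a sketch of how Bayes factors like the reported BF10 > 18.2 can be obtained for such between-condition comparisons, the following uses the pingouin library's t-test with its default JZS prior on synthetic fixation counts; both the prior choice and the data are assumptions, not the study's actual analysis.

```python
# Bayes factor for a between-condition difference in fixation counts.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(1)
fix_control = rng.poisson(42, 40)     # synthetic fixation counts, control group
fix_attention = rng.poisson(55, 40)   # synthetic attention-control condition

res = pg.ttest(fix_attention, fix_control)   # result DataFrame includes 'BF10'
print(res["BF10"])                            # evidence for a group difference
```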


Sensors
2021
Vol 21 (22)
pp. 7668
Author(s):  
Niharika Kumari
Verena Ruf
Sergey Mukhametov
Albrecht Schmidt
Jochen Kuhn
...  

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities, in comparison to stationary eye trackers, to real settings such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as it is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities of using object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different convolutional neural networks (CNNs), a Faster Region-Based CNN (Faster R-CNN), You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user's gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
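The gaze-to-object assignment step can be sketched as follows: run a detector on each scene-camera frame and label the gaze sample with whichever detected bounding box contains it. The `detect` callable below is a placeholder for any YOLO-style model returning (label, confidence, x0, y0, x1, y1) boxes; the confidence threshold and the smallest-box tie-breaking rule are illustrative assumptions.

```python
# Assign mobile eye-tracking gaze samples to detected objects, frame by frame.
def assign_gaze_to_object(frames, gaze_points, detect, min_conf=0.5):
    """frames: iterable of scene-camera images; gaze_points: matching (x, y)
    pixel positions. detect(frame) yields (label, conf, x0, y0, x1, y1) boxes.
    Yields the label of the object looked at in each frame, or None."""
    for frame, (gx, gy) in zip(frames, gaze_points):
        hits = []
        for label, conf, x0, y0, x1, y1 in detect(frame):
            if conf >= min_conf and x0 <= gx <= x1 and y0 <= gy <= y1:
                hits.append((label, (x1 - x0) * (y1 - y0)))
        # Prefer the smallest containing box: usually the most specific object.
        yield min(hits, key=lambda h: h[1])[0] if hits else None
```

In practice this per-frame assignment is where the optical flow estimation mentioned above helps, by stabilizing box positions between detector runs.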

