An algorithmic approach to determine expertise development using object-related gaze pattern sequences

Author(s):  
Felix S. Wang ◽  
Céline Gianduzzo ◽  
Mirko Meboldt ◽  
Quentin Lohmeyer

Eye tracking (ET) technology is increasingly used to quantify visual behavior in the study of the development of domain-specific expertise. However, identifying and measuring distinct gaze patterns with traditional ET metrics has been challenging, and the insights gained have been inconclusive about the nature of expert gaze behavior. In this article, we introduce an algorithmic approach for extracting object-related gaze sequences and determining task-related expertise by investigating the development of gaze sequence patterns during a multi-trial study of a simplified airplane assembly task. We demonstrate the algorithm in a study in which novice (n = 28) and expert (n = 2) eye movements were recorded over successive trials (n = 8), allowing us to verify whether similar patterns develop with increasing expertise. In the proposed approach, area-of-interest (AOI) sequences were transformed into string representations and processed using the k-mer method, well known in computational biology. Our results for expertise development suggest that basic tendencies are visible in traditional ET metrics, such as fixation duration, but are much more evident for k-mers of k > 2. With increased on-task experience, the occurrence of expert k-mer patterns in novice gaze sequences increased significantly (p < 0.001). The results illustrate that the multi-trial k-mer approach is suitable for revealing specific cognitive processes and can quantify learning progress using gaze patterns that carry both spatial and temporal information, which could provide a valuable tool for novice training and expert assessment.
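
A minimal sketch of the k-mer step described above (not the authors' implementation; the gaze strings are invented for illustration): each fixated AOI is encoded as a character, and the resulting string is scanned with a sliding window of length k.

```python
from collections import Counter

def aoi_kmers(aoi_sequence, k):
    """Count all length-k substrings (k-mers) in an AOI string.

    aoi_sequence: fixation sequence encoded as one character per AOI,
    e.g. 'AABCAB' for visits to AOIs A, B, and C.
    """
    return Counter(aoi_sequence[i:i + k] for i in range(len(aoi_sequence) - k + 1))

# Hypothetical gaze strings; real data would come from AOI-mapped fixations.
expert = aoi_kmers("ABCABCABD", k=3)
novice = aoi_kmers("ACBBDACAB", k=3)

# Overlap of expert patterns in the novice sequence (illustrative measure only).
shared = sum((expert & novice).values())
print(expert.most_common(3), shared)
```

Tracking how this overlap grows across successive trials would, in the same spirit as the abstract, indicate novice gaze sequences converging toward expert patterns.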

2021 ◽  
Vol 5 ◽  
Author(s):  
Christian Kosel ◽  
Doris Holzberger ◽  
Tina Seidel

The paper addresses cognitive processes during a teacher's professional task of assessing learning-relevant student characteristics. We explore how eye-movement patterns (scanpaths) differ between expert and novice teachers in an assessment situation. In an eye-tracking experiment, participants watched an authentic video of a classroom lesson and were subsequently asked to assess five different students. Instead of using the typically reported averaged gaze data (e.g., number of fixations), we used gaze patterns as an indicator of visual behavior. We extracted scanpath patterns, compared them qualitatively (common sub-patterns) and quantitatively (scanpath entropy) between experts and novices, and related teachers' visual behavior to their assessment competence. Results show that teachers' scanpaths were idiosyncratic and more similar to those of teachers from the same expertise group. Moreover, experts monitored all target students more regularly and made recurring scans to re-adjust their assessments. Lastly, this behavior was quantified using Shannon's entropy score. Results indicate that experts' scanpaths were more complex, involved more frequent revisits of all students, and that experts shifted their attention between all students with equal probability. Experts' visual behavior was also statistically related to higher judgment accuracy.
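
A minimal sketch of a Shannon-entropy score for a scanpath, assuming entropy is computed over the distribution of AOI visits (the paper's exact formulation may differ, e.g., entropy over transitions); the scanpaths below are invented.

```python
import math
from collections import Counter

def scanpath_entropy(scanpath):
    """Shannon entropy (in bits) of the AOI visit distribution.

    scanpath: sequence of AOI labels, e.g. students S1..S5.
    Higher entropy = attention spread more evenly across targets.
    """
    counts = Counter(scanpath)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical scanpaths: an "expert" monitoring all five students evenly
# versus a "novice" dwelling on only two of them.
print(scanpath_entropy(["S1", "S2", "S3", "S4", "S5"] * 4))   # ~2.32 bits, the maximum for 5 AOIs
print(scanpath_entropy(["S1", "S1", "S2", "S1", "S2", "S1"]))  # lower entropy
```

Equal-probability attention shifts between all students, as reported for the experts, correspond to the maximal-entropy case in this toy measure.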


Author(s):  
Samiullah Paracha ◽  
Toshiro Takahara ◽  
Sania Jehanzeb

The main goal of this research is to investigate how learners with different cultural backgrounds differ in their interaction styles and visual behavior in multimedia-enhanced education, specifically comparing groups from African and Asian regions. The researchers conducted a controlled eye-tracking experiment to explore and evaluate the visual behavior of African, Afghan, Japanese, and Chinese learners scanning different online multimedia content. The analysis of their eye-gaze patterns and heat maps revealed significant differences in learners' interaction styles and in gender-, color-, text-, and multimedia-related preferences. This cross-cultural investigation contributes to the effective use of multimedia technologies in education, ultimately increasing learners' engagement and retention.


2020 ◽  
pp. 073563312097861
Author(s):  
Marko Pejić ◽  
Goran Savić ◽  
Milan Segedinac

This study proposes a software system for determining gaze patterns in on-screen testing. The system applies machine learning techniques to eye-movement data obtained from an eye-tracking device to categorize students according to their gaze behavior pattern while solving an on-screen test. These patterns are determined by converting eye-movement coordinates into a sequence of regions of interest. The proposed software system extracts features from the sequence and performs clustering that groups students by their gaze pattern. To determine gaze patterns, the system contains components for communicating with an eye-tracking device, collecting and preprocessing students' gaze data, and visualizing data using different presentation methods. This study presents a methodology for determining gaze patterns and the implementation details of the proposed software. The approach was evaluated by determining the gaze patterns of 51 undergraduate students who took a general knowledge test containing 20 questions. The study aims to provide a software infrastructure that can use students' gaze patterns as an additional indicator of their reading behavior, attention allocation, and processing difficulty, among other factors.
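
A minimal sketch of such a pipeline, assuming hypothetical on-screen regions of interest ("question", "answers", "navigation") and invented visit sequences; the feature set (dwell shares plus switch rate) and k-means are illustrative choices, not necessarily the system's.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def roi_features(roi_sequence, rois=("question", "answers", "navigation")):
    """Turn an ROI visit sequence into a simple feature vector:
    relative visit share per ROI plus the rate of ROI switches."""
    counts = Counter(roi_sequence)
    total = max(len(roi_sequence), 1)
    shares = [counts[r] / total for r in rois]
    switches = sum(a != b for a, b in zip(roi_sequence, roi_sequence[1:]))
    return shares + [switches / total]

# Hypothetical sequences for three students; real sequences would be built
# by mapping raw gaze coordinates onto on-screen regions of interest.
sequences = [
    ["question"] * 6 + ["answers"] * 2,
    ["question", "answers"] * 4,
    ["navigation"] * 3 + ["question"] * 2 + ["answers"] * 3,
]
X = np.array([roi_features(s) for s in sequences])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment per student
```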


2018 ◽  
Vol 120 (4) ◽  
pp. 1602-1615 ◽  
Author(s):  
Anouk J. de Brouwer ◽  
Mohammed Albaghdadi ◽  
J. Randall Flanagan ◽  
Jason P. Gallivan

Successful motor performance relies on our ability to adapt to changes in the environment by learning novel mappings between motor commands and sensory outcomes. Such adaptation is thought to involve two distinct mechanisms: an implicit, error-based component linked to slow learning and an explicit, strategic component linked to fast learning and savings (i.e., faster relearning). Because behavior, at any given moment, is the combined result of these two processes, it has remained a challenge to parcellate their relative contributions to performance. The explicit component of visuomotor rotation (VMR) learning has recently been measured by having participants verbally report the aiming strategy used to counteract the rotation. However, this procedure has been shown to magnify the explicit component. Here we tested whether task-specific eye movements, a natural component of reach planning that has been little studied in motor learning tasks, can provide a direct readout of the state of the explicit component during VMR learning. We show, by placing targets on a visible ring and including a delay between target presentation and reach onset, that individual differences in gaze patterns during sensorimotor learning are linked to participants' rates of learning and their expression of savings. Specifically, we find that participants who, during reach planning, naturally fixate an aimpoint rotated away from the target location show faster initial adaptation and readaptation 24 h later. Our results demonstrate that gaze behavior can not only uniquely identify individuals who implement cognitive strategies during learning but also reveal how their implementation is linked to differences in learning. NEW & NOTEWORTHY Although it is increasingly well appreciated that sensorimotor learning is driven by two separate components, an error-based process and a strategic process, it has remained a challenge to identify their relative contributions to performance. Here we demonstrate that task-specific eye movements provide a direct readout of explicit strategies during sensorimotor learning in the presence of visual landmarks. We further show that individual differences in gaze behavior are linked to learning rate and savings.
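
A minimal sketch of how such a gaze readout could be quantified, assuming targets placed on a ring centered at (cx, cy); the function and coordinates are hypothetical, not the authors' analysis code.

```python
import math

def gaze_aim_angle(fix_x, fix_y, target_x, target_y, cx=0.0, cy=0.0):
    """Signed angle (degrees) between the fixated point and the target,
    measured around the ring center (cx, cy). A nonzero angle suggests
    the participant is fixating an aimpoint rotated away from the target,
    i.e., an explicit re-aiming strategy during VMR learning."""
    ang_fix = math.atan2(fix_y - cy, fix_x - cx)
    ang_tgt = math.atan2(target_y - cy, target_x - cx)
    deg = math.degrees(ang_fix - ang_tgt)
    return (deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

# Hypothetical trial: target at 0 degrees on a unit ring, fixation near
# 40 degrees, consistent with countering a 40-degree cursor rotation.
fx, fy = math.cos(math.radians(40)), math.sin(math.radians(40))
print(round(gaze_aim_angle(fx, fy, 1.0, 0.0), 1))  # -> 40.0
```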


Author(s):  
Simon Harrison ◽  
Robert F. Williams

Lifeguards stationed opposite their swimzone on a beach in southwest France huddle around a diagram in the sand; the Head Lifeguard points to the sun then looks at the swimzone. What is going on here? Our paper examines two excerpts from this interaction to explore how lifeguards manage an instruction activity that arises in addition to the task of monitoring the swimzone. Building on frame analysis and multiactivity in social interaction, we focus on the role of gaze behavior in maintaining a sustained orientation to the swimzone as a distinct activity in this setting. Multimodal, sequential analyses of extracts from the video data show that orientation to the lifeguarding task is sustained primarily by body orientation and gaze patterns that routinely return to the swimzone. This is supported when sustained orientation away from the swimzone leads to the momentary suspension of the instruction activity and consequent re-organization of the interaction, illustrating the normative and visible nature of managing multiactivity. These gaze behaviors and interactive patterns constitute practices of professional vision among beach lifeguards.


2017 ◽  
Vol 54 (5) ◽  
pp. 562-570 ◽  
Author(s):  
Holly Rayson ◽  
Christine E. Parsons ◽  
Katherine S. Young ◽  
Timothy E.E. Goodacre ◽  
Morten L. Kringelbach ◽  
...  

Objective: Early mother-infant interactions are impaired in the context of infant cleft lip and are associated with adverse child psychological outcomes, but the nature of these interaction difficulties is not yet fully understood. The aim of this study was to explore adult gaze behavior and cuteness perception, which are particularly important during early social exchanges, in response to infants with cleft lip, in order to investigate potential foundations for the interaction difficulties seen in this population. Methods: Using an eye tracker, eye movements were recorded as adult participants viewed images of infant faces with and without cleft lip. Participants also rated each infant on a scale of cuteness. Results: Participants fixated significantly longer on the mouths of infants with cleft lip, at the expense of fixations on the eyes. Severity of cleft lip was associated with the strength of this fixation bias, with participants looking even longer at the mouths of infants with the most severe clefts. Infants with cleft lip were rated as significantly less cute than unaffected infants. Men rated infants as less cute than women did overall and gave particularly low ratings to infants with cleft lip. Conclusions: The results demonstrate that the limited disturbance of infant facial configuration caused by cleft lip can significantly alter adult gaze patterns and cuteness perception. Our findings could have important implications for early interactions and may help in the development of interventions to foster healthy development in infants with cleft lip.
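
A minimal sketch of the kind of dwell-time measure involved here, with invented fixation durations; the AOI labels and the bias score are illustrative, not the study's exact metric.

```python
def aoi_dwell_shares(fixations):
    """Proportion of total fixation time spent on each AOI.

    fixations: list of (aoi_label, duration_ms) tuples for one image.
    """
    total = sum(d for _, d in fixations)
    shares = {}
    for aoi, d in fixations:
        shares[aoi] = shares.get(aoi, 0) + d / total
    return shares

# Hypothetical fixation records for one face image.
shares = aoi_dwell_shares([("eyes", 820), ("mouth", 1260), ("nose", 240)])
mouth_bias = shares["mouth"] - shares["eyes"]  # positive = mouth bias
print({k: round(v, 2) for k, v in shares.items()}, round(mouth_bias, 2))
```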


Author(s):  
Abner Cardoso Da Silva ◽  
Alberto Barbosa Raposo ◽  
Cesar Augusto Sierra Franco

Easier access to virtual reality head-mounted displays has facilitated the use of this technology in research. In parallel, the integration of these devices with eye trackers has enabled new perspectives on visual attention analysis in virtual environments. Different research and application fields have found in such technologies a viable way to train and assess individuals by reproducing, at low cost, situations that are not easily recreated in real life. In this context, our proposal aims to develop a model for measuring characteristics of safety professionals' gaze behavior during the hazard detection process.


2021 ◽  
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Madelene Alanenpää ◽  
Ginevra Castellano

Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time, or how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions separated by multiple days of zero exposure. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye tracker and gauge their perception of the robot and their engagement with it and the joint task using questionnaires. Results show that gaze aversion in a social chat is an indicator of a robot's uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, analyses of gaze patterns in repeated interactions reveal that people's mutual gaze in a social chat develops congruently with their perception of the robot over time. These are key findings for the HRI community, as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
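
A minimal sketch of the kind of gaze-proportion measure such an analysis could use, with invented gaze episodes; the target labels and session length are hypothetical.

```python
def gaze_proportions(episodes, session_s):
    """Fraction of session time spent gazing at each target.

    episodes: list of (target, seconds) gaze episodes from a wearable
    eye tracker, e.g. 'robot_face', 'task_object', 'elsewhere'.
    """
    return {t: sum(s for tt, s in episodes if tt == t) / session_s
            for t in {tt for tt, _ in episodes}}

# Hypothetical joint-task session: per the abstract, gaze on the shared
# task object, not on the robot, is the stronger engagement predictor.
episodes = [("task_object", 95.0), ("robot_face", 30.0), ("elsewhere", 25.0)]
print(gaze_proportions(episodes, session_s=150.0))
```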

