gaze behaviour
Recently Published Documents


TOTAL DOCUMENTS: 184 (FIVE YEARS: 87)

H-INDEX: 17 (FIVE YEARS: 4)

Author(s): T. van Biemen, R.R.D. Oudejans, G.J.P. Savelsbergh, F. Zwenk, D.L. Mann

In foul decision-making by football referees, visual search is important for gathering the task-specific information needed to determine whether a foul has occurred. Yet little is known about the visual search behaviours underpinning excellent on-field decisions. The aim of this study was to examine the on-field visual search behaviour of elite and sub-elite football referees when calling a foul during a match. In doing so, we also compared the accuracy and gaze behaviour of correct and incorrect calls. Elite and sub-elite referees (elite: N = 5, Mage ± SD = 29.8 ± 4.7 yrs, Mexperience ± SD = 14.8 ± 3.7 yrs; sub-elite: N = 9, Mage ± SD = 23.1 ± 1.6 yrs, Mexperience ± SD = 8.4 ± 1.8 yrs) officiated an actual football game while wearing a mobile eye-tracker, with on-field visual search behaviour compared between skill levels when calling a foul (Nelite = 66; Nsub-elite = 92). Results revealed that elite referees relied on a higher search rate (more fixations of shorter duration) than sub-elites, but with no differences in where they allocated their gaze, indicating that elites searched faster but did not necessarily direct gaze towards different locations. Correct decisions were associated with higher gaze entropy (i.e. less structure). By relying on more structured gaze patterns when making incorrect decisions, referees may fail to pick up information specific to the foul situation. Referee development programmes might benefit from challenging the speed of information pick-up while avoiding pre-determined gaze patterns, to improve the interpretation of fouls and increase referees' decision-making performance.
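Gaze entropy of the kind reported here is typically the Shannon entropy of the distribution of fixations over areas of interest (AOIs): higher entropy means gaze spread more evenly across locations, lower entropy means a more structured, repetitive pattern. A minimal sketch with hypothetical AOI labels (the labels and sequences are illustrative, not taken from the study):

```python
import math
from collections import Counter

def gaze_entropy(fixation_aois):
    """Shannon entropy (bits) of the fixation distribution over AOIs.

    Higher entropy = gaze spread evenly across locations (less structure);
    lower entropy = gaze concentrated on few locations (more structure).
    """
    counts = Counter(fixation_aois)
    total = sum(counts.values())
    return sum((n / total) * math.log2(total / n) for n in counts.values())

# Hypothetical fixation sequences during two foul calls:
structured = ["ball", "ball", "ball", "ball"]        # gaze locked on one AOI
exploratory = ["ball", "legs", "torso", "opponent"]  # gaze spread over AOIs

print(gaze_entropy(structured))   # 0.0 bits
print(gaze_entropy(exploratory))  # 2.0 bits
```

With four equally visited AOIs the entropy is log2(4) = 2 bits, the maximum for four locations; a single repeatedly fixated AOI gives 0 bits.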


PLoS ONE, 2021, Vol 16 (12), pp. e0260814
Author(s): Nazire Duran, Anthony P. Atkinson

Certain facial features provide useful information for the recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy, and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations that would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye, a cheek or the central brow. Foveating the brow led to equivocal results for anger recognition across the two experiments, which might be due to the different combinations of emotions used. There was no consistent evidence that reflexive first saccades targeted emotion-relevant features; instead, they targeted the feature closest to the initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. The duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth correlated positively with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features contributes to emotion recognition, but that these features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.


2021, Vol 9 (4), pp. 92-115
Author(s): Olli Maatta, Nora McIntyre, Jussi Palomäki, Markku S. Hannula, Patrik Scheinin, ...

Mobile eye-tracking research has provided evidence both on teachers' visual attention in relation to their intentions and on teachers' student-centred gaze patterns. However, the role of a teacher's eye movements when giving instructions is unexplored. In this study we used mobile eye-tracking to investigate six teachers' gaze patterns as they gave task instructions for a geometry problem in four different phases of a mathematical problem-solving lesson. We analysed the teachers' eye-tracking data, their verbal data, and classroom video recordings. Our paper brings forth a novel interpretative lens for teachers' pedagogical intentions communicated by gaze during teacher-led moments such as introducing new tasks, reorganizing the social structures of students for collaboration, and lesson wrap-ups. A change in the students' task changed the teachers' gaze patterns, which may indicate a change in pedagogical intention. We found that teachers gazed at students throughout the lesson, whereas their focus was on task-related targets more during collaborative instruction-giving than during the introductory and reflective task instructions. Hence, to contribute to the present discussion on teacher gaze, we suggest two previously undetected gaze types: contextualizing gaze for task readiness and collaborative gaze for task focus.


2021, pp. 147715352110557
Author(s): A Batool, P Rutherford, P McGraw, T Ledgeway, S Altomonte

When looking out of a window, natural views are usually associated with restorative qualities and are given a higher preference than urban scenes. Previous research has shown that gaze behaviour might differ based on the natural or urban content of views. A lower number of fixations has been associated with the aesthetic evaluation of natural scenes while, when looking at an urban environment, a high preference has been correlated with more exploratory gaze behaviours. To characterise gaze correlates of view preference across natural and urban scenes, we collected and analysed experimental data featuring subjective preference ratings, eye-tracking measures, verbal reasoning associated with preference, and nature relatedness scores. Consistent with the literature, our results confirm that natural scenes are preferred over urban views and that gaze behaviours depend on view type and preference. Observing natural scenes was characterised by fewer fixations and saccades, and longer fixation durations, compared to urban views. However, for both view types, the most-preferred scenes led to more fixations and saccades. Our findings also showed that nature relatedness may be correlated with the visual exploration of scenes. Individual preferences and personality attributes, therefore, should be accounted for in studies on view preference and gaze behaviour.


2021
Author(s): Malgorzata Kasprzyk, Margaret Jackson, Bert Timmermans

We investigated whether the reward previously associated with initiated joint attention (the experience of having one's gaze followed by someone else; Pfeiffer et al., 2014; Schilbach et al., 2010) can influence gaze behaviour and, like monetary rewards (Blaukopf & DiGirolamo, 2005; Manohar et al., 2017; Milstein & Dorris, 2007), elicit learning effects. To this end, we adapted Milstein and Dorris's (2007) gaze-contingent paradigm so that it required participants to look at an anthropomorphic avatar and then make a saccade towards a left or right peripheral target. If participants were fast enough, they experienced a social reward: the avatar looked at the same target as they did, thus engaging with them in joint attention. One side had a higher reward probability than the other (80% vs. 20%; on the remaining fast trials the avatar simply kept staring ahead). We expected that if participants learned the reward contingency, and if they found the experience of having their gaze followed rewarding, their latency and success rate would improve for saccades to the highly rewarded targets. Although our current study did not demonstrate that such social reward has a long-lasting effect on gaze behaviour, we found that latencies became shorter over time and that latencies were longer on congruent trials (target location identical to the previous trial) than on non-congruent trials (target location different from the previous trial), which could reflect inhibition of return.


2021
Author(s): Baiwei Liu, Anna C Nobre, Freek van Ede

Covert spatial attention is associated with spatially specific modulation of neural activity as well as with directional biases in fixational eye-movements known as microsaccades. Recently, this link has been suggested to be obligatory, such that modulation of neural activity by covert spatial attention occurs only when paired with microsaccades toward the attended location. Here we revisited this link between microsaccades and neural modulation by covert spatial attention in humans. We investigated spatial modulation of 8-12 Hz EEG alpha activity and microsaccades in a context with no incentive for overt gaze behaviour: when attention is directed internally within the spatial layout of visual working memory. In line with a common attentional origin, we show that spatial modulations of alpha activity and microsaccades co-vary: alpha lateralisation is stronger in trials with microsaccades toward compared to away from the memorised location of the to-be-attended item and occurs earlier in trials with earlier microsaccades toward this item. Critically, however, trials without attention-driven microsaccades nevertheless showed clear spatial modulation of alpha activity - comparable to the neural modulation observed in trials with attention-driven microsaccades. Thus, directional biases in microsaccades are correlated with neural signatures of covert spatial attention, but they are not a prerequisite for neural modulation by covert spatial attention to be manifest.
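Alpha lateralisation of the kind analysed here is commonly quantified with a normalised contra-minus-ipsi power index. A minimal sketch with made-up power values (the index definition is a standard convention in the EEG attention literature, not necessarily the exact measure used in this study):

```python
def alpha_lateralisation_index(power_contra, power_ipsi):
    """Normalised lateralisation of alpha power relative to the attended side.

    Attention-related alpha suppression is typically stronger over the
    hemisphere contralateral to the attended location, so the index is
    negative when attention modulates alpha activity and 0 when there is
    no spatial modulation.
    """
    return (power_contra - power_ipsi) / (power_contra + power_ipsi)

# Hypothetical 8-12 Hz power (arbitrary units) at posterior electrodes:
print(alpha_lateralisation_index(0.8, 1.2))  # -0.2: contralateral suppression
print(alpha_lateralisation_index(1.0, 1.0))  #  0.0: no lateralisation
```

Computing this index separately for trials with and without attention-driven microsaccades is one way to test whether the neural modulation depends on the oculomotor bias.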


2021
Author(s): Francesco Poli, Tommaso Ghilardi, Rogier B. Mars, Max Hinne, Sabine Hunnius

Infants learn to navigate the complexity of the physical and social world at an outstanding pace, but how they accomplish this learning is still unknown. Recent advances in human and artificial intelligence research propose that a key feature to achieve quick and efficient learning is meta-learning, the ability to make use of prior experiences to optimize how future information is acquired. Here we show that 8-month-old infants successfully engage in meta-learning within very short timespans. We developed a Bayesian model that captures how infants attribute informativity to incoming events, and how this process is optimized by the meta-parameters of their hierarchical models over the task structure. We fitted the model using infants’ gaze behaviour during a learning task. Our results reveal that infants do not simply accumulate experiences, but actively use them to generate new inductive biases that allow learning to proceed faster in the future.
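The idea of attributing informativity to incoming events can be made concrete with Bayesian surprise: the KL divergence between the learner's beliefs before and after an observation. A minimal sketch under illustrative assumptions (the hypotheses, prior and likelihoods below are made up for exposition; the study's actual hierarchical model is far richer than this):

```python
import math

def bayesian_surprise(prior, likelihood):
    """KL divergence (bits) from prior to posterior after observing an event.

    prior: dict mapping hypotheses to probabilities (sums to 1).
    likelihood: dict mapping hypotheses to P(event | hypothesis).
    """
    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
    return sum(posterior[h] * math.log2(posterior[h] / prior[h])
               for h in prior if posterior[h] > 0)

# Hypothetical question for the infant: does the cue predict the target side?
prior = {"predictive": 0.5, "random": 0.5}
likelihood = {"predictive": 0.9, "random": 0.25}  # P(target at cued side | h)

print(round(bayesian_surprise(prior, likelihood), 3))
```

An event that shifts beliefs a lot yields high surprise (and, on this account, attracts infants' gaze); an event equally likely under all hypotheses yields zero surprise.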


Author(s): Hedda Martina Šola, Fayyaz Hussain Qureshi, Sarwar Khawaja

In recent years, the newly emerging discipline of neuromarketing, which applies brain research (emotions and behaviour) in an organisational context, has grown in prominence in academic and practice literature. With the increasing growth of online teaching, COVID-19 left higher education institutions no option but to go online. As a result, students who attend an online course are more prone to losing focus and attention, resulting in poor academic performance. The primary purpose of this study was therefore to observe learners' behaviour while they use an online learning platform. This study applies neuromarketing to enhance students' learning performance and motivation in an online classroom. Using a web camera, we applied facial coding and eye-tracking techniques to study students' attention, motivation, and interest in an online classroom. In collaboration with Oxford Business College's marketing team, the Institute for Neuromarketing distributed video links to 297 students over the course of five days via email, a student representative from Oxford Business College, a WhatsApp group, and a newsletter developed explicitly for that purpose. To ensure the research was both realistic and feasible, the instructors in the videos were different, and students were randomly allocated to either a video lasting 90 seconds (n=142) or one lasting 10 minutes (n=155). Tobii Sticky, a self-service online platform, was used for the facial coding and eye-tracking measurements. During the 90-second online lecture, participants' gaze behaviour was tracked over time to gather data on their attention distribution, and their emotions were evaluated using facial coding. The 10-minute video, in contrast, was used to examine emotional involvement. The findings show that students lose their listening focus when no supporting visual material or virtual board is used, even during a brief presentation. Furthermore, when they are exposed to a single piece of shareable content for longer than 5.24 minutes, their motivation and mood decline; however, when new shareable material or a class activity is introduced, their motivation and mood rise. JEL: I20; I21


2021
Author(s): Scott A. Stone, Quinn A Boser, T Riley Dawson, Albert H Vette, Jacqueline S Hebert, ...

Assessing gaze behaviour during real-world tasks is difficult; dynamic bodies moving through dynamic worlds make finding gaze fixations challenging. Current approaches involve laborious coding of pupil positions overlaid on video. One solution is to combine eye tracking with motion tracking to generate 3D gaze vectors; when combined with tracked or known object locations, fixation detection can be automated. Here we use combined eye and motion tracking and explore how linear regression models can generate accurate 3D gaze vectors. We compared the spatial accuracy of models derived from four short calibration routines across three data types: the performance of the calibration routines was assessed using calibration data, a validation task that demands short fixations on task-relevant locations, and an object interaction task we used to bridge the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and monocular (left or right eye) or binocular data. Our results suggest that all calibration routines perform similarly, with the best performance (i.e., sub-centimetre errors) coming from the task (i.e., the most "natural") trials, when the participant is looking at an object in front of them. Further, we found that spherical coordinate systems generate more accurate gaze vectors, with no difference in accuracy between monocular and binocular data. Overall, we recommend recording one-minute calibration datasets, using a binocular eye-tracking headset (for redundancy), using a spherical coordinate system when depth is not considered, and ensuring data quality (i.e., tracker positioning) is high when recording datasets.
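The coordinate-system comparison can be illustrated with the spherical-to-Cartesian step: a regression model predicts azimuth and elevation angles from pupil position, and the angles are then converted to a 3D unit gaze vector. A sketch assuming one common axis convention (x = right, y = up, z = forward), which is not necessarily the convention the authors used:

```python
import math

def spherical_to_gaze_vector(azimuth_deg, elevation_deg):
    """Convert azimuth/elevation (degrees) to a 3D unit gaze vector.

    Assumed convention: x = right, y = up, z = forward;
    azimuth rotates the vector left/right, elevation up/down.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(el) * math.sin(az)
    y = math.sin(el)
    z = math.cos(el) * math.cos(az)
    return (x, y, z)

# Straight ahead: (0, 0, 1)
print(spherical_to_gaze_vector(0.0, 0.0))
# 90 degrees to the right: approximately (1, 0, 0)
print(spherical_to_gaze_vector(90.0, 0.0))
```

One appeal of the spherical parameterisation is that angular error is independent of viewing distance, whereas Cartesian regression targets conflate direction and depth.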


2021, Vol 108 (Supplement_7)
Author(s): Oliver Salazar, Simon Erridge, Jasmine Winter Beatty, Ara Darzi, Sanjay Purkayastha, ...

Aims: Technical skill is associated with improved postoperative outcomes. Adoption of a formalised high-stakes assessment of surgical skill is technically challenging and limited by the financial and human resources available. We aimed to assess the feasibility of adopting gaze behaviour analysis as an assessment of surgical skill within live open inguinal herniorrhaphy. Methods: Surgeons' gaze was measured with Tobii Pro Glasses 2 eye-tracking glasses (Tobii AB). All grades of surgeons were included. Primary outcomes were dwell time (%) and fixation frequency (count/s) on areas of interest, as markers of cognition, correlated with the mean Objective Structured Assessment of Technical Skill score. Secondary outcomes assessed effort and concentration levels through maximum pupil diameter (mm) and rate of pupil change (mm/s), correlated with perceived workload (SURG-TLX). Three operative segments underwent analysis: mesh preparation, fixation and muscle closure. Spearman's and Pearson's correlations were performed with significance set at p < 0.05. Results: 5 cases were analysed, totalling 270 minutes of video footage. All participants were senior surgical trainees and right-hand dominant. The median number of hernia operations performed was 160 (range: 100-500). The median ASA score of each patient participant was 2 (range: 1-2). The median operation length was 45 mins (range: 40-90 mins). There were no statistically significant correlations for the primary outcomes in this pilot data (p > 0.05). Conclusions: This pilot study demonstrated the feasibility of recording gaze behaviours for comparison against formal skills assessment, to determine the role of eye tracking in live high-stakes technical skills assessment. A full study will now commence based on a formal power calculation.
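Dwell time and fixation frequency as used here are simple per-AOI summaries of the fixation stream. A minimal sketch with hypothetical surgical AOI labels and durations (illustrative values, not data from the study):

```python
def dwell_time_pct(fixations, aoi, segment_duration_s):
    """Dwell time: percentage of the operative segment spent fixating the AOI.

    fixations: list of (aoi_name, duration_in_seconds) tuples.
    """
    time_in_aoi = sum(dur for name, dur in fixations if name == aoi)
    return 100.0 * time_in_aoi / segment_duration_s

def fixation_frequency(fixations, aoi, segment_duration_s):
    """Fixation frequency: number of fixations on the AOI per second."""
    return sum(1 for name, _ in fixations if name == aoi) / segment_duration_s

# Hypothetical fixation stream over a 10-second mesh-preparation segment:
fixations = [("mesh", 2.0), ("hands", 1.5), ("mesh", 1.0), ("suture", 0.5)]
print(dwell_time_pct(fixations, "mesh", 10.0))      # 30.0 (%)
print(fixation_frequency(fixations, "mesh", 10.0))  # 0.2 (count/s)
```

These per-AOI measures are then correlated against the skill and workload scores, as in the primary and secondary outcomes above.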

