How do People look at Pictures of Pigs? Analyzing Fixation Duration Depending on Pig Expression and Barn Type using Eye-Tracking

2020 ◽  
Vol 69 (4) ◽  
pp. 300-310
Author(s):  
Sarah Gauly ◽  
Gesa Busch ◽  
Achim Spiller ◽  
Ulrich Enneking ◽  
Susanne Kunde ◽  
...  

Using eye-tracking, this study investigates the fixation durations of students viewing pictures of pigs that vary systematically in the pig's facial expression and in the barn setting. The aim is to analyze which picture elements are viewed and for how long, and how fixation times change with the pig's expression and the barn type. The results show clear effects of picture composition: pig expression and pen type affect fixation durations on different areas of interest, with the influence of the pig being considerably larger. Face regions are viewed longer in pictures of the “happy” pig, while the floor/bedding and the eyes are viewed longer in pictures of the “unhappy” pig, which may indicate information search. The power of facial expressions, here shown also for depictions of farm animals, is a new finding of this paper and may be important when selecting agricultural pictures for different purposes.

2018 ◽  
Vol 36 (6) ◽  
pp. 1027-1042 ◽  
Author(s):  
Quan Lu ◽  
Jiyue Zhang ◽  
Jing Chen ◽  
Ji Li

Purpose: This paper aims to examine the effect of domain knowledge on eye-tracking measures and to predict readers' domain knowledge from these measures in a navigational table of contents (N-TOC) system.
Design/methodology/approach: A controlled experiment with three reading tasks was conducted in an N-TOC system with 24 postgraduates of Wuhan University. Fixation duration, fixation count and inter-scanning transitions were collected and calculated. Participants' domain knowledge was measured by pre-experiment questionnaires. Logistic regression analysis was used to build the prediction model, and the model's performance was evaluated against a baseline model.
Findings: Novices spent significantly more time fixating on the text area than experts, presumably because the information in the text area was harder for them to understand. Total fixation duration on the text area (TFD_T) was a significant negative predictor of domain knowledge. The logistic regression model using eye-tracking measures outperformed the baseline model, with accuracy, precision and F (β = 1) scores of 0.71, 0.86 and 0.79, respectively.
Originality/value: Little research has investigated the effect of domain knowledge on eye-tracking measures during reading or the prediction of domain knowledge from such measures; most existing studies focus on multimedia learning, and the few prediction studies are in the field of information search. This paper contributes to the literature on the effect of domain knowledge on eye-tracking measures during N-TOC reading and on predicting domain knowledge from them.
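As an illustration of the prediction approach in the Findings, here is a minimal sketch (not the authors' code; the features, labels and train/test split are placeholder assumptions) that fits a logistic regression on eye-tracking measures and compares it with a majority-class baseline on accuracy, precision and F1:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-participant measures: total fixation duration on the text
# area (TFD_T), fixation count, and number of inter-scanning transitions.
X = rng.normal(size=(24, 3))
y = np.repeat([0, 1], 12)  # placeholder labels: 1 = expert, 0 = novice

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = LogisticRegression().fit(X_train, y_train)

for name, clf in [("baseline", baseline), ("logistic regression", model)]:
    pred = clf.predict(X_test)
    print(name,
          "accuracy=%.2f" % accuracy_score(y_test, pred),
          "precision=%.2f" % precision_score(y_test, pred, zero_division=0),
          "F1=%.2f" % f1_score(y_test, pred, zero_division=0))
```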


2018 ◽  
Vol 122 (4) ◽  
pp. 1432-1448 ◽  
Author(s):  
Charlott Maria Bodenschatz ◽  
Anette Kersting ◽  
Thomas Suslow

Orientation of gaze toward specific regions of the face, such as the eyes or the mouth, helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while they completed an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants then had to evaluate subsequently presented neutral faces. An awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed, but briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression and more rapidly to the mouth region after a happy facial expression. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and dwell time on the mouth region was longest after a happy facial expression. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.


2013 ◽  
Vol 6 (4) ◽  
Author(s):  
Banu Cangöz ◽  
Arif Altun ◽  
Petek Aşkar ◽  
Zeynel Baran ◽  
Sacide Güzin Mazman

The main objective of the study is to investigate the effects of model age, observer gender, and lateralization on visual screening patterns while looking at emotional facial expressions. Data were collected using eye-tracking methodology. The areas of interest were defined to include the eyes, nose and mouth. The selected eye metrics were first fixation duration, fixation duration and fixation count. These eye-tracking metrics were recorded for different emotional expressions (sad, happy, neutral) and conditions (age of model, part of face, and lateralization). The results revealed that participants looked at older faces for a shorter time and fixated on them less than on younger faces. The study also showed that when participants were asked to passively look at the facial expressions, the eyes were important areas for determining sadness and happiness, whereas the eyes and nose were important for determining a neutral expression. The longest-fixated facial area was the eyes for both young and old models. Lastly, the hemispheric lateralization hypothesis regarding emotional face processing was supported.
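For readers unfamiliar with these metrics, the sketch below shows one common way to compute first fixation duration, total fixation duration and fixation count per area of interest from a fixation log; the data layout and values are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, already assigned to an AOI.
fixations = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2],
    "aoi":         ["eyes", "mouth", "eyes", "nose", "eyes", "mouth"],
    "onset_ms":    [0, 250, 480, 760, 0, 340],     # fixation start time
    "duration_ms": [220, 180, 240, 150, 310, 200], # fixation duration
})

metrics = (
    fixations.sort_values("onset_ms")
    .groupby(["participant", "aoi"])
    .agg(first_fixation_duration=("duration_ms", "first"),
         total_fixation_duration=("duration_ms", "sum"),
         fixation_count=("duration_ms", "count"))
)
print(metrics)
```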


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0252398
Author(s):  
Janosch A. Priebe ◽  
Claudia Horn-Hofmann ◽  
Daniel Wolf ◽  
Stefanie Wolff ◽  
Michael Heesen ◽  
...  

Altered attentional processing of pain-associated stimuli, which might take the form of either avoidance or enhanced vigilance, is thought to be implicated in the development and maintenance of chronic pain. In contrast to reaction-time tasks like the dot probe, eye tracking makes it possible to follow the time course of visual attention and thus to differentiate early and late attentional processes. Our study investigated visual attention to emotional faces in patients with chronic musculoskeletal pain (N = 20) and matched pain-free controls (N = 20). Emotional faces (pain, angry, happy) were each presented in pairs with a neutral face for 2000 ms. Three parameters were determined: first fixation probabilities, fixation durations (overall and divided into four 500 ms intervals), and a fixation bias score defined as the relative fixation duration on emotional faces compared to neutral faces. There were no group differences in any of the parameters. First fixation probabilities were lower for pain faces than for angry faces. Overall, we found longer fixation durations on emotional compared to neutral faces (an ‘emotionality bias’), in accord with previous research. However, significantly longer fixation durations compared to the neutral face were detected only for happy and angry faces, not for pain faces. In addition, fixation durations as well as bias scores yielded evidence for vigilant-avoidant processing of pain faces in both groups. These results suggest that attentional bias towards pain-associated stimuli might not generally differentiate between healthy individuals and chronic pain patients. Exaggerated attentional bias in patients might occur only under specific circumstances, e.g., towards stimulus material relating to the patients' specific pain or under high emotional distress.
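A minimal sketch of the three gaze parameters described above; the per-millisecond sample labels and the bias-score definition (share of looking time spent on the emotional face) are assumptions for illustration, not necessarily the authors' implementation.

```python
import numpy as np

def trial_parameters(labels, interval_ms=500, n_intervals=4):
    """labels: one entry per 1 ms gaze sample of a 2000 ms trial, each either
    'emotional', 'neutral', or None when neither face was fixated."""
    labels = np.asarray(labels, dtype=object)
    emo = labels == "emotional"
    neu = labels == "neutral"

    # Which face was fixated first (feeds the first-fixation probability).
    first = next((lab for lab in labels if lab is not None), None)

    # Fixation duration (ms) on each face within successive 500 ms intervals.
    per_interval = [
        (int(emo[i * interval_ms:(i + 1) * interval_ms].sum()),
         int(neu[i * interval_ms:(i + 1) * interval_ms].sum()))
        for i in range(n_intervals)
    ]

    # Bias score: relative fixation duration on the emotional face.
    looked = emo.sum() + neu.sum()
    bias = emo.sum() / looked if looked else float("nan")
    return first, per_interval, bias

# Tiny usage example on a 2000 ms placeholder trial.
labels = ["emotional"] * 600 + [None] * 200 + ["neutral"] * 400 + ["emotional"] * 800
print(trial_parameters(labels))
```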


2021 ◽  
pp. 39-60
Author(s):  
Sylwester Białowąs ◽  
Adrianna Szyszka

Eye movements provide information on subconscious reactions to stimuli and reflect attention and focus. With regard to visual activity, four types of eye movements can be distinguished: fixations, saccades, smooth pursuits and blinks. For fixations, the number and distribution, total fixation time and average fixation duration are among the most common measures. This research method also allows the determination of scanpaths, which track gaze across the image, as well as heat and focus maps, which visually represent points of gaze focus. A key concept in eye-tracking that allows for more in-depth analysis is the area of interest (AOI): measures can then be taken for selected parts of the visual stimulus. The area of gaze outside the scope of analysis is called white space. The software allows comparisons of static and non-static stimuli and provides a choice of template, dataset, metrics and data format. In conducting eye-tracking research, proper calibration is crucial: the participant's gaze must be aligned with the internal eye model of the eye-tracking software. In addition, attention should be paid to aspects such as time and spatial control. The exposure time should be identical for each participant, and the testing space should be well lit and at a comfortable temperature.
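To make the AOI and white-space concepts concrete, here is a small illustration (hypothetical coordinates and fixations, not tied to any particular software) that assigns fixations to rectangular AOIs and reports fixation count, total fixation time and average fixation duration, with everything outside the AOIs counted as white space:

```python
from dataclasses import dataclass

@dataclass
class AOI:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

aois = [AOI("headline", 0, 0, 800, 120), AOI("product image", 100, 150, 500, 600)]

# Detected fixations as (x, y, duration_ms) -- placeholder values.
fixations = [(50, 60, 230), (300, 400, 410), (700, 700, 180)]

durations = {a.name: [] for a in aois}
durations["white space"] = []
for x, y, dur in fixations:
    hit = next((a.name for a in aois if a.contains(x, y)), "white space")
    durations[hit].append(dur)

for name, durs in durations.items():
    n = len(durs)
    mean = sum(durs) / n if n else 0
    print(f"{name}: {n} fixations, total {sum(durs)} ms, mean {mean:.0f} ms")
```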


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Shaoqi Jiang ◽  
Weijiong Chen ◽  
Yutao Kang

To maintain situation awareness (SA) when exposed to emergencies during pilotage, a pilot needs to selectively allocate attentional resources to perceive critical status information about the ship and its environment. Although it is important to continuously monitor a pilot's SA, its relationship with attention is still not fully understood in ship pilotage. This study performed bridge simulation experiments comprising vessel departure, navigation in the fairway, encounter, poor visibility, and anchoring scenes with 13 pilots (experience: mean = 11.3, standard deviation = 1.4). Participants were divided into two SA groups based on their Situation Awareness Rating Technique (SART-2) scores (mean = 20.13, standard deviation = 5.83) obtained after the experiments. The visual patterns of the two SA groups were examined using heat maps and scan paths based on the pilots' fixation and saccade data. The preliminary visual analyses of the heat maps and scan paths indicate that the pilots' attentional distribution is modulated by SA level: the most attended areas of interest (AOIs) for pilots in the high and low SA groups are the outside window view (AOI-2) and the electronic charts (AOI-1), respectively. Subsequently, permutation simulations were used to identify statistical differences in the pilots' eye-tracking metrics between the SA groups. The statistical analyses show that the fixation and saccade metrics are affected by SA level in different AOIs across the five scenes, which confirms the findings of previous studies. In encounter scenes, the pilots' SA level is related to fixation count (p = 0.034 < 0.05 in AOI-1 and p = 0.032 < 0.05 in AOI-2), fixation duration (p = 0.043 < 0.05 in AOI-1 and p = 0.014 < 0.05 in AOI-2), and saccade count (p = 0.086 < 0.1 in AOI-1 and p = 0.054 < 0.1 in AOI-2). In poor-visibility scenes, it is related to fixation count (p = 0.024 < 0.05 in AOI-1 and p = 0.034 < 0.05 in AOI-2), fixation duration (p = 0.036 < 0.05 in AOI-1 and p = 0.047 < 0.05 in AOI-2), and saccade duration (p = 0.05 ≤ 0.05 in AOI-1 and p = 0.042 < 0.05 in AOI-2). In the remaining scenes, SA could not be inferred from eye movements alone. This study lays a foundation for recognizing the cognitive mechanisms underlying pilots' SA via eye-tracking technology and provides a reference for establishing cognitive competency standards in preliminary pilot screening.
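For reference, a two-sample permutation test on a group mean difference, in the spirit of the permutation simulations described above, could look like the sketch below (group sizes and values are placeholders, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pilot metric, e.g. share of fixation duration in one AOI.
high_sa = np.array([0.42, 0.51, 0.39, 0.47, 0.55, 0.48])
low_sa = np.array([0.31, 0.36, 0.40, 0.29, 0.33, 0.35, 0.30])

observed = high_sa.mean() - low_sa.mean()
pooled = np.concatenate([high_sa, low_sa])

# Re-shuffle group labels many times and count differences at least as extreme.
n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:len(high_sa)].mean() - perm[len(high_sa):].mean()
    if abs(diff) >= abs(observed):
        count += 1

print(f"observed difference = {observed:.3f}, permutation p = {count / n_perm:.3f}")
```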


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0247808
Author(s):  
Lexis H. Ly ◽  
Daniel M. Weary

People often express concern for the welfare of farm animals, but research on this topic has relied upon self-report. Facial expressions provide a quantifiable measure of emotional response that may be less susceptible to social desirability bias and other issues associated with self-report. Viewing other humans in pain elicits facial expressions indicative of empathy. Here we provide the first evidence that this measure can also be used to assess human empathetic responses towards farm animals, showing that facial expressions respond reliably when participants view videos of farm animals undergoing painful procedures. Participants (n = 30) were asked to watch publicly sourced video clips of cows and pigs undergoing common management procedures (e.g. disbudding, castration, tail docking) and control videos (e.g. being lightly restrained, standing). Participants provided their subjective rating of the intensity of 5 negative emotions (pain, sadness, anger, fear, disgust) on an 11-point Likert scale. Videos of the participants (watching the animals) were scored for intensity of unpleasantness of the participants’ facial expression (also on an 11-point Likert scale) by a trained observer who was blind to treatment. Participants showed more intense facial expressions while viewing painful procedures versus control procedures (mean ± SE Likert; 2.4 ± 0.08 versus 0.6 ± 0.17). Participants who reported more intense negative responses also showed stronger facial expressions (slope ± SE = 0.4 ± 0.04). Both the self-reported and facial measures varied with species and procedure witnessed. These results indicate that facial expressions can be used to assess human-animal empathy.
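The reported slope can be obtained from a regression of observer-scored facial expression intensity on self-reported emotion intensity; the sketch below uses ordinary least squares on placeholder data and is a simplification rather than the authors' full analysis, which may have accounted for repeated measures per participant:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)

# Placeholder data: one observation per video viewing.
self_report = rng.integers(0, 11, size=60).astype(float)        # 11-point Likert rating
facial_score = 0.4 * self_report + rng.normal(0, 1.0, size=60)  # observer-scored expression

fit = linregress(self_report, facial_score)
print(f"slope = {fit.slope:.2f} ± {fit.stderr:.2f}, r = {fit.rvalue:.2f}")
```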


2021 ◽  
Vol 3 ◽  
Author(s):  
Mildred Loiseau-Taupin ◽  
Alexis Ruffault ◽  
Jean Slawinski ◽  
Lucile Delabarre ◽  
Dimitri Bayle

In badminton, the ability to quickly gather relevant visual information is one of the most important determinants of performance. However, gaze behavior has never been investigated in a real-game setting (with fatigue), nor related to performance. The aim of this study was to evaluate the effect of fatigue on gaze behavior during a badminton game setting and to determine the relationship between fatigue, performance and gaze behavior. Nineteen novice badminton players equipped with eye-tracking glasses played two badminton sets: one before and one after a fatiguing task. The duration and number of fixations in each exchange were evaluated for nine areas of interest. Performance, in terms of points won or lost and successful strokes, was not affected by fatigue; however, fatigue induced more fixations per exchange on two areas of interest (the shuttlecock and the empty area after the opponent's stroke). Furthermore, two distinct gaze behaviors were found for successful and unsuccessful performance: points won were associated with fixations on the boundary lines and short fixation durations on the empty area before the participant's stroke, while successful strokes were related to long fixation durations overall, short fixation durations on the empty area, and a large number of fixations on the shuttlecock, racket, opponent's upper body and anticipation area. This is the first study to use a mobile eye-tracking system to capture gaze behavior during a real badminton game setting: fatigue induced changes in gaze behavior, and successful and unsuccessful performance were associated with two distinct gaze behaviors.


2018 ◽  
Vol 5 (2) ◽  
pp. 73-82 ◽  
Author(s):  
Ubaldo Cuesta ◽  
Luz Martínez-Martínez ◽  
Jose Ignacio Niño

Music plays an important role in advertising. It exerts a strong influence on the cognitive processes of attention and on the emotional processes of evaluation and, subsequently, on the attributes ascribed to the product. The goal of this work was to investigate these mechanisms using eye-tracking, facial expression analysis and galvanic skin response (GSR). Nineteen university women were exposed to the same TV ad for a perfume in our laboratory (https://neurolabcenter.com/). Nine of them were randomly assigned to the music version and ten to the silent version. During viewing, the visual areas of interest, fixation times, facial emotions and GSR were recorded. Before and after viewing, the subjects completed a questionnaire. Results: 1) The commercial with music produced a higher GSR level than the version without music; GSR evaluates the degree of arousal (emotion). 2) Facial expression analysis indicated that the variables "enjoy" and "engagement" were significantly higher in the version with music, and positive valence (liking) showed higher values in the musical version. 3) However, the variable "attention", measured through facial expression, did not differ between the groups, and there were also no differences in the heat maps of the areas of interest. 4) The evaluation of product attributes, measured with the pre-post questionnaire, showed greater increases after exposure to the musical version, but only for specific attributes such as "power" and not for others such as "status". These results are interpreted within the framework of recent theories of advertising and music (Oakes, 2007).
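As a sketch of how the group comparison in result 1 might be run (the abstract does not state which test was used, and the values below are placeholders, not the study's data), an independent-samples t-test on mean GSR per participant could look like this:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# Hypothetical mean GSR (arousal) per participant in each condition.
gsr_music = rng.normal(0.8, 0.2, size=9)    # music version, n = 9
gsr_silent = rng.normal(0.6, 0.2, size=10)  # silent version, n = 10

t, p = ttest_ind(gsr_music, gsr_silent)
print(f"t = {t:.2f}, p = {p:.3f}")
```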


2020 ◽  
Vol 13 (6) ◽  
Author(s):  
Shivsevak Negi ◽  
Ritayan Mitra

Learning is a complex phenomenon, and education researchers are increasingly focusing on the processes that go into it. Eye tracking has become an important tool in such research. In this paper, we focus on one of the most commonly used eye-tracking metrics, namely fixation duration. Fixation duration has been used to study cognition and attention. However, fixation duration distributions are characteristically non-normal and heavily skewed to the right. Therefore, using a single average value, such as the mean fixation duration, to predict cognition and/or attention can be problematic. This is especially true in studies of complex constructs, such as learning, which are governed by both cognitive and affective processes. We collected eye-tracking data from 51 students watching a 12-minute educational video with and without subtitles. The learning gain after watching the video was calculated from pre- and post-test scores. Several multiple linear regression models revealed that a) fixation duration can explain a substantial fraction of the variation in the pre-post data, which indicates its usefulness in the study of learning processes; b) the arithmetic mean of fixation durations, the most commonly reported eye-tracking metric, may not be the optimal choice; and c) a phenomenological model of fixation durations, in which the numbers of fixations in different temporal ranges are used as inputs, seemed to perform best. The results and their implications for research on learning processes are discussed.
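One reading of the phenomenological model in c) is a regression whose inputs are counts of fixations falling in different duration ranges; the sketch below illustrates that idea with assumed bin edges and simulated data and is not the authors' model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Simulated data: per-student fixation durations (ms) and pre-post learning gains.
durations = [rng.lognormal(5.5, 0.5, size=rng.integers(300, 800)) for _ in range(51)]
gain = rng.normal(0.3, 0.15, size=51)

bins = [0, 150, 300, 500, 1000, np.inf]  # assumed duration ranges (ms)
X = np.array([np.histogram(d, bins=bins)[0] for d in durations])

model = LinearRegression().fit(X, gain)
print("R^2 =", round(model.score(X, gain), 3))
print("coefficients per duration bin:", np.round(model.coef_, 4))
```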

