Efficient visual search for facial emotions in patients with major depression

2021, Vol 21 (1)
Author(s):  
Charlott Maria Bodenschatz ◽  
Felix Czepluch ◽  
Anette Kersting ◽  
Thomas Suslow

Abstract

Background: Major depressive disorder has been associated with specific attentional biases in the processing of emotional facial expressions: heightened attention to negative and decreased attention to positive faces. However, previous reaction-time-based research using visual search paradigms generally failed to find evidence for increased spatial attention toward negative facial expressions or reduced spatial attention toward positive facial expressions in depressed individuals. Eye-tracking analyses allow a more detailed examination of visual search processes over time during the perception of multiple stimuli and can provide more specific insights into the attentional processing of multiple emotional stimuli.

Methods: Gaze behavior of 38 clinically depressed individuals and 38 gender-matched healthy controls was compared in a face-in-the-crowd task. Pictures of happy, angry, and neutral facial expressions were utilized as target and distractor stimuli. Four distinct measures of eye gaze served as dependent variables: (a) latency to the target face, (b) number of distractor faces fixated prior to fixating the target, (c) mean fixation time per distractor face before fixating the target, and (d) mean fixation time on the target.

Results: Depressed and healthy individuals did not differ in their manual response times. Our eye-tracking data revealed no differences between study groups in attention guidance to emotional target faces or in the duration of attention allocation to emotional distractor and target faces. However, depressed individuals fixated fewer distractor faces before fixating the target than controls, regardless of the valence of the expressions.

Conclusions: Depressed individuals seem to process angry and happy expressions in crowds of faces largely in the same way as healthy individuals. Our data indicate no biased attention guidance to emotional targets and no biased processing of angry and happy distractors and targets in depression during visual search. Under conditions of clear task demand, depressed individuals appear able to allocate and guide their attention in crowds of angry and happy faces as efficiently as healthy individuals.
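The four dependent variables above can be made concrete with a small sketch. This is an illustrative reconstruction, not the authors' actual analysis pipeline; the fixation representation (`onset_ms`, `duration_ms`, `face_id` tuples) and the function name are assumptions.

```python
# Illustrative sketch (not the study's code): deriving the four gaze
# measures of a face-in-the-crowd trial from a fixation sequence.
# Each fixation is (onset_ms, duration_ms, face_id); target_id marks
# the target face, all other ids are distractor faces.

def gaze_measures(fixations, target_id):
    """Return the four dependent variables for one trial, or None
    if the target face was never fixated."""
    first_target_idx = next(
        (i for i, (_, _, face) in enumerate(fixations) if face == target_id),
        None,
    )
    if first_target_idx is None:
        return None

    pre = fixations[:first_target_idx]                  # before first target hit
    distractors_before = [f for f in pre if f[2] != target_id]
    target_fix = [f for f in fixations if f[2] == target_id]

    return {
        # (a) latency to the target face (onset of its first fixation)
        "latency_to_target_ms": fixations[first_target_idx][0],
        # (b) number of distinct distractor faces fixated before the target
        "n_distractors_before": len({f[2] for f in distractors_before}),
        # (c) mean fixation time per distractor fixation before the target
        "mean_distractor_fix_ms": (
            sum(f[1] for f in distractors_before) / len(distractors_before)
            if distractors_before else 0.0
        ),
        # (d) mean fixation time on the target
        "mean_target_fix_ms": sum(f[1] for f in target_fix) / len(target_fix),
    }
```

For example, a trial with two distractor fixations followed by one target fixation yields the target latency, the distractor count of 2, and the two mean fixation durations directly from the tuples.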

2021, Vol 12
Author(s):  
Thomas Suslow ◽  
Vivien Günther ◽  
Tilman Hensch ◽  
Anette Kersting ◽  
Charlott Maria Bodenschatz

Background: The concept of alexithymia is characterized by difficulties identifying and describing one's emotions. Alexithymic individuals are impaired in the recognition of others' emotional facial expressions. Alexithymia is quite common in patients suffering from major depressive disorder. The face-in-the-crowd task is a visual search paradigm that assesses the processing of multiple facial emotions. In the present eye-tracking study, the relationship between alexithymia and visual processing of facial emotions was examined in clinical depression.

Materials and Methods: Gaze behavior and manual response times of 20 alexithymic and 19 non-alexithymic depressed patients were compared in a face-in-the-crowd task. Alexithymia was measured with the 20-item Toronto Alexithymia Scale. Angry, happy, and neutral facial expressions of different individuals were shown as target and distractor stimuli. Our analyses of gaze behavior focused on latency to the target face, number of distractor faces fixated before fixating the target, number of target fixations, and number of distractor faces fixated after fixating the target.

Results: Alexithymic patients exhibited generally slower decision latencies than non-alexithymic patients in the face-in-the-crowd task. Patient groups did not differ in latency to target, number of target fixations, or number of distractors fixated prior to target fixation. However, after having looked at the target, alexithymic patients fixated more distractors than non-alexithymic patients, regardless of expression condition.

Discussion: According to our results, alexithymia goes along with impairments in the visual processing of multiple facial emotions in clinical depression. Alexithymia appears to be associated with delayed manual reaction times and prolonged scanning after the first target fixation in depression, but it might have no impact on the early search phase. The observed deficits could indicate difficulties in target identification and/or decision-making when processing multiple emotional facial expressions. Impairments of alexithymic depressed patients in processing emotions in crowds of faces seem not to be limited to a specific affective valence. In group situations, alexithymic depressed patients might be slower than non-alexithymic depressed patients at processing interindividual differences in emotional expressions. This could represent a disadvantage in understanding non-verbal communication in groups.


2021, pp. 1-21
Author(s):  
Michael Vesker ◽  
Daniela Bahn ◽  
Christina Kauschke ◽  
Gudrun Schwarzer

Abstract

Social interactions often require the simultaneous processing of emotions from facial expressions and speech. However, the development of the gaze behavior used for emotion recognition, and the effects of speech perception on the visual encoding of facial expressions, are less well understood. We therefore conducted a word-primed face categorization experiment in which participants from multiple age groups (six-year-olds, 12-year-olds, and adults) categorized target facial expressions as positive or negative after priming with valence-congruent or -incongruent auditory emotion words, or with no words at all. We recorded our participants’ gaze behavior during this task using an eye tracker and analyzed the data with respect to fixation time toward the eyes and mouth regions of faces, as well as the time until participants made their first fixation within those regions (time to first fixation, TTFF). We found that the six-year-olds showed significantly higher accuracy in categorizing congruently primed faces compared to the other conditions. The six-year-olds also showed faster response times, shorter total fixation durations, and faster TTFF measures in all primed trials, regardless of congruency, as compared to unprimed trials. We also found that while adults looked first, and longer, at the eyes than at the mouth regions of target faces, children did not exhibit this gaze behavior. Our results thus indicate that young children are more sensitive than adults or older children to auditory emotion word primes during the perception of emotional faces, and that the distribution of gaze across the regions of the face changes significantly from childhood to adulthood.
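The two region-based measures used here, total fixation time and TTFF per face region, reduce to classifying fixations against areas of interest (AOIs). The sketch below is illustrative only; the AOI pixel rectangles and function names are hypothetical, not taken from the study.

```python
# Illustrative sketch (not the authors' pipeline): classify fixations into
# eye/mouth areas of interest (AOIs) and compute total fixation time and
# time to first fixation (TTFF) per AOI.
# AOI rectangles (x_min, y_min, x_max, y_max) are hypothetical pixel values.

AOIS = {
    "eyes":  (100, 80, 300, 140),
    "mouth": (150, 220, 250, 280),
}

def in_aoi(x, y, rect):
    """Point-in-rectangle test for one AOI."""
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def aoi_stats(fixations):
    """fixations: list of (onset_ms, duration_ms, x, y) for one trial.
    Returns {aoi: {"total_ms": ..., "ttff_ms": ...}}; TTFF stays None
    if the AOI was never fixated."""
    stats = {name: {"total_ms": 0, "ttff_ms": None} for name in AOIS}
    for onset, dur, x, y in fixations:
        for name, rect in AOIS.items():
            if in_aoi(x, y, rect):
                stats[name]["total_ms"] += dur
                if stats[name]["ttff_ms"] is None:
                    stats[name]["ttff_ms"] = onset   # first hit defines TTFF
    return stats
```

Fixations landing outside both rectangles are simply ignored, which matches the region-restricted nature of the two measures.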


Author(s):  
Priya Seshadri ◽  
Youyi Bi ◽  
Jaykishan Bhatia ◽  
Ross Simons ◽  
Jeffrey Hartley ◽  
...  

This study is the first stage of a research program aimed at understanding differences in how people process 2D and 3D automotive stimuli, using psychophysiological tools such as galvanic skin response (GSR), eye tracking, electroencephalography (EEG), and facial expression coding, along with respondent ratings. The current study uses just one measure, eye tracking, and one stimulus format, 2D realistic renderings of vehicles, to reveal where people expect to find information about brand and other industry-relevant topics, such as sportiness. The eye-gaze data showed differences in the percentage of fixation time that people spent on different views of cars while evaluating the “Brand” and the degree to which they looked “Sporty/Conservative”, “Calm/Exciting”, and “Basic/Luxurious”. The results of this work can give designers insights into where to invest their design efforts when considering brand and styling cues.


2020
Author(s):  
Christine Krebs ◽  
Michael Falkner ◽  
Joel Niklaus ◽  
Luca Persello ◽  
Stefan Klöppel ◽  
...  

BACKGROUND
Recent studies suggest that computerized puzzle games are enjoyable, easy to play, and engage attentional, visuospatial, and executive functions. They may help mediate impairments seen in cognitive decline, in addition to being an assessment tool. Eye tracking provides a quantitative and qualitative analysis of gaze, which is highly useful in understanding visual search behavior.

OBJECTIVE
The goal of the research was to test the feasibility of eye tracking during a puzzle game and to develop adjunct markers for cognitive performance using eye-tracking metrics.

METHODS
A desktop version of the Match-3 puzzle game with 15 difficulty levels was developed using Unity 3D (Unity Technologies). The goal of the Match-3 puzzle was to find configurations (target patterns) that could be turned into a row of 3 identical game objects (tiles) by swapping 2 adjacent tiles. Difficulty levels were created by manipulating the puzzle board size (all combinations of width and height from 4 to 8) and the number of unique tiles on the puzzle board (from 4 to 8). Each level consisted of 4 puzzle boards, each containing one target pattern to match. In this study, the desktop version was presented on a laptop computer setup with eye tracking. Healthy older subjects were recruited to play a full set of 15 puzzle levels. A paper-pencil–based assessment battery was administered prior to the Match-3 game. The gaze behavior of all participants was recorded during the game. Correlation analyses, corrected for age, were performed on the eye-tracking data to examine whether gaze behavior pertains to target and distractor patterns and changes with puzzle board size (set size). Additionally, correlations between cognitive performance and eye movement metrics were calculated.

RESULTS
A total of 13 healthy older subjects (mean age 70.67 [SD 4.75] years; range 63 to 80 years) participated in this study. In total, 3 training and 12 test levels were played by the participants. Eye tracking recorded 672 fixations in total: 525 fixations on distractor patterns and 99 fixations on target patterns. Significant correlations were found between executive functions (Trail Making Test B) and the number of fixations on distractor patterns (P=.01) and average fixations (P=.005).

CONCLUSIONS
Overall, this study shows that eye tracking in puzzle games can act as a supplemental source of data on cognitive performance. The relationship between a paper-pencil test of executive functions and fixations confirms that both are related to the same cognitive processes. Therefore, eye movement metrics might be used as adjunct markers for cognitive abilities such as executive functions. However, further research is needed to evaluate the potential of the various eye movement metrics, in combination with puzzle games, as visual search and attentional markers.
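The Match-3 rule described in the methods, finding positions where swapping 2 adjacent tiles creates a row of 3 identical tiles, can be sketched as a brute-force check. This is an illustrative reconstruction, not the study's Unity implementation; the board representation (list of lists of tile symbols) and function names are assumptions.

```python
# Illustrative sketch (not the study's Unity code): brute-force detection of
# Match-3 target patterns, i.e., adjacent swaps that create a row or column
# of 3 identical tiles.

def has_three_in_a_row(board):
    """True if any row or column contains 3 identical adjacent tiles."""
    h, w = len(board), len(board[0])
    for r in range(h):
        for c in range(w):
            if c + 2 < w and board[r][c] == board[r][c + 1] == board[r][c + 2]:
                return True
            if r + 2 < h and board[r][c] == board[r + 1][c] == board[r + 2][c]:
                return True
    return False

def find_valid_swaps(board):
    """All adjacent swaps ((r, c), (r2, c2)) that would produce a match.
    The board is restored after each trial swap."""
    h, w = len(board), len(board[0])
    swaps = []
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):        # right and down neighbors
                r2, c2 = r + dr, c + dc
                if r2 < h and c2 < w:
                    board[r][c], board[r2][c2] = board[r2][c2], board[r][c]
                    if has_three_in_a_row(board):
                        swaps.append(((r, c), (r2, c2)))
                    board[r][c], board[r2][c2] = board[r2][c2], board[r][c]
    return swaps
```

On the study's boards (width and height 4 to 8, with 4 to 8 unique tile types), this check runs over at most 2·8·8 candidate swaps per board, so exhaustive search is cheap; a production game would additionally ensure the starting board contains no pre-existing match.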


2021
Author(s):  
Leah R. Enders ◽  
Robert J. Smith ◽  
Stephen M. Gordon ◽  
Anthony J. Ries ◽  
Jonathan Touryan

Eye tracking has been an essential tool within the vision science community for many years. However, the majority of studies involving eye-tracking technology employ a relatively passive approach through the use of static imagery, prescribed motion, or video stimuli. This is in contrast to our everyday interaction with the natural world, where we navigate our environment while actively seeking and using task-relevant visual information. For this reason, vision researchers are beginning to use virtual environment platforms, which offer interactive, realistic visual environments while maintaining a substantial level of experimental control. Here, we recorded eye movement behavior while participants freely navigated through a complex virtual environment. Within this environment, participants completed a visual search task in which they were asked to find and count occurrences of specific targets among numerous distractor items. We assigned each participant to one of four target groups: Humvees, motorcycles, aircraft, or furniture. Our results show a significant relationship between gaze behavior and target objects across subject groups. Specifically, we see an increased number of fixations and increased dwell time on target objects relative to distractor objects. In addition, we included a divided attention task to investigate how search patterns changed with the addition of a secondary task. With increased cognitive load, subjects slowed down, decreased their gaze on objects, and increased the number of objects scanned in the environment. Overall, our results confirm previous findings from more controlled laboratory settings and demonstrate that complex virtual environments can be used for active visual search experimentation while maintaining a high level of precision in the quantification of gaze information and visual attention. This study contributes to our understanding of how individuals search for information in a naturalistic virtual environment. Likewise, our paradigm provides an intriguing look into the heterogeneity of individual behaviors when completing an untimed visual search task while actively navigating.


2018, Vol 122 (4), pp. 1432-1448
Author(s):  
Charlott Maria Bodenschatz ◽  
Anette Kersting ◽  
Thomas Suslow

Orientation of gaze toward specific regions of the face, such as the eyes or the mouth, helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while they completed an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants had to evaluate subsequently presented neutral faces. Results of an awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed, but briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression. After a happy facial expression, participants oriented their gaze more rapidly to the mouth region of the neutral mask. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and the dwell time on the mouth region was longest after happy facial expressions. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.

