Response Patterns to Emotional Faces Among Adolescents Diagnosed With ADHD

2015 · Vol 22 (12) · pp. 1123-1130
Author(s): Orrie Dan, Sivan Raz

Objective: The present study investigated differences in emotional face processing between adolescents (ages 15-18) with ADHD-combined type (ADHD-CT) and typically developing controls. Method: Participants completed a visual emotional task in which they rated the degree of negativity/positivity of four facial expressions (taken from the NimStim face stimulus set). Results: Participants' ratings, response times (RTs), and the trial-to-trial variability of both were analyzed. Analyses showed a significant interaction between group and stimulus type: adolescents with ADHD-CT discriminated less between positive and negative emotional expressions than did adolescents without ADHD. In addition, adolescents with ADHD-CT exhibited greater variability in their RTs and in their ratings of facial expressions compared with controls. Conclusion: The present results lend further support to the existence of a specific deficit or alteration in the processing of emotional face stimuli among adolescents with ADHD-CT.
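
The variability measures described here reduce to per-participant, per-condition dispersion statistics. Purely as an illustration of that analysis step (not the authors' actual pipeline), a trial-level table could be summarized as follows; the file name and column names are hypothetical:

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with columns
# 'participant', 'group' (ADHD-CT vs. control), 'valence' (positive vs.
# negative expression), 'rating', and 'rt' (ms). All names are assumptions.
trials = pd.read_csv("emotional_face_ratings.csv")

# Per-participant means and intra-individual variability (SD) of ratings
# and RTs, separately for each stimulus type.
summary = (
    trials
    .groupby(["participant", "group", "valence"])
    .agg(mean_rating=("rating", "mean"),
         rating_sd=("rating", "std"),
         mean_rt=("rt", "mean"),
         rt_sd=("rt", "std"))
    .reset_index()
)
print(summary.head())
```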

2021
Author(s): Kristina Safar

Experience is suggested to shape the development of emotion processing abilities in infancy. The current dissertation investigated the influence of familiarity with particular face types and emotional faces on emotional face processing within the first year of life, using a variety of metrics. The first study examined whether experience with a particular face type (own- vs. other-race faces) affected 6- and 9-month-old infants' attentional looking preference for fearful facial expressions in a visual paired-comparison (VPC) task. Six-month-old infants showed an attentional preference for fearful over happy facial expressions when expressed by own-race faces, but not other-race faces, whereas 9-month-old infants showed an attentional preference for fearful expressions when expressed by both own-race and other-race faces, suggesting that experience influences how infants deploy their attention to different facial expressions. Using a longitudinal design, the second study examined whether exposure to emotional faces via picture-book training at 3 months of age affected infants' allocation of attention to fearful over happy facial expressions in both a VPC task and an event-related potential (ERP) task at 5 months of age. In the VPC task, 3- and 5-month-olds without exposure to emotional faces demonstrated greater allocation of attention to fearful facial expressions. Differential exposure to emotional faces revealed a potential effect of training: 5-month-old infants who had experienced fearful faces showed an attenuated preference for fearful facial expressions compared with infants who had experienced happy faces or no training. Three- and 5-month-old infants did not, however, show differential neural processing of happy and fearful facial expressions. The third study examined whether 5- and 7-month-old infants can match fearful and happy faces and voices in an intermodal preference task, and whether exposure to happy or fearful faces influences this ability. Neither 5- nor 7-month-old infants showed intermodal matching of happy or fearful facial expressions, regardless of exposure to emotional faces. Overall, results from this series of studies add to our understanding of how experience influences the development of emotional face processing in infancy.


2021 · pp. 003329412110184
Author(s): Paola Surcinelli, Federica Andrei, Ornella Montebarocci, Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly in the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies used mainly facial expressions presented frontally, and the studies that used facial expressions in profile view relied on between-subjects designs or on children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and in profile views using a within-subjects experimental design. Method: The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, one for frontal and one for profile views, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were recorded. Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
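
The emotion-specific recognition accuracy scores described in the Method are simply the proportion of correct responses per emotion, computed separately for frontal and profile presentations. A minimal sketch of that computation, with hypothetical file and column names, could look like this:

```python
import pandas as pd

# Hypothetical trial-level data with columns 'participant', 'emotion',
# 'view' ('frontal' or 'profile'), and 'correct' (0/1). Names are assumptions.
trials = pd.read_csv("kdef_recognition_trials.csv")

# Emotion-specific accuracy per participant and view: the average of
# correct responses for each emotional expression.
accuracy = (
    trials
    .groupby(["participant", "emotion", "view"])["correct"]
    .mean()
    .unstack("view")                       # one column per view
    .rename(columns=lambda v: f"acc_{v}")  # acc_frontal, acc_profile
    .reset_index()
)
print(accuracy.head())
```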


2014 · Vol 2014 · pp. 1-11
Author(s): Dina Tell, Denise Davidson, Linda A. Camras

Eye gaze direction and expression intensity effects on emotion recognition in children with autism disorder and typically developing children were investigated. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eyes, and expressions at 50% intensity, as more intense than did typically developing children. A trend was also found for sad expressions, as children with autism disorder were less accurate than typically developing children in recognizing sadness at 100% intensity with direct eyes. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.


Author(s): Michela Balconi

Neuropsychological studies have highlighted distinct brain correlates dedicated to analyzing facial expressions of emotion. Some cerebral circuits appear to be specific to emotional face comprehension, depending on whether emotional information is processed consciously or unconsciously. Moreover, the emotional content of faces (i.e., positive vs. negative; more or less arousing) may activate specific cortical networks. Among other findings, recent studies have described the contribution of the two hemispheres to face comprehension as a function of emotion type (mainly the positive vs. negative distinction) and of the specific task (comprehending vs. producing facial expressions). An overview of ERP (event-related potential) analyses is then proposed in order to understand how an observer processes a face and how a face becomes a meaningful construct even in the absence of awareness. Finally, brain oscillations are considered in order to explain the synchronization of neural populations in response to emotional faces under conscious vs. unconscious processing.


2016 · Vol 29 (8) · pp. 749-771
Author(s): Min Hooi Yong, Ted Ruffman

Dogs respond to human emotional expressions. However, it is unknown whether dogs can match emotional faces to voices in an intermodal matching task, or whether they show preferences for looking at certain emotional facial expressions over others, similar to human infants. We presented 52 domestic dogs and 24 seven-month-old human infants with two different human emotional facial expressions of the same gender simultaneously while they listened to a human voice expressing an emotion that matched one of them. Consistent with most matching studies, neither dogs nor infants looked longer at the matching emotional stimuli, yet dogs and humans demonstrated an identical pattern of looking less at sad faces when paired with happy or angry faces (irrespective of the vocal stimulus), with no preference for happy versus angry faces. Discussion focuses on why dogs and infants might have an aversion to sad faces or, alternatively, a heightened interest in angry and happy faces.


2020
Author(s): Sjoerd Stuit, Timo Kootstra, David Terburg, Carlijn van den Boomen, Maarten van der Smagt, ...

Emotional facial expressions are important visual communication signals that indicate a sender's intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g., angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histogram of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classifying which of two simultaneously presented faces is selected first. In other words, we show which visual features predict selection between two faces. Interestingly, the identified features serve as better predictors than the semantic labels of the expressions. We therefore propose that our modelling approach can further specify which visual features drive the behavioural effects related to emotional expressions, which can help resolve the inconsistencies found in this line of research.
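
As a rough sketch of this kind of feature-based approach (not the authors' exact pipeline), spatial-frequency band energies and HOG descriptors can be concatenated into one feature vector per face and passed to a linear classifier. The image size, frequency bands, and placeholder data below are assumptions for illustration only:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def sf_features(img, n_bands=6):
    """Coarse spatial-frequency profile: energy in log-spaced radial bands
    of the image's amplitude spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = np.array(spec.shape) // 2
    y, x = np.indices(spec.shape)
    radius = np.hypot(y - cy, x - cx)
    edges = np.logspace(0, np.log10(radius.max()), n_bands + 1)
    return np.array([spec[(radius >= lo) & (radius < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def hog_features(img):
    """Localized Histogram of Oriented Gradients descriptor."""
    return hog(img, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

# Placeholders for the real stimuli and behaviour: grayscale face images and
# a binary label indicating whether that face was selected first.
faces = np.random.rand(40, 128, 128)
chosen_first = np.random.randint(0, 2, 40)

X = np.array([np.concatenate([sf_features(f), hog_features(f)])
              for f in (resize(face, (128, 128)) for face in faces)])
scores = cross_val_score(LinearSVC(max_iter=10000), X, chosen_first, cv=5)
print("cross-validated accuracy:", scores.mean())
```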


2021
Author(s): Louisa Kulke, Lena Brümmer, Arezoo Pooresmaeili, Annekathrin Schacht

In everyday life, faces with emotional expressions quickly attract attention and eye movements. To study the neural mechanisms of such emotion-driven attention by means of event-related brain potentials (ERPs), tasks that employ covert shifts of attention are commonly used, in which participants need to inhibit natural eye movements towards stimuli. It remains unclear, however, how shifts of attention to emotional faces with and without eye movements differ from each other. The current preregistered study aimed to investigate neural differences between covert and overt emotion-driven attention. We combined eye tracking with ERP measurements to compare shifts of attention to faces with happy, angry, or neutral expressions when eye movements were either executed (Go conditions) or withheld (No-go conditions). Happy and angry faces led to larger early posterior negativity (EPN) amplitudes, shorter latencies of the P1 component, and faster saccades, suggesting that emotional expressions significantly affected shifts of attention. Several ERP components (N170, EPN, LPC) were augmented in amplitude when attention was shifted with an eye movement, indicating enhanced neural processing of faces when eye movements had to be executed together with a reallocation of attention. However, the modulation of ERPs by facial expressions did not differ between the Go and No-go conditions, suggesting that emotional content enhances both covert and overt shifts of attention. In summary, our results indicate that overt and covert attention shifts differ but are comparably affected by emotional content.


2010 · Vol 22 (3) · pp. 474-481
Author(s): Disa Anna Sauter, Martin Eimer

The rapid detection of affective signals from conspecifics is crucial for the survival of humans and other animals; if those around you are scared, there is reason for you to be alert and to prepare for impending danger. Previous research has shown that the human brain detects emotional faces within 150 msec of exposure, indicating a rapid differentiation of visual social signals based on emotional content. Here we use event-related brain potential (ERP) measures to show for the first time that this mechanism extends to the auditory domain, using human nonverbal vocalizations such as screams as stimuli. An early fronto-central positivity to fearful vocalizations, compared with spectrally rotated and thus acoustically matched versions of the same sounds, started 150 msec after stimulus onset. This effect was also observed for other vocalized emotions (achievement and disgust), but not for affectively neutral vocalizations, and was linked to the perceived arousal of an emotion category. That the timing, polarity, and scalp distribution of this new ERP correlate are similar to ERP markers of emotional face processing suggests that common supramodal brain mechanisms may be involved in the rapid detection of affectively relevant visual and auditory signals.
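
Spectral rotation, used here to create acoustically matched control sounds, mirrors a sound's spectrum within a frequency band so that long-term energy is preserved while the spectral shape is inverted. One simple FFT-based approximation (the band limits are illustrative, and the original study's exact procedure may differ) is:

```python
import numpy as np

def spectrally_rotate(signal, sr, f_low=50.0, f_high=4000.0):
    """Mirror the spectrum of `signal` within [f_low, f_high] Hz by reversing
    the FFT bins in that band, leaving the energy in the band unchanged."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    band = (freqs >= f_low) & (freqs <= f_high)
    rotated = spec.copy()
    rotated[band] = spec[band][::-1]   # flip bins around the band centre
    return np.fft.irfft(rotated, n=len(signal))

# Example: rotate a synthetic one-second, vocalization-like sweep at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
sweep = np.sin(2 * np.pi * (200 + 1800 * t) * t)
control = spectrally_rotate(sweep, sr)
```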


2019 · Vol 31 (11) · pp. 1631-1640
Author(s): Maria Kuehne, Isabelle Siwy, Tino Zaehle, Hans-Jochen Heinze, Janek S. Lobmaier

Facial expressions provide information about an individual's intentions and emotions and are thus an important medium for nonverbal communication. Theories of embodied cognition assume that facial mimicry, and the resulting facial feedback, plays an important role in the perception of facial emotional expressions. Although behavioral and electrophysiological studies have confirmed the influence of facial feedback on the perception of facial emotional expressions, its influence on the automatic processing of such stimuli is largely unexplored. The automatic processing of unattended facial expressions can be investigated with the visual expression-related mismatch negativity (MMN). The expression-related MMN is a differential ERP response reflecting the automatic detection of emotional change, elicited by rarely presented facial expressions (deviants) embedded among frequently presented facial expressions (standards). In this study, we investigated the impact of facial feedback on the automatic processing of facial expressions. For this purpose, participants (n = 19) performed a centrally presented visual detection task while neutral (standard), happy, and sad faces (deviants) were presented peripherally. During the task, facial feedback was manipulated by different pen-holding conditions (holding a pen with the teeth, lips, or nondominant hand). Our results indicate that the automatic processing of facial expressions is influenced by, and thus dependent on, one's own facial feedback.
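
Computationally, the expression-related MMN is a deviant-minus-standard difference wave averaged across trials. A minimal numpy sketch with placeholder epoched EEG data (the sampling rate, channel index, and time window are assumptions) might look like this:

```python
import numpy as np

# Placeholder epoched EEG data: (epochs x channels x time points).
sr = 500                                          # sampling rate in Hz (assumed)
standard_epochs = np.random.randn(400, 32, 500)   # neutral standards (placeholder)
deviant_epochs = np.random.randn(60, 32, 500)     # happy/sad deviants (placeholder)

# Average each condition across trials, then subtract: the visual
# expression-related MMN is the deviant-minus-standard difference wave.
standard_erp = standard_epochs.mean(axis=0)
deviant_erp = deviant_epochs.mean(axis=0)
vmmn = deviant_erp - standard_erp                 # channels x time

# Mean vMMN amplitude at one posterior channel in a 150-350 ms window
# (channel and window chosen only for illustration).
times = np.arange(standard_epochs.shape[2]) / sr
window = (times >= 0.15) & (times <= 0.35)
posterior_channel = 20
print("mean vMMN amplitude:", vmmn[posterior_channel, window].mean())
```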

