The Effect of Sleep Deprivation on Recognition of Ambiguous Emotional Facial Expressions in Individuals With ADHD

2018, Vol. 24(4), pp. 565-575
Author(s): Orrie Dan, Iris Haimov, Kfir Asraf, Kesem Nachum, Ami Cohen

Objective: The present study sought to investigate whether young adults with ADHD have more difficulty recognizing emotional facial expressions compared with young adults without ADHD, and whether such a difference worsens following sleep deprivation. Method: Thirty-one young men (mean age = 25.6) with (n = 15) or without (n = 16) a diagnosis of ADHD were included in this study. The participants were instructed to sleep 7 hr or more each night for one week, and their sleep quality was monitored via actigraph. Subsequently, the participants were kept awake in a controlled environment for 30 hr. The participants completed a visual emotional morph task twice—at the beginning and at the end of this period. The task included presentation of interpolated face stimuli ranging from neutral facial expressions to fully emotional facial expressions of anger, sadness, or happiness, allowing for assessment of the intensity threshold for recognizing these emotional facial expressions. Results: Actigraphy data demonstrated that while the nightly sleep duration of the participants with ADHD was similar to that of participants without ADHD, their sleep efficiency was poorer. At the onset of the experiment, there were no differences in recognition thresholds between the participants with ADHD and those without ADHD. Following sleep deprivation, however, the ADHD group required clearer facial expressions to recognize the presence of angry, sad, and, to a lesser extent, happy faces. Conclusion: Among young adults with ADHD, sleep deprivation may hinder the processing of emotional facial stimuli.
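
The abstract does not describe how the intensity threshold was derived from morph-task responses. A common approach is to fit a psychometric function over morph intensity and read off the 50% recognition point; below is a minimal sketch under that assumption, with entirely hypothetical data (not the authors' code or parameters).

```python
# Minimal sketch (not the authors' analysis): estimating the intensity
# threshold at which an emotion is recognized on a neutral-to-emotion
# morph continuum, assuming a logistic psychometric function.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """P(recognize emotion) as a function of morph intensity (0-100%)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Hypothetical data: morph intensities shown and the proportion of
# trials on which the target emotion (e.g., anger) was recognized.
intensity = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
p_recognized = np.array([0.0, 0.05, 0.10, 0.30, 0.55, 0.80, 0.90, 1.0, 1.0])

(threshold, slope), _ = curve_fit(
    logistic, intensity, p_recognized, p0=[50.0, 0.1]
)
print(f"50% recognition threshold: {threshold:.1f}% emotional intensity")
```

A higher fitted threshold after sleep deprivation would correspond to the "required clearer facial expressions" finding reported above.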

2021, Vol. 11(1)
Author(s): Ami Cohen, Kfir Asraf, Ivgeny Saveliev, Orrie Dan, Iris Haimov

Abstract. The ability to recognize emotions from facial expressions is essential to the development of complex social cognition behaviors, and impairments in this ability are associated with poor social competence. This study aimed to examine the effects of sleep deprivation on the processing of emotional facial expressions and nonfacial stimuli in young adults with and without attention-deficit/hyperactivity disorder (ADHD). Thirty-five men (mean age 25.4) with (n = 19) and without (n = 16) ADHD participated in the study. During the five days preceding the experimental session, the participants were required to sleep at least seven hours per night (23:00/24:00–7:00/9:00) and their sleep was monitored via actigraphy. On the morning of the experimental session, the participants completed a 4-stimulus visual oddball task combining facial and nonfacial stimuli, and repeated it after 25 h of sustained wakefulness. At baseline, both study groups had poorer performance in response to facial rather than non-facial target stimuli on all indices of the oddball task, with no differences between the groups. Following sleep deprivation, rates of omission errors, commission errors, and reaction time variability increased significantly in the ADHD group but not in the control group. Time and target type (face/non-face) did not have an interactive effect on any indices of the oddball task. Young adults with ADHD are more sensitive to the negative effects of sleep deprivation on attentional processes, including those related to the processing of emotional facial expressions. As poor sleep and excessive daytime sleepiness are common in individuals with ADHD, it is plausible that poor sleep quality and quantity play an important role in the cognitive functioning deficits associated with ADHD, including those related to the processing of emotional facial expressions.
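
The oddball indices named above (omission errors, commission errors, reaction time variability) have standard definitions. The following is a minimal sketch of how they might be computed from trial-level data; the data layout is hypothetical and not taken from the paper.

```python
# Minimal sketch (hypothetical data layout, not the authors' pipeline):
# computing the oddball-task indices reported above from trial-level data.
import numpy as np

# Each trial: (is_target, responded, reaction_time_ms or None)
trials = [
    (True, True, 412.0), (False, False, None), (True, False, None),
    (False, True, 390.0), (True, True, 455.0), (False, False, None),
]

targets = [t for t in trials if t[0]]
nontargets = [t for t in trials if not t[0]]

# Omission error: no response to a target stimulus.
omission_rate = sum(1 for t in targets if not t[1]) / len(targets)
# Commission error: a response to a non-target stimulus.
commission_rate = sum(1 for t in nontargets if t[1]) / len(nontargets)
# Reaction time variability: SD of RTs on responded target trials.
rts = np.array([t[2] for t in targets if t[1]])
rt_variability = rts.std(ddof=1)

print(f"omissions: {omission_rate:.2f}, commissions: {commission_rate:.2f}, "
      f"RT SD: {rt_variability:.1f} ms")
```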


2020
Author(s): Sjoerd Stuit, Timo Kootstra, David Terburg, Carlijn van den Boomen, Maarten van der Smagt, ...

Abstract. Emotional facial expressions are important visual communication signals that indicate a sender's intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g., angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the first selected face out of two simultaneously presented faces. In other words, we show which visual features predict selection between two faces. Interestingly, the identified features serve as better predictors than the semantic labels of the expressions. We therefore propose that our modelling approach can further specify which visual features drive the behavioural effects related to emotional expressions, which can help resolve the inconsistencies found in this line of research.
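
To make the general approach concrete, here is a minimal sketch of the HOG half of the feature pipeline: extract localized HOG descriptors from a pair of face images and train a classifier to predict which face is selected first. This is an illustration of the technique, not the authors' exact features, parameters, or model; the images here are synthetic placeholders, and skimage/scikit-learn are assumed tools.

```python
# Minimal sketch of the HOG-features-plus-classifier idea (not the
# authors' exact pipeline): predict which of two faces is selected first.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def face_pair_features(face_a, face_b):
    """Concatenate localized HOG descriptors of both faces in a pair."""
    hog_a = hog(face_a, orientations=8, pixels_per_cell=(16, 16),
                cells_per_block=(1, 1))
    hog_b = hog(face_b, orientations=8, pixels_per_cell=(16, 16),
                cells_per_block=(1, 1))
    return np.concatenate([hog_a, hog_b])

# Hypothetical inputs: pairs of grayscale face images (128x128) and a
# label indicating whether the left face was selected first.
rng = np.random.default_rng(0)
pairs = [(rng.random((128, 128)), rng.random((128, 128))) for _ in range(40)]
selected_left = rng.integers(0, 2, size=40)

X = np.array([face_pair_features(a, b) for a, b in pairs])
scores = cross_val_score(LinearSVC(), X, selected_left, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy above chance on such features is the kind of evidence the abstract refers to when it says visual features predict selection better than semantic labels.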


1986, Vol. 62(2), pp. 419-423
Author(s): Gilles Kirouac, Martin Bouchard, Andrée St-Pierre

The purpose of this study was to measure the capacity of human subjects to match facial expressions of emotions with behavioral categories representing the motivational states those expressions were supposed to illustrate. One hundred university students were shown facial stimuli that they had to classify using ethological behavioral categories. The results showed that accuracy of judgment was overall lower than what is usually found when fundamental emotional categories are used. The data also indicated that the relation between emotional expressions and behavioral tendencies was more complex than expected.


2007, Vol. 38(10), pp. 1475-1483
Author(s): K. S. Kendler, L. J. Halberstadt, F. Butera, J. Myers, T. Bouchard, ...

Background: While the role of genetic factors in self-report measures of emotion has been frequently studied, we know little about the degree to which genetic factors influence emotional facial expressions. Method: Twenty-eight pairs of monozygotic (MZ) and dizygotic (DZ) twins from the Minnesota Study of Twins Reared Apart were shown three emotion-inducing films and their facial responses were recorded. These recordings were blindly scored by trained raters. Ranked correlations between twins were calculated controlling for age and sex. Results: Twin pairs were significantly correlated for facial expressions of general positive emotions, happiness, surprise, and anger, but not for general negative emotions, sadness, disgust, or average emotional intensity. MZ pairs (n = 18) were more correlated than DZ pairs (n = 10) for most but not all emotional expressions. Conclusions: Since these twin pairs had minimal contact with each other prior to testing, these results support significant genetic effects on the facial display of at least some human emotions in response to standardized stimuli. The small sample size resulted in estimated twin correlations with very wide confidence intervals.
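
The abstract does not specify how the ranked correlations controlling for age and sex were computed. One common approach is to residualize each twin's score on the covariates and then rank-correlate the residuals; the sketch below follows that assumption with synthetic data, and is not the authors' procedure.

```python
# Minimal sketch (one common approach; the abstract does not specify
# the exact procedure): rank correlation between twin 1 and twin 2
# scores after partialling out age and sex via linear regression.
import numpy as np
from scipy.stats import spearmanr

def residualize(y, covariates):
    """Residuals of y after least-squares regression on covariates."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Hypothetical per-pair data: expression intensity score for each twin,
# plus shared covariates (age in years, sex coded 0/1).
rng = np.random.default_rng(1)
n_pairs = 28
age = rng.uniform(20, 60, n_pairs)
sex = rng.integers(0, 2, n_pairs).astype(float)
twin1 = 0.5 * age / 60 + rng.normal(size=n_pairs)
twin2 = 0.5 * age / 60 + rng.normal(size=n_pairs)

covs = np.column_stack([age, sex])
rho, p = spearmanr(residualize(twin1, covs), residualize(twin2, covs))
print(f"partial rank correlation: rho={rho:.2f}, p={p:.3f}")
```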


2016, Vol. 29(8), pp. 749-771
Author(s): Min Hooi Yong, Ted Ruffman

Dogs respond to human emotional expressions. However, it is unknown whether dogs can match emotional faces to voices in an intermodal matching task or whether they show preferences for looking at certain emotional facial expressions over others, similar to human infants. We presented 52 domestic dogs and 24 seven-month-old human infants with two different human emotional facial expressions of the same gender simultaneously, while listening to a human voice expressing an emotion that matched one of them. Consistent with most matching studies, neither dogs nor infants looked longer at the matching emotional stimuli, yet dogs and humans demonstrated an identical pattern of looking less at sad faces when paired with happy or angry faces (irrespective of the vocal stimulus), with no preference for happy versus angry faces. Discussion focuses on why dogs and infants might have an aversion to sad faces or, alternatively, a heightened interest in angry and happy faces.


2002, Vol. 8(1), pp. 130-135
Author(s): Jocelyn M. Keillor, Anna M. Barrett, Gregory P. Crucian, Sarah Kortenkamp, Kenneth M. Heilman

The facial feedback hypothesis suggests that facial expressions are either necessary or sufficient to produce emotional experience. Researchers have noted that the ideal test of the necessity aspect of this hypothesis would be an evaluation of emotional experience in a patient suffering from a bilateral facial paralysis; however, this condition is rare and no such report has been documented. We examined the role of facial expressions in the determination of emotion by studying a patient (F.P.) suffering from a bilateral facial paralysis. Despite her inability to convey emotions through facial expressions, F.P. reported normal emotional experience. When F.P. viewed emotionally evocative slides, her reactions were not dampened relative to the normative sample. F.P. retained her ability to detect, discriminate, and image emotional expressions. These findings are not consistent with theories stating that feedback from an active face is necessary to experience emotion, or to process emotional facial expressions.


2007, Vol. 21(2), pp. 100-108
Author(s): Michela Balconi, Claudio Lucchiari

Abstract. In this study we analyzed whether facial expression recognition is marked by specific event-related potential (ERP) correlates and whether conscious and unconscious processing of emotional facial stimuli are qualitatively different processes. ERPs elicited by supraliminal and subliminal (10 ms) stimuli were recorded while subjects were viewing emotional facial expressions of four emotions or neutral stimuli. Two ERP effects (N2 and P3) were analyzed in terms of their peak amplitude and latency variations. An emotional specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). Unaware information processing proved to be quite similar to aware processing in terms of peak morphology but not of latency. A major result of this research was that unconscious stimulation produced a more delayed peak variation than conscious stimulation did. In addition, a more posterior distribution of the ERP was found for N2 as a function of the emotional content of the stimulus. By contrast, cortical lateralization (right/left) was not correlated with conscious/unconscious stimulation. The functional significance of our results is discussed in terms of subliminal effects and emotion recognition.
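
Peak amplitude and latency, the two ERP measures analyzed above, are typically extracted by searching for the extremum within a component-specific time window. The sketch below illustrates this on a synthetic waveform; the search windows and sampling rate are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch (illustrative windows, not the authors' exact settings):
# extracting N2 and P3 peak amplitude and latency from an averaged ERP
# waveform sampled at 1000 Hz, time-locked to stimulus onset.
import numpy as np

times = np.arange(-100, 600) / 1000.0  # -100 to 599 ms, in seconds
# Hypothetical averaged ERP (microvolts): a negative N2-like deflection
# around 250 ms and a positive P3-like deflection around 400 ms.
erp = (-3 * np.exp(-((times - 0.25) ** 2) / (2 * 0.03 ** 2))
       + 5 * np.exp(-((times - 0.40) ** 2) / (2 * 0.05 ** 2)))

def peak_in_window(times, erp, t_min, t_max, polarity):
    """Peak amplitude and latency within [t_min, t_max] seconds."""
    mask = (times >= t_min) & (times <= t_max)
    segment = erp[mask]
    idx = segment.argmin() if polarity == "negative" else segment.argmax()
    return segment[idx], times[mask][idx]

n2_amp, n2_lat = peak_in_window(times, erp, 0.18, 0.32, "negative")
p3_amp, p3_lat = peak_in_window(times, erp, 0.30, 0.50, "positive")
print(f"N2: {n2_amp:.1f} uV at {n2_lat * 1000:.0f} ms; "
      f"P3: {p3_amp:.1f} uV at {p3_lat * 1000:.0f} ms")
```

The delayed-peak finding for subliminal stimuli corresponds to a shift in the latency value this kind of procedure returns.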


Author(s): Fernando Marmolejo-Ramos, Aiko Murata, Kyoshiro Sasaki, Yuki Yamada, Ayumi Ikeda, ...

Abstract. In this experiment, we replicated the effect of muscle engagement on perception such that the recognition of another’s facial expressions was biased by the observer’s facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multilab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the “pen-in-the-teeth” condition, participants tended to lower their threshold of perception of happy expressions in facial stimuli compared to the “no-pen” condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli such that participants lowered their threshold to perceive happy walkers in the pen-in-the-teeth condition compared to the no-pen condition. This pattern of results was also found in a second experiment in which the no-pen condition was replaced by a situation in which participants held a pen in their lips (“pen-in-lips” condition). These results suggested that facial muscular activity alters the recognition of not only facial expressions but also bodily expressions.


2020
Author(s): Fernando Marmolejo-Ramos, Aiko Murata, Kyoshiro Sasaki, Yuki Yamada, Ayumi Ikeda, ...

In this research, we replicated the effect of muscle engagement on perception such that the recognition of another's facial expressions was biased by the observer's facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multi-lab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the 'pen-in-the-teeth' condition, participants tended to lower their threshold of perception of 'happy' expressions in facial stimuli compared to the 'no-pen' condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli such that participants lowered their threshold to perceive 'happy' walkers in the 'pen-in-the-teeth' condition compared to the 'no-pen' condition. This pattern of results was also found in a second experiment in which the 'no-pen' condition was replaced by a situation in which participants held a pen in their lips ('pen-in-lips' condition). These results suggested that facial muscular activity alters the recognition not only of facial expressions but also of bodily expressions.

