The Compilation and Validation of a Collection of Emotional Expression Images Communicated by Synthetic and Human Faces

2013 ◽  
Vol 4 (2) ◽  
pp. 34-62
Author(s):  
Louise Lawrence ◽  
Deborah Abdel Nabi

The BARTA (Bolton Affect Recognition Tri-Stimulus Approach) is a unique database comprising over 400 colour images of the universally recognised basic emotional expressions and is the first compilation to include three different classes of validated face stimuli: emoticon, computer-generated cartoon, and photographs of human faces. The validated tri-stimulus collection (all images received ≥70% inter-rater (child and adult) consensus) has been developed to promote pioneering research into the differential effects of synthetic emotion representation on atypical emotion perception, processing and recognition in autism spectrum disorders (ASD) and, given recent evidence for an ASD synthetic-face processing advantage (Rosset et al., 2008), provides a means of investigating the benefits of recruiting synthetic face images in ASD emotion recognition training contexts.
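To make the validation criterion concrete, here is a minimal sketch of how a ≥70% inter-rater consensus filter could be computed. The data layout, function names, and example ratings are illustrative assumptions; the abstract does not describe the authors' actual scoring pipeline.

```python
# Hypothetical sketch: filtering candidate images by inter-rater consensus,
# mirroring the >=70% (child and adult) agreement criterion described above.
# Data structures and names are illustrative, not the authors' pipeline.
from collections import Counter

def consensus(labels: list[str]) -> float:
    """Proportion of raters agreeing on the modal emotion label."""
    modal_count = Counter(labels).most_common(1)[0][1]
    return modal_count / len(labels)

def validate_images(ratings: dict[str, list[str]], threshold: float = 0.70) -> dict[str, str]:
    """Keep images whose modal label reaches the consensus threshold."""
    return {img: Counter(lbls).most_common(1)[0][0]
            for img, lbls in ratings.items()
            if consensus(lbls) >= threshold}

# Example: 8 of 10 raters label the image "happiness" -> 80% consensus, kept.
ratings = {"face_001.png": ["happiness"] * 8 + ["surprise"] * 2}
print(validate_images(ratings))  # {'face_001.png': 'happiness'}
```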

Author(s):  
Aideen McParland ◽  
Stephen Gallagher ◽  
Mickey Keenan

Abstract: A defining feature of ASD is atypical gaze behaviour; however, eye-tracking studies in ‘real-world’ settings are limited, and the possibility of improving gaze behaviour in children with ASD is largely unexplored. This study investigated the gaze behaviour of ASD and typically developing (TD) children in their classroom setting. Eye-tracking technology was used to develop and pilot an operant training tool that positively reinforces typical gaze behaviour towards faces. Visual and statistical analyses of the eye-tracking data revealed different gaze behaviour patterns during live interactions for ASD and TD children depending on the interaction type. All children responded to operant training, with longer looking times observed on face stimuli post-training. The promising application of operant gaze training in ecologically valid settings is discussed.
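The reinforcement contingency described here — deliver a reinforcer when gaze dwells on a face — can be sketched as a simple loop over eye-tracker samples. Everything below (the sample format, the one-second dwell criterion, the reinforce() stub) is an assumption for illustration, not the authors' actual tool.

```python
# Illustrative sketch of an operant gaze-training contingency: deliver a
# reinforcer once gaze has dwelt on the face area of interest (AOI) for a
# criterion duration. Sample format, timing, and reinforce() are assumptions.
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float      # seconds since session start
    on_face_aoi: bool     # gaze point inside the face AOI?

def reinforce() -> None:
    print("reinforcer delivered (e.g. praise or a token)")

def run_training(samples: list[GazeSample], dwell_required: float = 1.0) -> None:
    """Reinforce each uninterrupted dwell on the face AOI of >= dwell_required seconds."""
    dwell_start = None
    for s in samples:
        if not s.on_face_aoi:
            dwell_start = None            # dwell broken; start over
        elif dwell_start is None:
            dwell_start = s.timestamp     # dwell begins
        elif s.timestamp - dwell_start >= dwell_required:
            reinforce()
            dwell_start = None            # require a fresh dwell next time

# Example: 60 Hz samples with one sustained look at the face.
samples = [GazeSample(t / 60, 30 <= t <= 100) for t in range(120)]
run_training(samples)
```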


Author(s):  
Eleonora Cannoni ◽  
Giuliana Pinto ◽  
Anna Silvia Bombi

Abstract: This study aimed to verify whether children introduce emotional expressions in their drawings of human faces, whether a preferential expression exists, and whether children’s pictorial choices change with increasing age. To this end we examined the human figure drawings made by 160 boys and 160 girls, equally divided into 4 age groups: 6–7, 8–9, 10–11 and 12–13 years; mean ages in months (SD in parentheses) were 83.30 (6.54), 106.14 (7.16), 130.49 (8.26) and 155.40 (6.66). Drawings were collected with the Draw-a-Man test instructions, i.e. without mentioning an emotional characterization. In the light of data from previous studies of emotion drawing on request, and the literature about preferred emotional expressions, we expected that an emotion would be portrayed even by the younger participants, and that the preferred emotion would be happiness. We also expected that, with the improving ability to take into account the appearance of both mouth and eyes, other expressions would be found besides the smiling face. Data were submitted to non-parametric tests to compare the frequencies of expressions (absolute and by age) and the frequencies of visual cues (absolute and by age and expression). The results confirmed that only a small number of faces were expressionless and that the most frequent emotion was happiness. However, with increasing age this representation gave way to a variety of basic emotions (sadness, fear, anger, surprise), whose representation may depend on the ability to modify the shapes of both eyes and mouth and on the changing communicative aims of the child.
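The frequency comparisons reported here suggest a chi-square-style analysis; the sketch below shows that kind of test on expression counts by age group. The counts are invented for illustration, and the abstract does not name the exact non-parametric test used.

```python
# Hedged sketch of a non-parametric frequency comparison of the kind the
# study reports: a chi-square test of expression counts across age groups.
# All counts below are invented; only the test family mirrors the text.
from scipy.stats import chi2_contingency

# rows: age groups (6-7, 8-9, 10-11, 12-13)
# columns: happiness, other basic emotion, expressionless
counts = [
    [60, 10, 10],
    [52, 20,  8],
    [45, 28,  7],
    [38, 36,  6],
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```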


Author(s):  
Benjamin Kreifelts ◽  
Thomas Ethofer

More often than not, emotion perception is a process guided by several sensory channels, accompanied by multimodal integration of emotional information. This process appears vital for effective social communication. This chapter provides an overview of recent studies describing the crossmodal integration of non-verbal emotional cues communicated via voices, faces, and bodies. The first parts of the chapter deal with the behavioural and neural correlates of multimodal integration in the healthy population using psychophysiological, electrophysiological, and neuroimaging measures, highlighting a network of brain areas involved in this process and discussing different methodological approaches. The final parts of the chapter, in contrast, are dedicated to the alterations of the multisensory integration of non-verbal emotional signals in states of psychiatric disease, with the main focus on schizophrenia and autism spectrum disorders.


2015 ◽  
Vol 29 (6) ◽  
pp. 895-908 ◽  
Author(s):  
Carly Demopoulos ◽  
Joyce Hopkins ◽  
Brandon E. Kopald ◽  
Kim Paulson ◽  
Lauren Doyle ◽  
...  

Neuroreport ◽  
2014 ◽  
Vol 25 (15) ◽  
pp. 1237-1241 ◽  
Author(s):  
Quentin Guillon ◽  
Nouchine Hadjikhani ◽  
Sophie Baduel ◽  
Jeanne Kruck ◽  
Mado Arnaud ◽  
...  

2010 ◽  
Vol 33 (6) ◽  
pp. 463-464 ◽  
Author(s):  
Piotr Winkielman

Abstract: Processing of facial expressions goes beyond simple pattern recognition. To elucidate this process, Niedenthal et al. offer a model that identifies multiple embodied and disembodied routes for expression processing, and spell out the conditions that trigger the use of different routes. I elaborate on this model by discussing recent research on emotion recognition in individuals with autism, who can use multiple routes of emotion processing and consequently can show both atypical and typical patterns of embodied simulation and mimicry.


2021 ◽  
Vol 11 (6) ◽  
pp. 734
Author(s):  
Tania Akter ◽  
Mohammad Hanif Ali ◽  
Md. Imran Khan ◽  
Md. Shahriare Satu ◽  
Md. Jamal Uddin ◽  
...  

Autism spectrum disorder (ASD) is a complex neuro-developmental disorder that affects social skills, language, speech and communication. Early detection of ASD individuals, especially children, could help devise the right therapeutic plan at the right time. Human faces encode important markers that can be used to identify ASD by analyzing facial features, eye contact, and so on. In this work, an improved transfer-learning-based autism face recognition framework is proposed to identify children with ASD more precisely in the early stages. To this end, we collected face images of children with ASD from the Kaggle data repository and applied various machine learning and deep learning classifiers as well as other transfer-learning-based pre-trained models. We observed that our improved MobileNet-V1 model demonstrates the best accuracy of 90.67% and the lowest value of 9.33% for both fall-out and miss rate compared to the other classifiers and pre-trained models. Furthermore, this classifier was used to identify distinct ASD sub-groups from the autism image data alone using the k-means clustering technique; the improved MobileNet-V1 model showed the highest accuracy (92.10%) for k = 2 autism sub-types. We hope this model will help physicians detect autistic children more reliably at an early stage.
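As a rough illustration of the pipeline this abstract describes, the sketch below combines a pre-trained MobileNet backbone with a new binary head, then clusters backbone embeddings of ASD images with k-means (k = 2). Layer choices, the data path, and all hyperparameters are assumptions, not the authors' configuration.

```python
# Sketch of a transfer-learning pipeline in the spirit of the abstract:
# a pre-trained MobileNet backbone with a new binary head (ASD vs. non-ASD),
# then k-means clustering of embeddings for the ASD images to explore
# sub-groups. Paths, sizes, and hyperparameters are illustrative guesses.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

# 1) Pre-trained MobileNet backbone, ImageNet weights, classifier head removed.
backbone = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, pooling="avg",
    weights="imagenet")
backbone.trainable = False  # freeze for the initial transfer-learning stage

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # ASD vs. non-ASD
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# 2) Hypothetical training call; "autism_faces/" stands in for the Kaggle data.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "autism_faces/train", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, epochs=10)

# 3) Cluster backbone embeddings of ASD-labelled images with k-means (k = 2),
#    echoing the sub-type analysis. Random arrays stand in for real images.
asd_images = np.random.rand(100, 224, 224, 3).astype("float32")
embeddings = backbone.predict(asd_images, verbose=0)
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
print("cluster sizes:", np.bincount(clusters))
```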


2010 ◽  
Vol 1 (3) ◽  
Author(s):  
Roy Kessels ◽  
Pieter Spee ◽  
Angelique Hendriks

Abstract: Previous studies have shown deficits in the perception of static emotional facial expressions in individuals with autism spectrum disorders (ASD), but results are inconclusive. Using dynamic facial stimuli that express emotions at different levels of intensity may produce more robust results, since these resemble the expression of emotions in daily life to a greater extent. Thirty young adolescents with high-functioning ASD (IQ>85) and 30 age- and intelligence-matched controls (ages 12 to 15) performed the Emotion Recognition Task (ERT), in which morphs depicting facial expressions of the six basic emotions (happiness, disgust, fear, anger, surprise and sadness) at nine levels of emotional intensity (20–100%) were presented on a computer screen. The results showed no overall group difference on the ERT, apart from slightly worse performance by the ASD group on the perception of fear (p<0.03) and disgust (p<0.05). No interaction was found between the intensity level of the emotions and group. High-functioning individuals with ASD perform similarly to matched controls on the perception of dynamic facial emotional expressions, even at low intensities of emotional expression. These findings are in agreement with other recent studies showing that emotion perception deficits in high-functioning ASD may be less pronounced than previously thought.
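For readers wanting to reproduce this kind of between-group comparison, a minimal sketch follows: per-participant accuracy for a single emotion compared across groups with a non-parametric test. The data are simulated, and the abstract does not specify which test produced the reported p-values.

```python
# Simulated sketch of a between-group comparison on Emotion Recognition Task
# accuracy for one emotion (e.g. fear). Data and test choice are illustrative;
# the study's exact statistical procedure is not given in the abstract.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(seed=1)
asd_accuracy = rng.uniform(0.55, 0.90, size=30)  # 30 adolescents with ASD
td_accuracy = rng.uniform(0.65, 0.95, size=30)   # 30 matched controls
u_stat, p_value = mannwhitneyu(asd_accuracy, td_accuracy)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
```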

