Creation and validation of the Picture-Set of Young Children’s Affective Facial Expressions (PSYCAFE)

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260871
Author(s):  
Matthias Franz ◽  
Tobias Müller ◽  
Sina Hahn ◽  
Daniel Lundqvist ◽  
Dirk Rampoldt ◽  
...  

The immediate detection and correct processing of affective facial expressions are among the most important competences in social interaction and thus a main subject in emotion and affect research. Studies in these research domains generally use pictures of adults displaying affective facial expressions as experimental stimuli. However, studies investigating developmental psychology and attachment behaviour need age-matched stimuli in which children display the affective expressions. PSYCAFE is a newly developed picture set of children’s faces. It includes reference portraits of girls and boys aged 4 to 6 years, digitally averaged from different individual pictures that were assigned by cluster analysis to six basic affects (fear, disgust, happiness, sadness, anger and surprise) plus a neutral facial expression. This procedure yielded deindividualized, affect-prototypical portraits. Individual affect-expressive portraits of adults from an already validated picture set (KDEF) were processed in a similar way to create affect-prototypical images of adults as well. The stimulus set was validated on human observers and includes emotion recognition accuracy rates as well as intensity, authenticity and likeability ratings of the specific affect displayed. Moreover, the stimuli were also characterized by the iMotions Facial Expression Analysis Module, providing additional probability values representing the likelihood that a stimulus depicts the expected affect. Finally, the validation data from human observers and iMotions are compared to facial mimicry of healthy adults in response to these portraits, measured by facial EMG (m. zygomaticus major and m. corrugator supercilii).
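A minimal sketch of the categorization step described above, assuming (since the abstract does not specify the algorithm) that portraits are clustered by their observer affect ratings with k-means; the ratings matrix here is randomly generated placeholder data, not the study's measurements.

```python
# Sketch: assign each portrait to one of six basic affects plus neutral by
# clustering observer ratings. Assumed workflow, not the authors' exact pipeline.
import numpy as np
from sklearn.cluster import KMeans

AFFECTS = ["fear", "disgust", "happiness", "sadness", "anger", "surprise", "neutral"]

# Hypothetical data: one row per portrait, one column per affect rating scale.
rng = np.random.default_rng(0)
ratings = rng.random((140, len(AFFECTS)))

kmeans = KMeans(n_clusters=len(AFFECTS), n_init=10, random_state=0).fit(ratings)

# Label each cluster by the affect scale with the highest mean rating.
for k, center in enumerate(kmeans.cluster_centers_):
    print(f"cluster {k}: {AFFECTS[int(np.argmax(center))]}")
```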

2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial-and-error to avoid receiving aversive stimulation by either reciprocate (congruently) or respond opposite (incongruently) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings which helped clarifying the underlying decision-making processes of our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which there is a need to gradually adapt facial expression selection to both social and non-social reinforcements.
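A minimal sketch of the reinforcement learning component, assuming a simple Rescorla-Wagner value update with softmax choice between two facial responses; the abstract does not specify the fitted model, so the structure and parameters below are illustrative assumptions only.

```python
# Sketch: a learner chooses between "smile" (0) and "frown" (1) in response to
# a displayed expression and learns by trial and error to avoid aversive
# stimulation. Model specification is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.2, 5.0                              # learning rate, inverse temperature
q = {"happy": np.zeros(2), "angry": np.zeros(2)}    # action values per target face
CORRECT = {"happy": 0, "angry": 1}                  # e.g., the congruent rule

for trial in range(200):
    face = rng.choice(["happy", "angry"])
    p = np.exp(beta * q[face]) / np.exp(beta * q[face]).sum()  # softmax policy
    action = rng.choice(2, p=p)
    reward = 1.0 if action == CORRECT[face] else 0.0           # avoiding shock = reward
    q[face][action] += alpha * (reward - q[face][action])      # Rescorla-Wagner update
```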


Author(s):  
Yongmian Zhang ◽  
Jixu Chen ◽  
Yan Tong ◽  
Qiang Ji

This chapter describes a probabilistic framework for faithful reproduction of spontaneous facial expressions on a synthetic face model in a real-time interactive application. The framework consists of a coupled Bayesian network (BN) that unifies facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to capture the relationships between facial expressions and facial motions, as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented as a DBN, the AU model captures the rigid head movements and nonrigid facial muscular movements of a spontaneous facial expression. At the synthesizer, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensities through top-down inference, according to the current state of facial expression and the pose information output by the analysis end. The two BNs are connected statically through a data stream link. The coupled BN brings several benefits. First, a facial expression is inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by the misdetection of facial features. Second, more realistic-looking facial expressions can be reproduced by modeling the dynamics of human expressions during facial expression analysis. Third, a very low transmission bitrate (9 bytes per frame) can be achieved.
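A simplified sketch of the spatio-temporal inference idea: the chapter's DBN is far richer, but a plain HMM forward filter over expression states, with AU detections as observations, captures the combination of temporal prediction and measurement update in miniature. All matrices and observations below are invented for illustration.

```python
# Sketch: filter a hidden expression state from noisy AU observations over
# frames. A toy stand-in for the chapter's DBN inference, not its actual model.
import numpy as np

states = ["neutral", "happy", "angry"]
T = np.array([[0.90, 0.05, 0.05],   # P(state_t | state_{t-1}), row = previous state
              [0.10, 0.85, 0.05],
              [0.10, 0.05, 0.85]])
O = np.array([[0.70, 0.15, 0.15],   # P(observed AU pattern | state), 3 symbols
              [0.10, 0.80, 0.10],
              [0.10, 0.10, 0.80]])

belief = np.array([1.0, 0.0, 0.0])  # start in neutral
for obs in [1, 1, 2, 2, 2]:          # detected AU patterns across frames
    belief = T.T @ belief            # temporal prediction
    belief *= O[:, obs]              # incorporate the AU measurement
    belief /= belief.sum()
    print(states[int(np.argmax(belief))], belief.round(3))
```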


Author(s):  
Abdolhossein Sarrafzadeh ◽  
Samuel T.V. Alexander ◽  
Jamshid Shanbehzadeh

Intelligent tutoring systems (ITS) are still not as effective as one-on-one human tutoring. The next generation of intelligent tutors is expected to take into account the emotional state of students. This paper presents research on the development of an Affective Tutoring System (ATS). The system, called “Easy with Eve”, adapts to students via a lifelike animated agent who can detect student emotion through facial expression analysis and can display emotion herself. Eve’s adaptations are guided by a case-based method for adapting to student states; this method uses data generated by an observational study of human tutors. This paper presents an analysis of the facial expressions of students engaged in learning with human tutors, and shows how a facial expression recognition system, a lifelike agent and a case-based system built on this analysis have been integrated to develop an ATS for mathematics.
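A minimal sketch of the case-based idea: retrieve the most similar recorded student state and reuse the tutor action observed for it. The case features and actions below are invented placeholders, not the study's actual data or method details.

```python
# Sketch: nearest-neighbour case retrieval over student-state features.
import numpy as np

# Each case: (student-state feature vector, tutor action observed for it).
# Hypothetical features: [detected frustration, detected boredom, recent error rate]
cases = [
    (np.array([0.9, 0.1, 0.8]), "encourage and give a hint"),
    (np.array([0.1, 0.8, 0.2]), "increase task difficulty"),
    (np.array([0.1, 0.1, 0.1]), "continue current exercise"),
]

def retrieve_action(state):
    dists = [np.linalg.norm(state - feat) for feat, _ in cases]
    return cases[int(np.argmin(dists))][1]

print(retrieve_action(np.array([0.8, 0.2, 0.7])))  # -> "encourage and give a hint"
```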


Organized in eleven thematic sections, The Science of Facial Expression offers a broad perspective on the “geography” of the science of facial expression. It reviews the scientific history of emotion perception and the evolutionary origins and functions of facial expression. It includes an updated compilation on the great debate between Basic Emotion Theory, Behavioral Ecology and Psychological Constructionism. The developmental and social psychology of facial expressions is explored through the role of facial expressions in child development, social interaction, and culture. The book also covers appraisal theory, concepts, neural and behavioral processes, and lesser-known facial behaviors such as yawning, vocal crying, and vomiting. In addition, the book reflects how research on the “expression of emotion” is moving towards recognizing the significance of context in the production and interpretation of facial expressions. The authors expose various fundamental questions and controversies yet to be resolved, but in doing so open many sources of inspiration to pursue in the scientific study of facial expression.


Human emotions are mental states of feeling that arise spontaneously rather than through cognitive effort. Among the basic states are happiness, anger, sadness and surprise, along with a neutral state. These internal feelings are reflected on the face as facial expressions. This paper presents a novel methodology for facial expression analysis that supports the development of a facial expression recognition system able to classify these five basic emotions in real time. The recognition of facial expressions is important because of its applications in many domains, such as artificial intelligence, security and robotics. Many different approaches can be used to tackle Facial Expression Recognition (FER), but the technique best suited for automated FER is the Convolutional Neural Network (CNN). Thus, a novel CNN architecture is proposed, and a combination of multiple datasets, such as FER2013, FER+, JAFFE and CK+, is used for training and testing. This helps to improve accuracy and to develop a robust real-time system. The proposed methodology delivers good results, and the obtained accuracy may encourage and support researchers in building better models for automated facial expression recognition systems.
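A minimal sketch of a CNN for this task on 48×48 grayscale face crops (the FER2013 input format) with five output classes; the paper's actual architecture is not given in the abstract, so this layout is an illustrative assumption.

```python
# Sketch: a small CNN classifying 48x48 grayscale face crops into 5 emotions.
import torch
import torch.nn as nn

class FerCnn(nn.Module):
    def __init__(self, n_classes=5):  # five basic emotions per the paper
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = FerCnn()(torch.randn(8, 1, 48, 48))  # a batch of 8 face crops
print(logits.shape)                            # torch.Size([8, 5])
```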


2018 ◽  
Author(s):  
Louisa Kulke ◽  
Dennis Feyerabend ◽  
Annekathrin Schacht

Human faces express emotions, informing others about their affective states. To measure expressions of emotion, facial electromyography (EMG) has been widely used, requiring electrodes and technical equipment. More recently, emotion recognition software has been developed that detects emotions from video recordings of human faces. However, its validity and comparability to EMG measures are unclear. The aim of the current study was to compare the Affectiva Affdex emotion recognition software by iMotions with EMG measurements of the zygomaticus major and corrugator supercilii muscles with respect to their ability to identify happy, angry and neutral faces. Twenty participants imitated these facial expressions while videos and EMG were recorded. Both the software and EMG detected happy and angry expressions above chance, while EMG more often falsely identified neutral expressions as negative than the software did. Overall, EMG and software values correlated highly. In conclusion, the Affectiva Affdex software can identify emotions, and its results are comparable to EMG findings.
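A minimal sketch of the kind of comparison reported above: correlating trial-wise EMG activity with the software's emotion output. The arrays are randomly generated stand-ins for the study's actual measurements.

```python
# Sketch: Pearson correlation between EMG activity and a software emotion score.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
zygomaticus_emg = rng.random(20)                            # smiling muscle, per trial
affdex_joy = 0.8 * zygomaticus_emg + 0.2 * rng.random(20)   # placeholder "joy" output

r, p = pearsonr(zygomaticus_emg, affdex_joy)
print(f"r = {r:.2f}, p = {p:.3f}")
```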


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2786 ◽  
Author(s):  
Ehsan Othman ◽  
Frerk Saxen ◽  
Dmitri Bershadskyy ◽  
Philipp Werner ◽  
Ayoub Al-Hamadi ◽  
...  

Experimental economic laboratories run many studies to test theoretical predictions against actual human behaviour, including public goods games. In this experiment, participants in a group have the option to invest money in a public account or to keep it. All invested money is multiplied and then evenly distributed. This structure incentivizes free riding, so contributions to the public good decline over time. Face-to-face communication (FFC) diminishes free riding and thus positively affects contribution behaviour, but the question of how it does so has remained largely unanswered. In this paper, we investigate two communication channels, aiming to explain what promotes cooperation and discourages free riding. Firstly, the facial expressions of the group in the 3-minute FFC videos are automatically analysed to predict the group’s behaviour towards the end of the game. The proposed automatic facial expression analysis approach uses a new group activity descriptor and random forest classification. Secondly, the contents of the FFC are investigated by categorising strategy-relevant topics and using meta-data. The results show that it is possible to predict whether a group will fully contribute until the end of the game based on facial expression data from three minutes of FFC, although deeper understanding requires a larger dataset. Facial expression analysis and content analysis found that FFC, and talking until the very end, had a significant positive effect on contributions.
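A minimal sketch of the classification step: aggregate per-group facial expression statistics from the FFC videos into a fixed-length descriptor and classify contribution behaviour with a random forest. The feature content and labels are assumptions for illustration, not the paper's actual descriptor.

```python
# Sketch: random forest over aggregated per-group facial expression features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# Hypothetical descriptor per group: e.g., mean/std of smile, brow-lowering,
# and talking probabilities over the 3-minute video.
X = rng.random((40, 6))        # 40 groups, 6 aggregated features
y = rng.integers(0, 2, 40)     # 1 = group fully contributes until game end

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```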


Author(s):  
Michel Valstar ◽  
Stefanos Zafeiriou ◽  
Maja Pantic

Automatic facial expression analysis systems have come a long way since the earliest approaches in the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expression analysis coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly due to the cost-effectiveness of existing recording equipment, until recently almost all work in this area involved 2D imagery, despite its inherent problems with pose and illumination variation. To deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition and summarise current challenges and opportunities.


Author(s):  
Priya Saha ◽  
Debotosh Bhattacharjee ◽  
Barin Kumar De ◽  
Mita Nasipuri

There has been considerable research on facial expression analysis and recognition in both the visible and the thermal spectrum, and several facial expression databases have been designed in both modalities. However, little attention has been given to analyzing blended facial expressions in the thermal infrared spectrum. In this paper, we introduce a Visual-Thermal Blended Facial Expression Database (VTBE) that contains visual and thermal face images with both basic and blended facial expressions: 12 posed blended facial expressions and six spontaneous basic facial expressions in both modalities. We propose the Deformed Thermal Facial Area (DTFA) of thermal expressive face images and use it to differentiate between basic and blended expressions. We further propose a fusion of the DTFA and the Deformed Visual Facial Area (DVFA), combining the features of both modalities, and conduct experiments on the new database. To show the effectiveness of our approach, we also compare our method with state-of-the-art methods on the USTC-NVIE database. Experimental results reveal that our approach is superior to the state of the art.
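A minimal sketch of feature-level fusion as described above: concatenate descriptors extracted from the thermal (DTFA) and visible (DVFA) facial areas and train a single classifier. The feature extraction itself is abstracted away, and all data below are placeholders.

```python
# Sketch: early fusion of thermal and visible facial-area descriptors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
dtfa_feats = rng.random((60, 16))   # thermal-area descriptors, one row per image
dvfa_feats = rng.random((60, 16))   # visible-area descriptors, same images
labels = rng.integers(0, 2, 60)     # 0 = basic expression, 1 = blended expression

fused = np.hstack([dtfa_feats, dvfa_feats])  # simple feature concatenation
clf = SVC().fit(fused, labels)
print(clf.score(fused, labels))
```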


2007 ◽  
Vol 19 (3) ◽  
pp. 315-323 ◽  
Author(s):  
Ayako Watanabe ◽  
Masaki Ogino ◽  
Minoru Asada ◽  
...  

Sympathy is a key issue in interaction and communication between robots and their users. In developmental psychology, intuitive parenting is considered the maternal scaffolding upon which children develop sympathy, as caregivers mimic or exaggerate the child’s emotional facial expressions [1]. We model human intuitive parenting using a robot that associates a caregiver’s mimicked or exaggerated facial expressions with the robot’s internal state in order to learn a sympathetic response. The internal state space and the facial expressions are defined based on psychological studies and change dynamically in response to external stimuli. After learning, the robot responds to the caregiver’s internal state, inferred by observing human facial expressions, and expresses its own internal state facially if this synchronization evokes a response to the caregiver’s internal state.
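A minimal sketch of the associative idea, assuming a simple Hebbian outer-product update that links the robot's internal-state vector to the caregiver's concurrently observed expression vector; the dimensions, data, and learning rule are invented for illustration and are not the paper's actual model.

```python
# Sketch: Hebbian association between internal state and observed expression.
import numpy as np

rng = np.random.default_rng(5)
n_state, n_face = 4, 6              # internal-state and expression dimensions
W = np.zeros((n_state, n_face))     # association weights
eta = 0.1                           # learning rate

for _ in range(100):                     # intuitive-parenting episodes
    internal = rng.random(n_state)       # robot's current internal state
    face = rng.random(n_face)            # caregiver's mimicked expression
    W += eta * np.outer(internal, face)  # strengthen co-occurring pairs

# After learning: activate internal-state dimensions from an observed expression.
observed = rng.random(n_face)
print(W @ observed)
```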

