Computerized Facial Emotion Expression Recognition

Author(s):  
Mattis Geiger ◽  
Oliver Wilhelm
2018 ◽  
Vol 14 (1) ◽  
pp. 81-95 ◽  
Author(s):  
Indrit Bègue ◽  
Maarten Vaessen ◽  
Jeremy Hofmeister ◽  
Marice Pereira ◽  
Sophie Schwartz ◽  
...  

2021 ◽  
pp. 104-117
Author(s):  
Mari Fitzduff

This chapter looks at the importance of understanding the many cultural differences that exist between different groups and in different contexts around the world. Without sensitivity to such differences, wars can be lost and positive influences minimized. These differences include the existence of high-context versus low-context societies, differing hierarchical approaches to power and authority, collectivist versus individualist societies, differing emotion expression/recognition, gender differences, differing evidencing of empathy, face preferences, and communication styles. Lack of cultural attunement to these issues can exacerbate misunderstandings and conflicts unless they are understood and factored into difficult strategies and dialogues.


2020 ◽  
Vol 28 (1) ◽  
pp. 97-111
Author(s):  
Nadir Kamel Benamara ◽  
Mikel Val-Calvo ◽  
Jose Ramón Álvarez-Sánchez ◽  
Alejandro Díaz-Morcillo ◽  
Jose Manuel Ferrández-Vicente ◽  
...  

Facial emotion recognition (FER) has been extensively researched over the past two decades due to its direct impact on the computer vision and affective robotics fields. However, the datasets available for training these models often include mislabelled data, a consequence of labeller bias, which drives models to learn incorrect features. In this paper, a facial emotion recognition system is proposed that addresses automatic face detection and facial expression recognition separately; the latter is performed by an ensemble of only four deep convolutional neural networks, while a label smoothing technique is applied to deal with the mislabelled training data. The proposed system takes only 13.48 ms on a dedicated graphics processing unit (GPU) and 141.97 ms on a CPU to recognize facial emotions, and it reaches current state-of-the-art performance on the challenging FER2013, SFEW 2.0, and ExpW databases, with recognition accuracies of 72.72%, 51.97%, and 71.82%, respectively.
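
The abstract above names label smoothing as the defence against mislabelled training data but gives no implementation details. As a rough sketch of that general technique (not the authors' code), the snippet below implements a smoothed cross-entropy loss in PyTorch; the 7-class setup, the epsilon value of 0.1, and the function name smoothed_cross_entropy are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, num_classes=7, epsilon=0.1):
    """Cross-entropy against smoothed targets.

    The labelled class keeps probability (1 - epsilon) + epsilon / num_classes,
    and the remaining mass is spread uniformly over all classes, so the loss
    penalizes the model less harshly on examples whose labels may be wrong.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, epsilon / num_classes)
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - epsilon + epsilon / num_classes)
    return -(smooth * log_probs).sum(dim=-1).mean()

# Hypothetical usage: a batch of 4 face crops scored over 7 emotion classes.
logits = torch.randn(4, 7)            # stand-in for one CNN's output
targets = torch.tensor([0, 3, 6, 2])  # labels from a (possibly noisy) dataset
loss = smoothed_cross_entropy(logits, targets)

In an ensemble setting such as the one described, a loss of this form would typically be applied to each network during training, with the final prediction obtained by averaging the networks' softmax outputs at inference time.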


2013 ◽  
Vol 46 (4) ◽  
pp. 992-1006 ◽  
Author(s):  
Sally Olderbak ◽  
Andrea Hildebrandt ◽  
Thomas Pinkpank ◽  
Werner Sommer ◽  
Oliver Wilhelm

Author(s):  
Simona Prosen ◽  
Vesna Geršak ◽  
Helena Smrtnik Vitulić

The study focuses on students' emotion expression during geometry teaching that included creative movement (experimental group, EG) and teaching without it (control group, CG). The sample (N = 104) was made up of primary school (second-grade) students: 66 were assigned to the EG and 38 to the CG. Of these, 12 students from the EG and 8 from the CG were randomly selected for observation of emotion: type, intensity, triggering situation, and response of others. For the observed students, the intensity of emotion expression was also measured with the facial expression recognition software FaceReader. All of the students self-assessed their contentedness with the teaching. The students in the EG and the CG expressed various emotions, with joy being the most prevalent, followed by anger. The most frequent situations triggering joy were activities in the EG and the CG. The intensity of joy was higher in the EG than in the CG when assessed by observation, but there was no significant difference when assessed by FaceReader. The intensity of anger expression was at a similar level in both groups. Both students and teachers responded to students' expressions of joy, but only the students responded to expressions of anger in the EG and the CG. The students in both groups expressed a high level of contentedness with the teaching. Key words: creative movement; emotion expression; intensity of emotions; students; teaching method.
