Study of Impact of Computer Vision in Detecting Human Emotions

Author(s):  
Ravindra Kumar ◽  

Emotions play a powerful role in people's thinking and behavior. They prompt action and influence everyday decisions. Human facial expressions suggest that people share a common set of emotions, and from this observation the concept of emotion-sensing facial recognition emerged. Researchers have been working actively on computer vision algorithms that determine an individual's emotions and infer the set of intentions that accompany them. Emotion-sensing facial expression systems are built with data-centric machine learning techniques and accomplish their task by identifying an emotion together with the set of intentions associated with it.

2019 ◽  
Vol 9 (21) ◽  
pp. 4542 ◽  
Author(s):  
Marco Leo ◽  
Pierluigi Carcagnì ◽  
Cosimo Distante ◽  
Pier Luigi Mazzeo ◽  
Paolo Spagnolo ◽  
...  

The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and get quick and objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production and most of the scientific literature aims at the easier task of recognizing if either a facial expression is present or not. Some attempts to face this challenging task exist but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children’s ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a homogeneous (in terms of cognitive capabilities) group of individuals. To fill this gap, in this paper some advanced computer vision and machine learning strategies are integrated into a framework aimed to computationally analyze how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual ability to produce facial expressions. Gathered computational outcomes have been correlated with the evaluation provided by psychologists and evidence has been given that shows how the proposed framework could be effectively exploited to deeply analyze the emotional competence of ASD children to produce facial expressions.
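The framework itself is not reproduced here, but the core idea of tracking facial landmarks as "virtual sensors" can be sketched. The snippet below is a minimal illustration only: it assumes MediaPipe Face Mesh as the landmark detector (not necessarily the authors' choice) and uses the mean per-frame landmark displacement as a crude proxy for facial muscle movement.

```python
# Minimal sketch of landmark-based "virtual sensor" tracking (not the authors' code).
# MediaPipe Face Mesh is assumed as the landmark detector; the paper's pipeline may
# use a different detector and a more elaborate fusion model.
import cv2
import numpy as np
import mediapipe as mp

def landmark_displacements(video_path):
    """Return per-frame mean landmark displacement as a rough proxy
    for facial muscle movement during expression production."""
    cap = cv2.VideoCapture(video_path)
    prev = None
    movement = []
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                         max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            pts = np.array([(lm.x, lm.y) for lm
                            in result.multi_face_landmarks[0].landmark])
            if prev is not None:
                movement.append(np.linalg.norm(pts - prev, axis=1).mean())
            prev = pts
    cap.release()
    return np.array(movement)
```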


2018 ◽  
Author(s):  
Nathaniel Haines ◽  
Matthew W. Southward ◽  
Jennifer S. Cheavens ◽  
Theodore Beauchaine ◽  
Woo-Young Ahn

Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
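As a rough illustration of this kind of CVML analysis (not the authors' pipeline), the sketch below assumes facial action unit (AU) intensities have already been extracted into a feature table (the file name "au_features.csv" and the column names are placeholders) and fits a random-forest regressor, whose feature importances mirror the "which facial actions matter" question.

```python
# Hedged sketch: relate facial action unit (AU) features to affect intensity ratings.
# The CSV name, column names, and the random-forest model are illustrative assumptions,
# not the exact method used in the paper.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("au_features.csv")          # one row per video: AU intensity columns + ratings
X = df.filter(like="AU")                     # AU intensity features
y = df["positive_affect_rating"]             # human-coded affect intensity

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
# Which facial actions drive the predicted ratings (cf. facial actions used by coders)
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```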


2021 ◽  
Author(s):  
Harisu Abdullahi Shehu ◽  
William Browne ◽  
Hedwig Eisenbarth

Emotion categorization is the process of identifying different emotions in humans based on their facial expressions. It is time-consuming, and human classifiers often find it hard to agree on the emotion category of a facial expression. However, machine learning classifiers have performed well in classifying different emotions and have been widely used in recent years to facilitate the task of emotion categorization. Much research on emotion video databases classifies emotion using only a few frames taken at the peak of the expression, which may not give good classification accuracy on frames where the emotion is less intense. In this paper, using the CK+ emotion dataset as an example, we use additional frames to analyze emotion from both mid and peak frame images and compare our results to a method using fewer peak frames. Furthermore, we propose an approach based on sequential voting and apply it to more frames of the CK+ database. Our approach achieves up to 85.9% accuracy on the mid frames and an overall accuracy of 96.5% on the CK+ database, compared with 73.4% and 93.8% for existing techniques.
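The abstract does not spell out the exact voting rule, but one plausible reading of frame-level voting is sketched below: classify each frame of a sequence and let the frames vote on the final label, breaking ties in favour of later (closer-to-peak) frames. The per-frame classifier is a stand-in, not the paper's model.

```python
# Illustrative frame-level voting over a CK+-style image sequence.
# `classify_frame` stands in for any per-frame emotion classifier; the tie-break rule
# is an assumption, not necessarily the one used in the paper.
from collections import Counter

def vote_sequence(frames, classify_frame) -> str:
    """Predict one emotion for an image sequence by majority vote over per-frame
    predictions; ties go to the label seen latest (closest to the peak frame)."""
    labels = [classify_frame(f) for f in frames]
    counts = Counter(labels)
    best = max(counts.values())
    tied = {lab for lab, c in counts.items() if c == best}
    for lab in reversed(labels):
        if lab in tied:
            return lab

# Toy usage with a dummy classifier; a real CK+ pipeline would pass a trained model.
frames = ["f1", "f2", "f3", "f4", "f5"]
dummy = dict(zip(frames, ["neutral", "happy", "happy", "surprise", "happy"]))
print(vote_sequence(frames, dummy.get))      # -> "happy"
```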


Facial expressions convey non-verbal cues that play an important role in interpersonal relationships. Although people perceive facial expressions virtually instantaneously, reliable expression recognition by machine is still a challenge. From the point of view of automatic recognition, a facial expression may involve the configurations of the facial parts and their spatial relationships, or changes in the pigmentation of the face. The study of automatic facial recognition addresses issues relating to the static or dynamic qualities of such deformation or facial pigmentation. A camera is used to capture live images of people with autism.
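A minimal live-capture loop of the kind implied here is sketched below with OpenCV; the camera index, frame count, and output directory are assumptions for illustration only.

```python
# Minimal live-capture sketch with OpenCV; camera index 0, the 100-frame limit,
# and the output directory are placeholder choices.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture(0)                    # default webcam
count = 0
while count < 100:                           # grab a fixed number of frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"frames/frame_{count:04d}.png", frame)
    count += 1
cap.release()
```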


2021 ◽  
Vol 5 (12) ◽  
pp. 63-68
Author(s):  
Jun Mao

The classroom is an important environment for communication in teaching. Therefore, both schools and society should pay more attention to it. However, in the traditional classroom there is a relative lack of communication and exchange. Facial expression recognition is a branch of facial recognition technology with high precision. Even in large teaching scenes, it can capture changes in students' facial expressions and accurately analyze their concentration. This paper expounds the concept of this technology and studies the evaluation of classroom teaching effects based on facial expression recognition.


Author(s):  
Ramadan TH. Hasan ◽  
Amira Bibo Sallow ◽  

OpenCV, originally developed by Intel, is a free and open-source image and video processing library. It is widely used for computer vision tasks, such as feature and object recognition, and for machine learning. This paper presents the main OpenCV modules and features, with a focus on OpenCV in Python. The paper also presents common OpenCV applications, such as image processing, face detection, face recognition, and object detection, and the classifiers used in them. Finally, we discuss literature reviews of OpenCV applications in computer vision, including face detection and recognition, recognition of facial expressions such as sadness, anger, and happiness, and recognition of a person's gender and age.
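To make the face-detection use case concrete, a short sketch of OpenCV's bundled Haar cascade face detector in Python follows; the input image name is a placeholder.

```python
# Face detection with OpenCV's bundled Haar cascade classifier (Python API).
# "group_photo.jpg" is a placeholder input image.
import cv2

img = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", img)
print(f"Detected {len(faces)} face(s)")
```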


2021 ◽  
Vol 11 (15) ◽  
pp. 6827
Author(s):  
João Almeida ◽  
Luís Vilaça ◽  
Inês N. Teixeira ◽  
Paula Viana

Understanding how acting builds the emotional bond between spectators and films is essential to depict how humans interact with this rapidly growing digital medium. In recent decades, the research community has made promising progress in developing facial expression recognition (FER) methods. However, little emphasis has been put on cinematographic content, which is complex by nature due to the visual techniques used to convey the desired emotions. Our work represents a step towards emotion identification in cinema through the analysis of facial expressions. We present a comprehensive overview of the most relevant datasets used for FER, highlighting problems caused by their heterogeneity and by the absence of a universal model of emotions. Built upon this understanding, we evaluate these datasets with standard image classification models to analyze the feasibility of using facial expressions to determine the emotional charge of a film. To cope with the lack of datasets for the scope under analysis, we demonstrate the feasibility of using a generic dataset for the training process and propose a new way to look at emotions by creating clusters of emotions based on the evidence obtained in the experiments.
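One way to read "evaluating datasets with standard image classification models" is sketched below: fine-tuning an off-the-shelf torchvision classifier on a generic FER dataset arranged as one folder per emotion class. The dataset path, class count, and hyperparameters are placeholders, not the authors' configuration.

```python
# Sketch: fine-tune a standard image classifier on a generic FER dataset laid out as
# one folder per emotion class. Paths and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("fer_dataset/train", transform=tfm)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # e.g., 7 emotions

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                       # short run for illustration only
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```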


10.29007/v16j ◽  
2019 ◽  
Author(s):  
Daichi Naito ◽  
Ryo Hatano ◽  
Hiroyuki Nishiyama

Careless driving is the most common cause of traffic accidents. Being in a drowsy state is one cause of careless driving and can lead to serious accidents. Therefore, in this study, we focus on predicting drowsy driving. Existing studies on the prediction of drowsy driving focus on the prediction aspect only. However, users have various demands, such as not wanting to wear a device while driving, and it is necessary to consider such demands when introducing a prediction system. Hence, our purpose is to predict drowsy driving in a way that can respond to a user's demands by combining two approaches: electroencephalogram (EEG) signals and facial expressions. Our method is divided into three parts by type of data (facial expressions, EEG, and both), and users can select the one suited to their demands. We acquire data with a depth camera and an electroencephalograph and build a machine-learning model to predict drowsy driving. As a result, prediction accuracy increases in the order of facial expressions alone, EEG alone, and both combined. Our framework may be applicable to data other than EEG and facial expressions.
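The "three parts by type of data" idea can be illustrated by training one classifier per feature set so that a user can pick the variant matching their constraints. The sketch below uses synthetic placeholder features and a random-forest classifier purely for illustration; it is not the study's model or data.

```python
# Sketch of the "choose your sensors" idea: one drowsiness classifier per feature set.
# X_face, X_eeg, and y are synthetic placeholders; the random forest is an assumed
# model, not necessarily the one used in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_face = rng.normal(size=(500, 20))          # e.g., facial landmark / expression features
X_eeg = rng.normal(size=(500, 32))           # e.g., EEG band-power features
y = rng.integers(0, 2, size=500)             # 1 = drowsy, 0 = alert (synthetic labels)

variants = {
    "facial only": X_face,
    "EEG only": X_eeg,
    "both combined": np.hstack([X_face, X_eeg]),
}
for name, X in variants.items():
    score = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
    print(f"{name}: CV accuracy = {score:.3f}")
```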


Author(s):  
Hyunwoong Ko ◽  
Kisun Kim ◽  
Minju Bae ◽  
Myo-Geong Seo ◽  
Gieun Nam ◽  
...  

The ability to express and recognize emotion via facial expressions is well known to change with age. The present study investigated differences in facial recognition and facial expression between the elderly (n = 57) and the young (n = 115) and measured how each group uses different facial muscles for each emotion with the Facial Action Coding System (FACS). In the facial recognition task, the elderly did not recognize facial expressions better than young people and reported stronger feelings of fear and sadness from the photographs. In the facial expression task, the elderly rated all of their facial expressions as stronger than the young did but, in fact, expressed strong expressions for fear and anger. Furthermore, the elderly used more muscles in the lower face when making facial expressions than younger people. These results help us better understand how facial recognition and expression change in the elderly and show that the elderly do not effectively execute top-down processing of facial expressions.


Research ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Meiqi Zhuang ◽  
Lang Yin ◽  
Youhua Wang ◽  
Yunzhao Bai ◽  
Jian Zhan ◽  
...  

Facial expressions mirror the elusive emotions hidden in the mind, and thus capturing expressions is a crucial way of merging the inward world and the virtual world. However, typical facial expression recognition (FER) systems are restricted either by environments in which faces must be clearly visible to computer vision or by rigid devices that are not suited to time-dynamic, curvilinear faces. Here, we present a robust, highly wearable FER system based on deep-learning-assisted, soft epidermal electronics. The epidermal electronics fully conform to the face and enable high-fidelity biosignal acquisition without hindering spontaneous facial expressions, releasing the constraints of movement, space, and light. The deep learning method significantly enhances the recognition accuracy of facial expression types and intensities from a small sample. The proposed wearable FER system offers wide applicability and high accuracy. It is suited to individual users and shows strong robustness to different lighting, occlusion, and various face poses. It differs from, but is complementary to, computer vision technology, which is only suitable for simultaneous FER of multiple individuals in a fixed place. This wearable FER system is successfully applied to human-avatar emotion interaction and verbal communication disambiguation in a real-life environment, enabling promising human-computer interaction applications.
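A deep learning model for such a system would classify expressions from windowed, multichannel biosignals rather than from images. The sketch below is a small, generic 1-D CNN for that kind of input; the channel count, window length, sampling rate, class count, and architecture are all assumptions, not the network described in the paper.

```python
# Illustrative 1-D CNN for classifying expression type from windowed, multichannel
# epidermal biosignals. Channel count, window length, class count, and architecture
# are assumptions, not the network described in the paper.
import torch
import torch.nn as nn

class BiosignalFERNet(nn.Module):
    def __init__(self, channels=4, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Toy forward pass on a batch of 1-second windows at an assumed 500 Hz sampling rate.
model = BiosignalFERNet()
dummy = torch.randn(8, 4, 500)
print(model(dummy).shape)                    # -> torch.Size([8, 6])
```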

