Computational Analysis of Deep Visual Data for Quantifying Facial Expression Production

2019 ◽  
Vol 9 (21) ◽  
pp. 4542 ◽  
Author(s):  
Marco Leo ◽  
Pierluigi Carcagnì ◽  
Cosimo Distante ◽  
Pier Luigi Mazzeo ◽  
Paolo Spagnolo ◽  
...  

The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick, objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature addresses the easier task of recognizing whether a facial expression is present or not. Some attempts to face this challenging task exist, but they do not provide a comprehensive study comparing human and automatic outcomes in quantifying children's ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning, and they generally focus only on a group of individuals that is homogeneous in terms of cognitive capabilities. To fill this gap, this paper integrates advanced computer vision and machine learning strategies into a framework for computationally analyzing how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) in order to monitor the facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual's ability to produce facial expressions. The gathered computational outcomes were correlated with evaluations provided by psychologists, and evidence is given that the proposed framework could be effectively exploited to analyze in depth the emotional competence of ASD children in producing facial expressions.
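The abstract does not give the fusion model itself, but the "virtual EMG sensor" idea can be read as measuring per-region landmark displacement across frames as a proxy for muscle activity. The function and region indexing below are a hypothetical sketch, not the authors' implementation:

```python
import numpy as np

def virtual_emg_signal(landmarks, region_indices):
    """Frame-to-frame displacement of a group of facial landmarks.

    landmarks: array of shape (n_frames, n_landmarks, 2) with (x, y)
    coordinates, assumed already normalised for head pose; region_indices
    selects the landmarks covering one facial muscle group (e.g. the
    mouth). Returns one scalar per frame transition, a rough proxy for
    the activity a surface EMG sensor over that region might pick up.
    """
    region = landmarks[:, region_indices, :]          # (n_frames, k, 2)
    disp = np.diff(region, axis=0)                    # per-frame motion
    return np.linalg.norm(disp, axis=2).mean(axis=1)  # (n_frames - 1,)

# Toy sequence: 3 frames, 4 landmarks; only landmark 0 moves 1 px per frame.
seq = np.zeros((3, 4, 2))
seq[1, 0, 0] = 1.0
seq[2, 0, 0] = 2.0
signal = virtual_emg_signal(seq, region_indices=[0, 1])
# landmark 0 moves 1 px, landmark 1 stays still, so mean displacement is 0.5
```

A real pipeline would feed such per-region signals into the fusion step that models an individual's production ability.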

Author(s):  
Ravindra Kumar ◽  

Emotions play a powerful role in people's thinking and behavior. They compel people to act and can influence everyday decisions. Human facial expressions show that humans share the same set of emotions, and from this observation the concept of emotion-sensing facial recognition emerged. Researchers have been working actively on computer vision algorithms that determine the emotions of an individual and infer the set of intentions that accompany those emotions. Emotion-sensing facial expression systems are designed using data-centric machine learning and achieve their goal by identifying an emotion and the set of intentions related to it.


2018 ◽  
Author(s):  
Nathaniel Haines ◽  
Matthew W. Southward ◽  
Jennifer S. Cheavens ◽  
Theodore Beauchaine ◽  
Woo-Young Ahn

Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
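As a hedged sketch of the CVML idea, a tree-ensemble regressor trained on facial action unit (AU) intensities can both predict affect ratings and expose which actions matter via feature importances. The AU layout and rating model below are synthetic, invented only for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for per-video AU intensities: 500 clips x 6 AUs.
# Suppose only AU 0 and AU 2 drive the (hypothetical) positive-affect
# intensity ratings given by trained coders.
X = rng.random((500, 6))
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + 0.1 * rng.standard_normal(500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importances = model.feature_importances_
# The two informative AUs should dominate the ranking, mirroring how
# CVML can reveal which facial actions human coders rely on.
top_two = set(np.argsort(importances)[-2:])
```

The same fitted model can then score new clips automatically, which is the second use the authors describe.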


2021 ◽  
Author(s):  
Harisu Abdullahi Shehu ◽  
William Browne ◽  
Hedwig Eisenbarth

Emotion categorization is the process of identifying different emotions in humans based on their facial expressions. It takes time, and human classifiers sometimes find it hard to agree with each other about the emotion category of a facial expression. Machine learning classifiers, however, have performed well in classifying different emotions and have been widely used in recent years to facilitate the task of emotion categorization. Much research on emotion video databases classifies emotion using a few frames from when the emotion is expressed at its peak, which might not give good classification accuracy when predicting frames where the emotion is less intense. In this paper, using the CK+ emotion dataset as an example, we use more frames to analyze emotion from mid and peak frame images and compare our results to a method using fewer peak frames. Furthermore, we propose an approach based on sequential voting and apply it to more frames of the CK+ database. Our approach achieves up to 85.9% accuracy on the mid frames and an overall accuracy of 96.5% on the CK+ database, compared with accuracies of 73.4% and 93.8% from existing techniques.
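The abstract does not spell out the sequential voting rule; one plausible reading is a running tally over per-frame predictions with a recency-based tie-break, sketched below (the tie-break choice is our assumption, not the paper's):

```python
from collections import Counter

def sequential_vote(frame_predictions):
    """Clip-level label from per-frame emotion predictions.

    Keeps a running tally over the sequence; the winner is the most
    frequent label, with ties broken in favour of the label seen most
    recently (later frames tend to be closer to the expression peak).
    """
    tally = Counter()
    last_seen = {}
    for i, label in enumerate(frame_predictions):
        tally[label] += 1
        last_seen[label] = i
    return max(tally, key=lambda lab: (tally[lab], last_seen[lab]))

# Mid frames are ambiguous, peak frames agree: the clip is labelled "happy".
clip = ["neutral", "neutral", "happy", "happy", "happy"]
```

Using more frames per clip this way lets weaker mid-frame predictions be outvoted by the more confident peak frames.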


Author(s):  
Kamal Naina Soni

Abstract: Human expressions play an important role in extracting an individual's emotional state. They help in determining the current state and mood of an individual by extracting and understanding the emotion the individual shows through various features of the face, such as the eyes, cheeks, forehead, or even the curve of the smile. A survey confirmed that people use music as a form of expression and often relate a particular piece of music to their emotions. Considering how music impacts the human brain and body, our project extracts the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help lift the user's mood or simply calm the individual, and it retrieves suitable songs more quickly, saving the time spent looking up different songs. In parallel, we develop software that can be used anywhere, providing the functionality of playing music according to the detected emotion.
Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
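A minimal sketch of the recommendation step, assuming the emotion label has already been produced by the vision model; the playlist names and emotion labels below are placeholders, not from the paper:

```python
# Hypothetical mapping from a detected emotion label to playlists.
# The recognition step (camera -> emotion label) is assumed to be handled
# by a separate computer-vision model; only the lookup is sketched here.
PLAYLISTS = {
    "happy":   ["Upbeat Pop", "Dance Hits"],
    "sad":     ["Soft Acoustic", "Rainy Day"],
    "angry":   ["Calming Piano"],
    "neutral": ["Daily Mix"],
}

def recommend(emotion, fallback="neutral"):
    """Return playlists for the detected emotion, with a safe default
    for labels the mapping does not cover."""
    return PLAYLISTS.get(emotion, PLAYLISTS[fallback])
```

The fallback keeps the player usable even when the detector emits a label outside the curated mapping.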


Author(s):  
Ramadan TH. Hasan ◽  
Amira Bibo Sallow ◽  

Intel's OpenCV is a free and open-source image- and video-processing library used for computer vision tasks such as feature and object recognition and machine learning. This paper presents the main OpenCV modules and features, with a focus on OpenCV with Python. The paper also presents common OpenCV applications and the classifiers used in them, including image processing, face detection, face recognition, and object detection. Finally, we discuss some literature reviews of OpenCV applications in computer vision, such as face detection and recognition, recognition of facial expressions such as sadness, anger, and happiness, and recognition of a person's gender and age.


10.29007/v16j ◽  
2019 ◽  
Author(s):  
Daichi Naito ◽  
Ryo Hatano ◽  
Hiroyuki Nishiyama

Careless driving is the most common cause of traffic accidents, and drowsiness is one cause of careless driving that can lead to a serious accident. Therefore, in this study, we focus on predicting drowsy driving. Previous studies on the prediction of drowsy driving focus on the prediction aspect only. However, users have various demands, such as not wanting to wear a device while driving, and it is necessary to consider such demands when introducing a prediction system. Hence, our purpose is to predict drowsy driving in a way that can respond to users' demands by combining two approaches: electroencephalogram (EEG) signals and facial expressions. Our method is divided into three parts by type of data (facial expressions, EEG, and both), and users can select the one that suits their demands. We acquire data with a depth camera and an electroencephalograph and build a machine-learning model to predict drowsy driving. As a result, prediction accuracy increases in the order: facial expressions alone, then EEG alone, then both combined. Our framework may also be applicable to data other than EEG and facial expressions.
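The three-way split by data type can be sketched as training one model per feature set and letting the user pick. The synthetic features and logistic regression below are stand-ins for the authors' actual depth-camera and EEG pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins for the two modalities: per-window facial features
# (from a depth camera) and EEG band-power features. Label 1 = drowsy.
n = 400
face = rng.standard_normal((n, 5))
eeg = rng.standard_normal((n, 8))
y = (face[:, 0] + eeg[:, 0] > 0).astype(int)  # drowsiness shows in both

# One model per user demand: camera only, headset only, or both combined.
feature_sets = {"face": face, "eeg": eeg, "both": np.hstack([face, eeg])}
models = {name: LogisticRegression().fit(X, y) for name, X in feature_sets.items()}
scores = {name: models[name].score(feature_sets[name], y) for name in models}
# Expect the combined model to match or beat either single modality.
```

Since the label here depends on both modalities, the combined model sees the full signal, which mirrors the paper's ordering of prediction quality.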


Research ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Meiqi Zhuang ◽  
Lang Yin ◽  
Youhua Wang ◽  
Yunzhao Bai ◽  
Jian Zhan ◽  
...  

Facial expressions are a mirror of the elusive emotions hidden in the mind, and thus capturing expressions is a crucial way of merging the inner world and the virtual world. However, typical facial expression recognition (FER) systems are restricted to environments where faces can be clearly seen by computer vision, or rely on rigid devices that are not suitable for time-dynamic, curvilinear faces. Here, we present a robust, highly wearable FER system based on deep-learning-assisted, soft epidermal electronics. The epidermal electronics, which fully conform to the face, enable high-fidelity biosignal acquisition without hindering spontaneous facial expressions, releasing the constraints of movement, space, and light. The deep learning method significantly enhances the recognition accuracy of facial expression types and intensities from a small sample. The proposed wearable FER system offers wide applicability and high accuracy. It is suited to a single individual and shows essential robustness to different lighting, occlusion, and various face poses. It is thus different from, but complementary to, computer vision technology, which is suitable for simultaneous FER of multiple individuals in a specific place. This wearable FER system is successfully applied to human-avatar emotion interaction and verbal communication disambiguation in a real-life environment, enabling promising human-computer interaction applications.
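The paper's deep-learning pipeline and sensor data are not reproduced here; as a hedged stand-in, the sketch below classifies synthetic sEMG-like windows with hand-crafted features, just to illustrate the small-sample, biosignal-window setting the epidermal electronics produce:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical epidermal-sensor windows: assume each expression intensity
# produces a different RMS level in a skin-conformal sEMG channel.
def make_windows(rms, n_windows, length=100):
    return rms * rng.standard_normal((n_windows, length))

X_raw = np.vstack([make_windows(0.5, 30), make_windows(2.0, 30)])
y = np.array([0] * 30 + [1] * 30)  # 0 = subtle expression, 1 = strong one

# On small samples, simple hand-crafted features (RMS, peak amplitude)
# already separate these toy classes; the paper's deep network targets
# harder, multi-class, multi-channel versions of this problem.
feats = np.column_stack([
    np.sqrt((X_raw ** 2).mean(axis=1)),  # RMS per window
    np.abs(X_raw).max(axis=1),           # peak amplitude per window
])
clf = SVC().fit(feats, y)
acc = clf.score(feats, y)
```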



2017 ◽  
Vol 2 (2) ◽  
pp. 130-134
Author(s):  
Jarot Dwi Prasetyo ◽  
Zaehol Fatah ◽  
Taufik Saleh

In recent years, interest has grown in the interaction between humans and computers. Facial expressions play a fundamental role in social interaction with other humans: in human-to-human communication, only 7% of the message is conveyed by the linguistic content, 38% by paralanguage, and 55% through facial expressions. Therefore, to make the human-machine interface of multimedia products friendlier, facial expression recognition in the interface greatly improves the comfort of interaction. One of the steps that affects facial expression recognition is the accuracy of facial feature extraction. Several approaches to facial expression recognition do not consider the dimensionality of the data used as input features for machine learning. This research therefore proposes a wavelet algorithm to reduce the dimensionality of the feature data. The features are then classified using a multiclass SVM to distinguish six facial expressions, namely anger, hatred, fear, happiness, sadness, and surprise, found in the JAFFE database. The resulting classification achieved 81.42% on 208 data samples.
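A sketch of the proposed pipeline under stated assumptions: a one-level Haar wavelet transform, keeping only the low-pass half, reduces feature dimensionality before a multiclass SVM. The synthetic six-class data below stands in for the JAFFE features, and the single-level Haar choice is ours:

```python
import numpy as np
from sklearn.svm import SVC

def haar_reduce(x):
    """One level of the Haar wavelet transform, keeping only the
    approximation (low-pass) coefficients; halves the feature length."""
    x = np.asarray(x, dtype=float)
    return (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2.0)

rng = np.random.default_rng(3)

# Synthetic stand-in for extracted facial features: 6 expression classes,
# 20 samples each, 64 features with a class-dependent mean.
n_classes, n_per, dim = 6, 20, 64
means = rng.standard_normal((n_classes, dim)) * 3
X = np.vstack([means[c] + rng.standard_normal((n_per, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

X_red = haar_reduce(X)        # 64 -> 32 features
clf = SVC().fit(X_red, y)     # sklearn's SVC is multiclass (one-vs-one)
acc = clf.score(X_red, y)
```

Discarding the high-pass detail coefficients is what makes the wavelet step a dimensionality reduction rather than a lossless change of basis.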


2021 ◽  
Vol 03 (02) ◽  
pp. 204-208
Author(s):  
Ielaf O. Abdul-Majjed DAHL

In the past decade, the field of facial expression recognition has attracted the attention of scientists and plays an important role in enhancing interaction between humans and computers. Facial expression recognition is not a simple machine-learning problem, because an individual's expression differs from person to person and varies with context, background, and lighting. The objective of the current work was to attain the highest classification rate with computer vision algorithms for two facial expressions ("happy" and "sad"). This was accomplished through several phases, starting from image pre-processing through Gabor filter extraction, whose output was then used to extract important characteristics with mutual information. The expression was finally recognized by a support vector classifier. The system was trained and tested on the Cohn-Kanade and JAFFE databases, achieving rates of 81.09% and 92.85%, respectively.
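The Gabor extraction phase can be illustrated by building the filter kernels directly; the kernel parameters below are illustrative defaults, not the values used in the paper:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor kernel: a Gaussian-windowed sinusoid tuned to
    orientation `theta` and wavelength `lam`, the kind of texture probe
    used to pick up wrinkles and mouth/eye contours in FER pipelines."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# A bank over several orientations gives one feature map per filter when
# convolved with the face image; mutual information with the class label
# can then prune those maps before the support vector classifier.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

OpenCV's `cv2.getGaborKernel` provides an equivalent ready-made kernel; the explicit construction just shows what the filter responds to.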

