Brain-Inspired Deep Networks for Facial Expression Recognition

Author(s):  
Nafiseh Zeinali ◽  
Karim Faez ◽  
Sahar Seifzadeh

Purpose: A persistent problem in deep-learning face recognition research is the reliance on small, self-collected data sets, which forces researchers to work with duplicated, pre-existing data. In this research, we try to resolve this problem and reach high accuracy. Materials and Methods: The goal of the current study is to identify facial expressions in an image or a sequence of images, covering ten facial expressions. Given the increasing use of deep learning in recent years, we use convolutional networks and, most importantly, the concept of transfer learning, which led us to pre-trained networks for training our own. Results: One way to improve accuracy when working with small data sets in deep learning is to use pre-trained networks. Because of the small size of the data set, we applied data-augmentation techniques and eventually tripled the data size. These techniques include rotating 10 degrees to the left and right and, finally, elastic transformation. We also applied a deep ResNet to existing public facial-expression data sets with this augmentation. Conclusion: We observed a seven percent increase in accuracy compared with the highest accuracy reported in previous work on the considered data set.
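The augmentation strategy the abstract describes (small rotations left and right plus an elastic transformation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `alpha`/`sigma` elastic-deformation parameters and the Simard-style displacement-field recipe are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, gaussian_filter, map_coordinates

def augment(image, alpha=34.0, sigma=4.0, seed=0):
    """Return the original image plus three augmented copies:
    rotations of +/-10 degrees and one elastic deformation,
    mirroring the augmentation steps described in the abstract."""
    rng = np.random.default_rng(seed)
    rot_left = rotate(image, angle=10, reshape=False, mode="nearest")
    rot_right = rotate(image, angle=-10, reshape=False, mode="nearest")

    # Elastic deformation: smooth a random displacement field with a
    # Gaussian, then resample the image along the displaced grid.
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(image.shape[0]),
                       np.arange(image.shape[1]), indexing="ij")
    elastic = map_coordinates(image, [y + dy, x + dx],
                              order=1, mode="nearest")
    return [image, rot_left, rot_right, elastic]
```

Each augmented copy keeps the original image shape, so labels carry over unchanged.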

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2166
Author(s):  
Geesung Oh ◽  
Junghwan Ryu ◽  
Euiseok Jeong ◽  
Ji Hyun Yang ◽  
Sungwook Hwang ◽  
...  

In intelligent vehicles, it is essential to monitor the driver’s condition; however, recognizing the driver’s emotional state is one of the most challenging and important tasks. Most previous studies focused on facial expression recognition to monitor the driver’s emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose a deep learning-based driver’s real emotion recognizer (DRER), an algorithm to recognize drivers’ real emotions that cannot be completely identified from their facial expressions. The proposed algorithm comprises two models: (i) a facial expression recognition model, which follows a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver’s real emotional state. We categorized the driver’s emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase compared to using only electrodermal activity. In conclusion, our proposed method achieves 86.8% accuracy in recognizing the driver’s induced emotion while driving.
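The sensor-fusion step can be sketched as a late-fusion classifier: concatenate the facial-expression model's class probabilities with features derived from the electrodermal-activity signal, then classify the joint vector. This is a minimal sketch of the general technique; the weights `W`, `b` and the feature layout are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(face_probs, eda_features, W, b):
    """Late fusion: join the facial-expression probabilities with
    electrodermal-activity features and apply a linear classifier."""
    joint = np.concatenate([face_probs, eda_features])
    return softmax(W @ joint + b)
```

In practice the fusion layer would be trained jointly with (or on top of) the two upstream models, so that the classifier learns when to trust the face and when to trust the physiological signal.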


Author(s):  
Sharmeen M. Saleem Abdullah ◽  
Adnan Mohsin Abdulazeez

Facial emotional processing is one of the most important activities in affective computing, human-computer interaction, machine vision, video-game testing, and consumer research. Facial expressions are a form of nonverbal communication, as they reveal a person's inner feelings and emotions. Facial Expression Recognition (FER) has recently received extensive attention because facial expressions are considered the fastest communication medium for any kind of information. Facial expression recognition gives a better understanding of a person's thoughts or views, and analyzing them with currently trending deep learning methods raises the accuracy rate sharply compared to traditional state-of-the-art systems. This article provides a brief overview of the different fields of application of FER and the publicly accessible databases used in FER, and surveys the latest reviews of FER using Convolutional Neural Network (CNN) algorithms. Finally, it is observed that all reviewed works reached good results, especially in terms of accuracy, with different rates and on different data sets, which impacts the results.


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4047-4051

The automatic detection of facial expressions is an active research topic, given its wide range of applications in human-computer interaction, games, security, and education. However, most recent studies have been conducted in controlled laboratory environments, which do not reflect real-world scenarios. For that reason, a real-time Facial Expression Recognition System (FERS) is proposed in this paper, in which a deep learning approach is applied to enhance the detection of six basic emotions: happiness, sadness, anger, disgust, fear, and surprise in a real-time video stream. The system is composed of three main components: face detection, face preparation, and facial expression classification. The proposed FERS achieves 65% accuracy, trained over 35,558 face images.
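The three-stage pipeline named in the abstract (detection, preparation, classification) can be sketched as a per-frame driver function. The stage callables below are placeholders for a real detector (e.g. a Haar cascade), preprocessing (crop, resize, normalize), and a trained CNN; their names and signatures are illustrative assumptions.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "disgust", "fear", "surprise"]

def run_fers_frame(frame, detect_face, prepare_face, classify):
    """Pass one video frame through the three FERS stages:
    detection -> preparation -> classification."""
    box = detect_face(frame)
    if box is None:
        return None                      # no face found in this frame
    face = prepare_face(frame, box)      # crop / resize / normalize
    scores = classify(face)              # per-emotion scores from a CNN
    return EMOTIONS[int(np.argmax(scores))]
```

Structuring the stages as interchangeable callables makes it easy to swap detectors or classifiers while keeping the real-time loop unchanged.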


2020 ◽  
Vol 496 (3) ◽  
pp. 3553-3571
Author(s):  
Benjamin E Stahl ◽  
Jorge Martínez-Palomera ◽  
WeiKang Zheng ◽  
Thomas de Jaeger ◽  
Alexei V Filippenko ◽  
...  

ABSTRACT We present deepSIP (deep learning of Supernova Ia Parameters), a software package for measuring the phase and – for the first time using deep learning – the light-curve shape of a Type Ia supernova (SN Ia) from an optical spectrum. At its core, deepSIP consists of three convolutional neural networks trained on a substantial fraction of all publicly available low-redshift SN Ia optical spectra, on to which we have carefully coupled photometrically derived quantities. We describe the accumulation of our spectroscopic and photometric data sets, the cuts taken to ensure quality, and our standardized technique for fitting light curves. These considerations yield a compilation of 2754 spectra with photometrically characterized phases and light-curve shapes. Though such a sample is significant in the SN community, it is small by deep-learning standards where networks routinely have millions or even billions of free parameters. We therefore introduce a data-augmentation strategy that meaningfully increases the size of the subset we allocate for training while prioritizing model robustness and telescope agnosticism. We demonstrate the effectiveness of our models by deploying them on a sample unseen during training and hyperparameter selection, finding that Model I identifies spectra that have a phase between −10 and 18 d and light-curve shape, parametrized by Δm15, between 0.85 and 1.55 mag with an accuracy of 94.6 per cent. For those spectra that do fall within the aforementioned region in phase–Δm15 space, Model II predicts phases with a root-mean-square error (RMSE) of 1.00 d and Model III predicts Δm15 values with an RMSE of 0.068 mag.
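The domain split the abstract describes (Model I flags spectra inside the region where Models II and III are reliable) reduces to a simple range check on the two target quantities. A minimal sketch of that check, using the bounds quoted in the abstract (this is not deepSIP's API, just the stated region):

```python
def in_model_domain(phase_days, dm15_mag):
    """True when a spectrum lies in the region the abstract reports
    Model I identifies with 94.6 per cent accuracy: phase between
    -10 and 18 d, and light-curve shape Delta m15 between
    0.85 and 1.55 mag."""
    return -10.0 <= phase_days <= 18.0 and 0.85 <= dm15_mag <= 1.55
```

Spectra passing this check would then be handed to the regression models, which the abstract reports predict phase to an RMSE of 1.00 d and Δm15 to an RMSE of 0.068 mag.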


Author(s):  
Kottilingam Kottursamy

The role of facial expression recognition in social science and human-computer interaction has received a lot of attention. Advances in deep learning have driven progress in this field beyond human-level accuracy. This article discusses various common deep learning algorithms for emotion recognition, utilising the eXnet library to achieve improved accuracy. Memory and computation constraints, however, have yet to be overcome, and overfitting is an issue with large models; one solution to this challenge is to reduce the generalization error. We employ a novel Convolutional Neural Network (CNN) named eXnet to construct a new CNN model utilising parallel feature extraction. The most recent eXnet (Expression Net) model reduces the previous model's error while having far fewer parameters. Long-established data augmentation techniques are utilized with the generalized eXnet, which employs effective ways to reduce overfitting while keeping the overall model size under control.
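Parallel feature extraction, the structural idea the article attributes to eXnet, means running several extractor branches on the same input and concatenating their outputs. A minimal sketch with plain numpy stand-ins for the convolutional branches (the branch functions here are assumptions for illustration, not eXnet's layers):

```python
import numpy as np

def parallel_features(x, branches):
    """Run each extractor branch on the same input and concatenate
    the resulting feature vectors, as in a parallel-branch CNN."""
    return np.concatenate([branch(x) for branch in branches])
```

In a real network each branch would be a stack of convolutions with its own receptive field, so the concatenated vector mixes features at several scales without deepening any single path.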


2019 ◽  
Vol 8 (2S11) ◽  
pp. 1076-1079

Automated facial expression recognition can greatly improve the human–machine interface. Many deep learning approaches have been applied in recent years owing to their outstanding recognition accuracy after training with large amounts of data. In this research, we enhanced a Convolutional Neural Network method to recognize six basic emotions and compared several pre-processing methods to show their influence on CNN performance. The pre-processing methods are resizing, mean, normalization, standard deviation, scaling, and edge detection. Face detection as a single pre-processing phase achieved a significant result, with 100% accuracy, compared with the other pre-processing phases and raw data.
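Two of the pre-processing steps compared above, mean/standard-deviation normalization and min-max scaling, can be sketched as follows. The epsilon guard against flat images is an implementation detail assumed here, not taken from the paper:

```python
import numpy as np

def standardize(img):
    """Shift to zero mean and unit variance (mean / standard-deviation
    normalization); a small epsilon guards against flat images."""
    return (img - img.mean()) / (img.std() + 1e-8)

def rescale(img):
    """Min-max scaling of pixel intensities into [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)
```

Both transforms are applied per image before the CNN, so the network sees inputs on a consistent scale regardless of lighting or exposure.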


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background The increased availability and usage of modern medical imaging induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the required functionalities for plain setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software, optimized on a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library with state-of-the-art deep learning models and model utilization like training, prediction, as well as fully automatic evaluation (e.g. cross-validation). Similarly, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we could show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline by using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
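The cross-validation that MIScnn automates for its pipelines can be sketched generically as a k-fold index split; this is an illustrative sketch of the evaluation scheme, not MIScnn's API:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train, validation) index arrays for k-fold
    cross-validation over n_samples shuffled samples."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Each of the k models is trained on k−1 folds and evaluated on the held-out fold, so every scan in the 300-scan data set contributes exactly once to validation.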

