Deep Neural Network Identifies Dynamic Facial Action Units from Image Sequences
2018, Vol 18 (10), pp. 606
Author(s): Tian Xu, Oliver Garrod, Chaona Chen, Rachael Jack, Philippe Schyns
IEEE Access
2019, Vol 7, pp. 150743-150756
Author(s): Yong Zhang, Yanbo Fan, Weiming Dong, Bao-Gang Hu, Qiang Ji

2019, Vol 8 (2S8), pp. 1317-1323

Facial action units (AUs) are activated by muscular activity when a human face shows an expression. This paper presents methods for recognizing AUs using distance features between the facial points moved by the underlying muscles. The seven AUs involved are AU1, AU4, AU6, AU12, AU15, AU17, and AU25, which characterize happy and sad expressions. Each AU is recognized according to rules defined on the distances between facial points; the chosen distances are computed from twelve salient facial points. These facial distances are then used to train a Support Vector Machine (SVM) and a Neural Network (NN). Classification results for the SVM are reported for several kernels, while results for the NN are reported for the training, validation, and testing phases. Across all SVM kernels, the AUs corresponding to the sad expression are consistently recognized better than those corresponding to the happy expression. The highest average kernel performance across AUs is 93%, achieved by the quadratic kernel. The best NN result across AUs is for AU25 (lips parted), with the lowest cross-entropy (CE) error (0.38%) and 0% incorrect classification.
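As a rough illustration of the pipeline this abstract describes, the sketch below computes pairwise distance features from a set of facial points and compares several SVM kernels on them. The landmark coordinates, AU labels, and train/test split are synthetic placeholders; the paper's twelve salient points and its rule definitions are not reproduced here.

```python
# A minimal sketch of the distance-feature + SVM pipeline, assuming hypothetical
# landmark data. The "quadratic" kernel is expressed as a degree-2 polynomial
# kernel in scikit-learn.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 faces, each with 12 salient (x, y) facial points.
n_samples, n_points = 200, 12
landmarks = rng.normal(size=(n_samples, n_points, 2))
# Hypothetical binary labels for one AU (e.g. AU12, lip-corner puller): 1 = active.
labels = rng.integers(0, 2, size=n_samples)

def distance_features(points):
    """Pairwise Euclidean distances between the salient facial points."""
    i, j = np.triu_indices(points.shape[0], k=1)
    return np.linalg.norm(points[i] - points[j], axis=1)

X = np.array([distance_features(p) for p in landmarks])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# Compare a few kernels, as the abstract does across its AU classifiers.
kernels = {
    "linear": SVC(kernel="linear"),
    "quadratic": SVC(kernel="poly", degree=2),
    "rbf": SVC(kernel="rbf"),
}
for name, clf in kernels.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```

With random placeholder data the accuracies are near chance; the point is only the feature construction and the per-kernel comparison loop.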


Author(s): David T. Wang, Brady Williamson, Thomas Eluvathingal, Bruce Mahoney, Jennifer Scheler

Author(s): P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification rests on the fact that the text can have one of two orientations: it is either positioned horizontally and read from left to right, or it is rotated 180 degrees, so that the image must be turned before the text can be read. Such text appears on the covers of a variety of books, so when recognizing covers it is necessary to determine the orientation of the text before recognizing the text itself. The article proposes a deep neural network for determining text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, along with examples of the network operating on real data, are presented.
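To make the setup concrete, here is one possible convolutional network for the upright-vs-rotated-180° decision, trained on crude synthetic "cover" crops. The architecture, image size, and data generator are assumptions for illustration only; the article does not specify its exact network or training configuration.

```python
# A minimal sketch of a CNN that classifies text orientation (upright vs. rotated
# 180 degrees), with a toy synthetic-data generator standing in for rendered covers.
import torch
import torch.nn as nn

class OrientationNet(nn.Module):
    """Small CNN: 64x64 grayscale crop -> logit that the text is upside down."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def synthetic_batch(batch_size=32):
    """Stand-in for synthetic cover crops: a bright 'text' band near the top,
    flipped 180 degrees for the positive class."""
    imgs = 0.1 * torch.rand(batch_size, 1, 64, 64)
    imgs[:, :, 8:16, 8:56] = 1.0                      # crude text band near the top
    labels = torch.randint(0, 2, (batch_size, 1)).float()
    flipped = torch.flip(imgs, dims=[2, 3])           # 180-degree rotation
    imgs = torch.where(labels.view(-1, 1, 1, 1).bool(), flipped, imgs)
    return imgs, labels

model = OrientationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):                               # toy training loop
    x, y = synthetic_batch()
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

In practice the synthetic data would be rendered text on cover-like backgrounds rather than this toy band, and the trained network would be applied to real cover crops before the text-recognition stage.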

