Efficient convolutional neural network with multi-kernel enhancement features for real-time facial expression recognition

Author(s): Minze Li, Xiaoxia Li, Wei Sun, Xueyuan Wang, Shunli Wang


2021, Vol 3
Author(s): James Ren Lee, Linda Wang, Alexander Wong

While recent advances in deep learning have led to significant improvements in facial expression classification (FEC), a major challenge that remains a bottleneck for the widespread deployment of such systems is their high architectural and computational complexity. This is especially challenging given the operational requirements of various FEC applications, such as safety, marketing, learning, and assistive living, where real-time performance on low-cost embedded devices is desired. Motivated by this need for a compact, low-latency, yet accurate system capable of performing FEC in real time on low-cost embedded devices, this study proposes EmotionNet Nano, an efficient deep convolutional neural network created through a human-machine collaborative design strategy, where human experience is combined with machine meticulousness and speed in order to craft a deep neural network design catered toward real-time embedded usage. To the best of the authors' knowledge, this is the first deep neural network architecture for facial expression recognition to leverage machine-driven design exploration in its design process, and it exhibits unique architectural characteristics, such as high architectural heterogeneity and selective long-range connectivity, not seen in previous FEC network architectures. Two variants of EmotionNet Nano are presented, each with a different trade-off between architectural and computational complexity and accuracy. Experimental results on the CK+ facial expression benchmark dataset demonstrate that the proposed EmotionNet Nano networks achieved accuracy comparable to state-of-the-art FEC networks while requiring significantly fewer parameters. Furthermore, we demonstrate that the proposed EmotionNet Nano networks achieved real-time inference speeds (e.g., >25 FPS and >70 FPS at 15 W and 30 W, respectively) and high energy efficiency (e.g., >1.7 images/sec/watt at 15 W) on an ARM embedded processor, further illustrating the efficacy of EmotionNet Nano for deployment on embedded devices.
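As a rough illustration of how the reported throughput and energy-efficiency figures can be obtained, the sketch below times single-frame inference for a compact expression classifier and converts the result into FPS and images/sec/watt. This is a minimal sketch, not the paper's benchmarking setup: the placeholder Keras model, the 48x48 grayscale input size, the `measure_throughput` helper, and the 15 W power figure (which in practice would come from the device's power readings) are all assumptions.

```python
# Hypothetical throughput / energy-efficiency measurement for a compact FEC model.
# The model, input size, and power figure below are illustrative assumptions,
# not details taken from the EmotionNet Nano paper.
import time
import numpy as np
import tensorflow as tf

def measure_throughput(model, input_shape=(48, 48, 1), n_frames=500, power_watts=15.0):
    """Time batch-size-1 inference and report FPS and images/sec/watt."""
    frames = np.random.rand(n_frames, *input_shape).astype(np.float32)
    _ = model(frames[:1], training=False)  # warm-up so setup costs don't skew timing
    start = time.perf_counter()
    for i in range(n_frames):
        _ = model(frames[i:i + 1], training=False)  # one frame at a time, as in live video
    elapsed = time.perf_counter() - start
    fps = n_frames / elapsed
    return fps, fps / power_watts  # images/sec/watt at the assumed power budget

# A small placeholder CNN standing in for an efficient FEC network.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 basic expression classes
])
fps, img_per_sec_per_watt = measure_throughput(model)
print(f"{fps:.1f} FPS, {img_per_sec_per_watt:.2f} images/sec/watt (assumed 15 W budget)")
```

On an embedded board, the same timing loop would be run on the actual deployed network, with the wattage read from the device rather than assumed.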


JOUTICA, 2021, Vol 6 (2), pp. 484
Author(s): Resty Wulanningrum, Anggi Nur Fadzila, Danar Putra Pamungkas

Humans naturally use facial expressions to communicate and to show their emotions in social interaction. Facial expressions are a form of non-verbal communication that can convey a person's emotional state to an observer. This study uses the Principal Component Analysis (PCA) method for feature extraction from expression images and the Convolutional Neural Network (CNN) method for emotion classification; using the Facial Expression Recognition-2013 (FER-2013) dataset, training and testing were carried out to produce accuracy values for facial emotion recognition. The final tests yielded an accuracy of 59.375% for the PCA method and 59.386% for the CNN method.
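To make the two compared approaches concrete, here is a minimal sketch of a PCA pipeline and a small CNN pipeline on FER-2013-style data (48x48 grayscale images with integer labels 0-6). The component count, the logistic-regression classifier paired with PCA, the CNN layer sizes, and the `pca_pipeline`/`cnn_pipeline` helpers are illustrative assumptions; the abstract does not specify the paper's exact configuration.

```python
# Sketch of the two pipelines compared in the abstract: PCA features from flattened
# FER-2013 images versus a small CNN on the raw 48x48 grayscale images.
# All hyperparameters and the PCA-side classifier are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
import tensorflow as tf

def pca_pipeline(x_train, y_train, x_test, y_test, n_components=100):
    """PCA feature extraction followed by a simple classifier; returns test accuracy."""
    flat_train = x_train.reshape(len(x_train), -1)
    flat_test = x_test.reshape(len(x_test), -1)
    pca = PCA(n_components=n_components).fit(flat_train)
    clf = LogisticRegression(max_iter=1000).fit(pca.transform(flat_train), y_train)
    return clf.score(pca.transform(flat_test), y_test)

def cnn_pipeline(x_train, y_train, x_test, y_test, epochs=10):
    """Small CNN trained directly on 48x48x1 expression images; returns test accuracy."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(7, activation="softmax"),  # 7 FER-2013 emotion classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train[..., None], y_train, epochs=epochs, verbose=0)
    return model.evaluate(x_test[..., None], y_test, verbose=0)[1]
```

Both helpers expect arrays of shape (N, 48, 48) with labels 0-6, as produced by parsing the FER-2013 CSV; comparing their returned accuracies mirrors the PCA-versus-CNN comparison reported above.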

