Invisible emotion magnification algorithm (IEMA) for real-time micro-expression recognition with graph-based features

Author(s):  
Adamu Muhammad Buhari ◽  
Chee-Pun Ooi ◽  
Vishnu Monn Baskaran ◽  
Raphael CW Phan ◽  
KokSheik Wong ◽  
...  


2020 ◽  
Vol 10 (14) ◽  
pp. 4959
Author(s):  
Reda Belaiche ◽  
Yu Liu ◽  
Cyrille Migniot ◽  
Dominique Ginhac ◽  
Fan Yang

Micro-Expression (ME) recognition is a hot topic in computer vision, as it offers a gateway to capturing and understanding everyday human emotions. It is nonetheless a challenging problem because MEs are typically transient (lasting less than 200 ms) and subtle. Recent advances in machine learning enable new and effective methods to be adopted for solving diverse computer vision tasks. In particular, deep learning techniques applied to large datasets outperform classical machine learning approaches that rely on hand-crafted features. Even though available datasets of spontaneous MEs are scarce and much smaller, off-the-shelf Convolutional Neural Networks (CNNs) still achieve satisfactory classification results. However, these networks are demanding in terms of memory consumption and computational resources. This poses great challenges when deploying CNN-based solutions in applications such as driver monitoring and comprehension recognition in virtual classrooms, which require fast and accurate recognition. Because these networks were initially designed for tasks in other domains, they are over-parameterized and need to be optimized for ME recognition. In this paper, we propose a new network based on the well-known ResNet18, which we optimized for ME classification in two ways. Firstly, we reduced the depth of the network by removing residual layers. Secondly, we introduced a more compact representation of the optical flow used as input to the network. We present extensive experiments and demonstrate that the proposed network obtains accuracies comparable to state-of-the-art methods while significantly reducing the required memory. Our best classification accuracy was 60.17% on the challenging composite dataset containing five objective classes. Our method takes only 24.6 ms to classify a ME video clip (less than the duration of the shortest ME, which lasts 40 ms). 
Our CNN design is suitable for real-time embedded applications with limited memory and computing resources.
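The abstract above does not specify how the optical flow input was compacted, but the general idea of reducing a two-channel dense flow field before feeding it to a CNN can be sketched as follows. This is a hypothetical illustration using NumPy: the magnitude map plus a magnitude-weighted orientation histogram stand in for whatever encoding the authors actually used.

```python
import numpy as np

def compact_flow(u, v, bins=8):
    """Reduce a dense two-channel optical flow field (u, v) to a single
    magnitude map plus a global magnitude-weighted orientation histogram.
    Hypothetical sketch: the paper's actual compact encoding is not
    described in this abstract."""
    mag = np.hypot(u, v)                     # per-pixel flow magnitude
    ang = np.arctan2(v, u)                   # per-pixel orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi),
                           weights=mag)      # magnitude-weighted histogram
    return mag, hist

# toy 2x2 flow field
u = np.array([[1.0, 0.0], [0.0, -1.0]])
v = np.array([[0.0, 1.0], [0.0, 0.0]])
mag, hist = compact_flow(u, v)
print(mag.shape, hist.shape)   # (2, 2) (8,)
```

The compacted input halves (or better) the channel count handed to the first convolution, which is one simple way such a network can shrink its memory footprint.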


2020 ◽  
Vol 6 (12) ◽  
pp. 130
Author(s):  
Adamu Muhammad Buhari ◽  
Chee-Pun Ooi ◽  
Vishnu Monn Baskaran ◽  
Raphaël C. W. Phan ◽  
KokSheik Wong ◽  
...  

Several studies on micro-expression recognition have contributed mainly to accuracy improvement. However, computational complexity receives comparatively less attention, which increases the cost of micro-expression recognition in real-time applications. In addition, the majority of existing approaches require at least two frames (i.e., the onset and apex frames) to compute the features of each sample. This paper puts forward new facial graph features based on 68-point landmarks using the Facial Action Coding System (FACS). The proposed feature extraction technique (FACS-based graph features) utilizes facial landmark points to compute a graph for each Action Unit (AU), where the measured distance and gradient of every segment within an AU graph are presented as features. Moreover, the proposed technique performs ME recognition from a single input frame. Results indicate that the proposed FACS-based graph features achieve up to 87.33% recognition accuracy with an F1-score of 0.87 using leave-one-subject-out cross-validation on the SAMM dataset. Besides, the proposed technique computes features in 2 ms per sample on a Xeon Processor E5-2650 machine.
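The per-segment distance and gradient described above can be sketched with plain Python. The landmark indices and the AU graph topology below are illustrative only; the paper's exact AU-to-landmark mapping is not given in this abstract.

```python
import math

def segment_features(landmarks, au_graph):
    """Distance and gradient for each segment (edge) of an AU landmark graph.
    `landmarks` maps a 68-point index to (x, y); `au_graph` lists the (i, j)
    landmark pairs forming one Action Unit's graph. Indices and topology are
    illustrative, not the paper's exact AU definitions."""
    feats = []
    for i, j in au_graph:
        (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
        dist = math.hypot(x2 - x1, y2 - y1)   # segment length
        grad = math.atan2(y2 - y1, x2 - x1)   # segment orientation (radians)
        feats.extend([dist, grad])
    return feats

# toy example: a 3-edge graph over 4 illustrative landmark points
pts = {17: (10.0, 20.0), 19: (14.0, 17.0), 21: (18.0, 20.0), 39: (20.0, 30.0)}
au4_edges = [(17, 19), (19, 21), (21, 39)]    # hypothetical brow-lowerer graph
print(segment_features(pts, au4_edges))
```

Because each frame's features depend only on that frame's landmark positions, this kind of feature supports the single-frame recognition the abstract claims, with no onset/apex pairing required.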


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1029
Author(s):  
Adamu Muhammad Buhari ◽  
Chee-Pun Ooi ◽  
Vishnu Monn Baskaran ◽  
Wooi-Haw Tan

The trend of real-time micro-expression recognition systems has grown with recent advancements in human-computer interaction (HCI) for security and healthcare. Several studies in this field have contributed towards recognition accuracy, while few have addressed the computational cost. In this paper, two feature extraction approaches for real-time automatic micro-expression recognition are analyzed. The first is a motion-based approach, which computes the motion of subtle changes across an image sequence and presents it as features. The second is a low-complexity geometric-based feature extraction technique, a very popular method for real-time facial expression recognition. These approaches were integrated, together with a facial landmark detection algorithm and a classifier, into a system developed for real-time analysis. The recognition performance was evaluated on the SMIC, CASME, CAS(ME)2 and SAMM datasets. The results suggest that the optimized Bi-WOOF (leveraging motion-based features) yields its highest accuracy of 68.5%, while the full-face graph (leveraging geometric-based features) yields 75.53% on the SAMM dataset. In terms of speed, the optimized Bi-WOOF processes a sample in 0.36 seconds and the full-face graph in 0.10 seconds at a 640x480 image size. All experiments were performed on an Intel i5-3470 machine.
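A minimal sketch of the geometric-based "full-face graph" idea mentioned above: treating the 68 landmarks as a complete graph and taking every pairwise Euclidean distance as the feature vector. This is an assumption about the graph's structure made for illustration; the authors' exact pipeline may differ.

```python
import numpy as np

def full_face_graph(landmarks):
    """Geometric feature vector from a complete graph over facial landmarks:
    the Euclidean distance of every landmark pair. With 68 points this yields
    68*67/2 = 2278 distances per frame. A minimal sketch of a geometric-based
    feature, not necessarily the authors' exact formulation."""
    n = len(landmarks)
    iu, ju = np.triu_indices(n, k=1)          # upper-triangle pair indices
    diffs = landmarks[iu] - landmarks[ju]     # (2278, 2) coordinate deltas
    return np.hypot(diffs[:, 0], diffs[:, 1])

rng = np.random.default_rng(0)
pts = rng.uniform(0, 480, size=(68, 2))       # 68 synthetic (x, y) landmarks
feats = full_face_graph(pts)
print(feats.shape)   # (2278,)
```

Such distances cost only a few vectorized operations per frame, which is consistent with the geometric approach being the faster of the two (0.10 s versus 0.36 s per sample in the study above).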

