Local Bilinear Convolutional Neural Network for Spotting Macro- and Micro-expression Intervals in Long Video Sequences

Author(s):  
Hang Pan ◽  
Lun Xie ◽  
Zhiliang Wang
2018 ◽  
Vol 22 (4) ◽  
pp. 1331-1339 ◽  
Author(s):  
Jing Li ◽  
Yandan Wang ◽  
John See ◽  
Wenbin Liu

Author(s):  
Akshay Divkar ◽  
Rushikesh Bailkar ◽  
Dr. Chhaya S. Pawar

Hand gestures are one of the methods used in sign language for non-verbal communication, most commonly by people with hearing or speech impairments to communicate among themselves or with others. Sign language applications are therefore valuable, since they allow hearing- and speech-impaired people to communicate easily even with those who do not understand sign language. This project takes a basic step toward bridging that communication gap by building a vision-based system that identifies sign language gestures in video sequences. A vision-based approach was chosen because it provides a simpler and more intuitive channel of communication between a human and a computer. Video sequences contain both spatial and temporal features, so two different models are trained: a deep Convolutional Neural Network (CNN) for the spatial features and a Recurrent Neural Network (RNN) for the temporal features. The CNN is trained on individual frames extracted from the training videos; the trained CNN then makes a prediction for each frame, producing a sequence of predictions. This prediction sequence is fed to the RNN, which learns the temporal structure. Together, the trained CNN and RNN models produce the text output for the corresponding gesture.
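The two-stage pipeline described above (per-frame CNN predictions, then an RNN over the prediction sequence) can be sketched in a minimal, self-contained form. This is an illustrative sketch only: the dimensions, the random-projection "CNN" stand-in, and the simple Elman-style recurrence are all assumptions, not the authors' actual architecture.

```python
import numpy as np

# Hypothetical sizes: 16 frames per clip, 10 gesture classes,
# 64-dim RNN hidden state. All values are illustrative.
NUM_FRAMES, NUM_CLASSES, HIDDEN = 16, 10, 64
rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cnn_frame_predictions(frames):
    """Stand-in for the trained CNN: maps each frame (here a feature
    vector) to a class-probability vector. A real model would extract
    spatial features from pixels; a fixed random projection keeps the
    sketch self-contained."""
    W = rng.normal(size=(frames.shape[1], NUM_CLASSES))
    return softmax(frames @ W)            # (NUM_FRAMES, NUM_CLASSES)

def rnn_over_predictions(pred_seq):
    """Stand-in for the RNN stage: consumes the per-frame prediction
    sequence and emits one class distribution for the whole clip."""
    Wx = rng.normal(size=(NUM_CLASSES, HIDDEN))
    Wh = rng.normal(size=(HIDDEN, HIDDEN))
    Wo = rng.normal(size=(HIDDEN, NUM_CLASSES))
    h = np.zeros(HIDDEN)
    for p in pred_seq:                    # simple Elman-style recurrence
        h = np.tanh(p @ Wx + h @ Wh)
    return softmax(h @ Wo)                # final gesture distribution

frames = rng.normal(size=(NUM_FRAMES, 128))  # 128-dim frame features
per_frame = cnn_frame_predictions(frames)
clip_probs = rnn_over_predictions(per_frame)
print(clip_probs.shape)  # (10,)
```

The key design point the abstract describes is that the RNN never sees raw frames, only the CNN's per-frame outputs, which keeps the temporal model small.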


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 184537-184551 ◽  
Author(s):  
Baolin Song ◽  
Ke Li ◽  
Yuan Zong ◽  
Jie Zhu ◽  
Wenming Zheng ◽  
...  

Information ◽  
2020 ◽  
Vol 11 (8) ◽  
pp. 380
Author(s):  
Boyu Chen ◽  
Zhihao Zhang ◽  
Nian Liu ◽  
Yang Tan ◽  
Xinyu Liu ◽  
...  

A micro-expression is an uncontrollable muscular movement that appears on a person's face when they try to conceal or repress their true emotions. Many researchers have applied deep learning frameworks to micro-expression recognition in recent years, but few have introduced the human visual attention mechanism. In this study, we propose a three-dimensional (3D) spatiotemporal convolutional neural network with a convolutional block attention module (CBAM) for micro-expression recognition. Image sequences are first input to a medium-sized convolutional neural network (CNN) to extract visual features; a convolutional block attention module then learns to allocate the feature weights adaptively. The method was evaluated on spontaneous micro-expression databases (Chinese Academy of Sciences Micro-expression II (CASME II) and the Spontaneous Micro-expression Database (SMIC)). The experimental results show that the 3D CNN with the convolutional block attention module outperformed other algorithms in micro-expression recognition.
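CBAM reweights a feature map in two stages: channel attention (what to attend to) followed by spatial attention (where to attend). The sketch below shows that structure on a toy numpy feature map; the random weights, the reduction ratio, and the 1x1 mixing in place of CBAM's usual 7x7 convolution are all simplifying assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, reduction=4):
    """Channel attention: pool over spatial dims (avg and max), pass
    both through a shared two-layer MLP, sum, squash with sigmoid."""
    C = feat.shape[0]
    W1 = rng.normal(size=(C, C // reduction))
    W2 = rng.normal(size=(C // reduction, C))
    avg = feat.mean(axis=(1, 2))
    mx = feat.max(axis=(1, 2))
    attn = sigmoid(np.maximum(avg @ W1, 0) @ W2 +
                   np.maximum(mx @ W1, 0) @ W2)    # (C,)
    return feat * attn[:, None, None]

def spatial_attention(feat):
    """Spatial attention: pool over channels (avg and max), mix the two
    maps with a learned weighting, squash with sigmoid. (CBAM uses a
    7x7 conv here; a per-pixel mix keeps the sketch short.)"""
    avg = feat.mean(axis=0)
    mx = feat.max(axis=0)
    w = rng.normal(size=2)
    attn = sigmoid(w[0] * avg + w[1] * mx)         # (H, W)
    return feat * attn[None, :, :]

# Toy feature map: 8 channels on a 5x5 spatial grid (values illustrative)
feat = rng.normal(size=(8, 5, 5))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 5, 5)
```

Because both stages only multiply the input by attention masks, the module preserves the feature map's shape and can be dropped between convolutional blocks, which is what makes it attractive for the 3D CNN described above.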

