Vision-Based Road Rage Detection Framework in Automotive Safety Applications

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2942
Author(s):  
Alessandro Leone ◽  
Andrea Caroppo ◽  
Andrea Manni ◽  
Pietro Siciliano

Drivers’ road rage is among the main causes of road accidents, contributing to deaths and injuries worldwide each year. In this context, it is important to implement systems that can supervise drivers by monitoring their level of concentration during the entire driving process. In this paper, a module for an Advanced Driver Assistance System is used to minimise accidents caused by road rage, alerting the driver when a predetermined level of rage is reached and thus increasing transportation safety. To create a system that is independent of both the orientation of the driver’s face and the lighting conditions of the cabin, the proposed algorithmic pipeline integrates face detection and facial expression classification algorithms capable of handling such non-ideal situations. Moreover, the driver’s road rage is estimated through a decision-making strategy based on the temporal consistency of facial expressions classified as “anger” and “disgust”. Several experiments were executed to assess performance both in a real context and on three standard benchmark datasets, two of which contain non-frontal-view facial expressions and one of which includes facial expressions recorded from participants during driving. The results show that the proposed module is capable of estimating road rage through facial expression recognition under varying poses and changing lighting conditions, with recognition rates that achieve state-of-the-art results on the selected datasets.
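The temporal-consistency decision strategy can be sketched as a sliding-window vote over per-frame expression labels. This is a hypothetical illustration: the window length, threshold, and label set are assumptions, not the paper's published parameters.

```python
import numpy as np

# Hypothetical sketch of the decision-making strategy: a per-frame expression
# classifier feeds labels into a sliding window, and an alert is raised when
# "anger"/"disgust" frames dominate the window.
RAGE_LABELS = {"anger", "disgust"}

def rage_alert(frame_labels, window=10, threshold=0.7):
    """Return True if any sliding window contains at least `threshold`
    fraction of frames classified as anger or disgust."""
    flags = np.array([lbl in RAGE_LABELS for lbl in frame_labels], dtype=float)
    if flags.size < window:
        return False  # not enough temporal evidence yet
    # moving average of rage-frame indicators over each window
    ratios = np.convolve(flags, np.ones(window) / window, mode="valid")
    return bool((ratios >= threshold).any())
```

A temporal vote like this suppresses one-off misclassifications: a single angry-looking frame never triggers an alert on its own.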

2019 ◽  
Vol 8 (4) ◽  
pp. 3570-3574

Facial expression recognition systems play a vital role in many organizations, institutes, and shopping malls by revealing stakeholders’ needs and mindset. The task falls under the broad category of computer vision. Facial expressions can easily convey the true intention of a person without any conversation. The main objective of this work is to improve the performance of facial expression recognition on benchmark datasets such as CK+ and JAFFE. To achieve the required accuracy metrics, a convolutional neural network was constructed to extract facial expression features automatically, and these were combined with handcrafted features extracted using Histogram of Oriented Gradients (HoG) and Local Binary Pattern (LBP) methods. A linear Support Vector Machine (SVM) is built to predict emotions from the combined features. The proposed method produces promising results compared to the recent work in [1]. Such a capability is mainly needed in working environments, shopping malls, and other public places to effectively gauge how stakeholders feel at a given moment.
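The handcrafted-features-plus-linear-SVM part of the pipeline can be sketched as follows. This is an illustrative baseline, not the paper's implementation: the LBP and gradient-histogram extractors below are deliberately minimal stand-ins for full LBP/HoG, and the synthetic stripe images stand in for face crops.

```python
import numpy as np
from sklearn.svm import LinearSVC

def lbp_histogram(img):
    """Minimal 8-neighbour local binary pattern histogram (256 bins)."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=int)
    H, W = img.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        codes |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def grad_histogram(img, bins=9):
    """Coarse magnitude-weighted histogram of gradient orientations
    (a global stand-in for cell-based HoG)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def features(img):
    """Concatenated handcrafted feature vector (256 LBP bins + 9 gradient bins)."""
    return np.concatenate([lbp_histogram(img), grad_histogram(img)])

# Synthetic stand-ins for two visually distinct classes.
rng = np.random.default_rng(0)

def stripe_img(vertical, n=16):
    img = np.tile((np.arange(n) % 4 < 2).astype(float), (n, 1))
    if not vertical:
        img = img.T
    return img + rng.normal(0.0, 0.05, (n, n))

X = np.array([features(stripe_img(v)) for v in [True] * 10 + [False] * 10])
y = np.array([0] * 10 + [1] * 10)
clf = LinearSVC(max_iter=5000).fit(X, y)
```

In the paper's setting, the CNN-extracted features would simply be concatenated into the same vector before the SVM.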


Author(s):  
Zakia Hammal

This chapter addresses recent advances in computer vision for facial expression classification. The authors present the different processing steps of the problem of automatic facial expression recognition, describe the advances at each stage, and review the future challenges towards the application of such systems to everyday life situations. The authors also highlight the importance of taking advantage of the human strategy by reviewing advances in psychology research towards a multidisciplinary approach to facial expression classification. Finally, the authors describe one contribution that aims to address some of the discussed challenges.


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1087
Author(s):  
Muhammad Naveed Riaz ◽  
Yao Shen ◽  
Muhammad Sohail ◽  
Minyi Guo

Facial expression recognition has been well studied for its great importance in the areas of human–computer interaction and social sciences. With the evolution of deep learning, there have been significant advances in this area that even surpass human-level accuracy. Although these methods achieve good accuracy, they still suffer from two constraints (high computational power and memory requirements), which are critical for small, hardware-constrained devices. To alleviate this issue, we propose a new Convolutional Neural Network (CNN) architecture, eXnet (Expression Net), based on parallel feature extraction, which surpasses current methods in accuracy and contains a much smaller number of parameters (eXnet: 4.57 million, VGG19: 14.72 million), making it more efficient and lightweight for real-time systems. Several modern data augmentation techniques are applied for generalization of eXnet; these techniques improve the accuracy of the network by overcoming the problem of overfitting while keeping the network the same size. We provide an extensive evaluation of our network against key methods on the Facial Expression Recognition 2013 (FER-2013), Extended Cohn-Kanade (CK+), and Real-world Affective Faces Database (RAF-DB) benchmark datasets. We also perform an ablation evaluation to show the importance of different components of our architecture. To evaluate the efficiency of eXnet on embedded systems, we deploy it on a Raspberry Pi 4B. All these evaluations show the superiority of eXnet for emotion recognition in the wild in terms of accuracy, number of parameters, and size on disk.
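The idea of parallel feature extraction can be sketched in PyTorch as a block with two convolutional branches of different receptive fields whose outputs are concatenated. Layer counts and channel widths here are illustrative assumptions, not the published eXnet architecture.

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Two convolutional branches with different kernel sizes run in parallel
    and are concatenated along the channel axis."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, out_ch // 2, 3, padding=1), nn.ReLU())
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, out_ch // 2, 5, padding=2), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.branch3(x), self.branch5(x)], dim=1)

class TinyExpressionNet(nn.Module):
    """A toy lightweight classifier over 48x48 grayscale faces, 7 emotion classes."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            ParallelBlock(1, 16), nn.MaxPool2d(2),
            ParallelBlock(16, 32), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))
```

Even this toy version has well under 100 k parameters, which illustrates why branch-and-concatenate designs can stay far smaller than a VGG-style stack.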


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Anwar Saeed ◽  
Ayoub Al-Hamadi ◽  
Robert Niese ◽  
Moftah Elzobi

To make human-computer interaction (HCI) as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions could be inferred by fusing several modalities, such as facial expressions, hand gestures, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that using only eight facial points we can achieve the state-of-the-art recognition rate; by contrast, the competing state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
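A minimal sketch of neutral-frame-free geometric features: pairwise distances between facial points, normalised by the inter-ocular distance so face scale cancels out. The point layout and the convention that the first two points are the eye centres are assumptions for illustration, not the paper's feature set.

```python
import numpy as np

def geometric_features(pts):
    """pts: (N, 2) array of facial points; by convention here pts[0] and
    pts[1] are the two eye centres (an illustrative assumption). Returns
    all pairwise distances normalised by the inter-ocular distance, so no
    person-specific neutral frame is needed and face scale cancels out."""
    pts = np.asarray(pts, dtype=float)
    iod = np.linalg.norm(pts[0] - pts[1])  # inter-ocular distance
    i, j = np.triu_indices(len(pts), k=1)  # all unordered point pairs
    return np.linalg.norm(pts[i] - pts[j], axis=1) / iod
```

With eight points this yields 28 scale-invariant features; localization noise in any single point perturbs several of them at once, which is why the paper's sensitivity analysis matters.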


2020 ◽  
Vol 11 (1) ◽  
pp. 48-70 ◽  
Author(s):  
Sivaiah Bellamkonda ◽  
Gopalan N.P

Facial expression analysis and recognition have gained popularity in the last few years for their challenging nature and broad range of applications, such as HCI, pain detection, operator fatigue detection, and surveillance. The key to a real-time FER system is exploiting the variety of features extracted from the source image. In this article, three different features, viz. Local Binary Pattern (LBP), Gabor, and Local Directional Pattern (LDP), were exploited to perform feature fusion, and two classification algorithms, viz. support vector machines (SVM) and artificial neural networks, were used to validate the proposed model on benchmark datasets. Classification accuracy was improved by the proposed fusion of Gabor and LDP features with an SVM classifier, recording an average accuracy of 93.83% on JAFFE, 95.83% on CK, and 96.50% on MMI. The recognition rates were compared with existing studies in the literature, and the proposed feature fusion model was found to improve performance.
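Gabor texture features of the kind fused above can be sketched with a small numpy-only filter bank: mean absolute response per orientation. Kernel parameters (sigma, wavelength, size) and the four-orientation bank are illustrative assumptions, not the article's configuration.

```python
import numpy as np

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=9):
    """Real part of a Gabor kernel at orientation theta (parameters are
    illustrative assumptions)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the wave
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def filter2d(img, k):
    """Valid-mode 2D cross-correlation via a sliding-window view."""
    win = np.lib.stride_tricks.sliding_window_view(img, k.shape)
    return np.einsum('ijkl,kl->ij', win, k)

def gabor_features(img, n_orient=4):
    """Mean absolute Gabor response per orientation - a compact texture
    descriptor that could be concatenated with LBP/LDP histograms."""
    thetas = [o * np.pi / n_orient for o in range(n_orient)]
    return np.array([np.abs(filter2d(img, gabor_kernel(t))).mean() for t in thetas])
```

The per-orientation responses are what make the descriptor complementary to LBP: vertical facial texture (e.g. frown lines) excites one orientation channel far more than the orthogonal one.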


2011 ◽  
Vol 121-126 ◽  
pp. 617-621 ◽  
Author(s):  
Chang Yi Kao ◽  
Chin Shyurng Fahn

During the development of the facial expression classification procedure, we evaluate three machine learning methods. We combine adaptive boosting algorithms (ABAs) with classification and regression trees (CARTs), which select weak classifiers and integrate them into a strong classifier automatically. We present a highly automatic facial expression recognition system in which a face detection procedure first detects and locates human faces in image sequences acquired in real environments, with no need to label or choose characteristic blocks in advance. In the face detection procedure, some geometrical properties are applied to eliminate skin color regions that do not belong to human faces. In the facial feature extraction procedure, we perform binarization and edge detection operations only on the proper ranges of the eyes, mouth, and eyebrows to obtain 16 landmarks of a human face, from which 16 characteristic distances representing a kind of expression are produced. We realize a facial expression classification procedure by employing an ABA to recognize six kinds of expressions. The performance of the system is very satisfactory; its recognition rate exceeds 90%.
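The AdaBoost-with-CART combination can be sketched with scikit-learn, whose AdaBoostClassifier boosts depth-1 CART trees by default. The 16-dimensional "characteristic distance" vectors below are synthetic stand-ins, not real landmark measurements.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-ins for the 16 characteristic distances of two expression
# classes (real values would come from the 16 facial landmarks).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(1.0, 0.05, (50, 16)),   # e.g. a "happiness"-like distance profile
    rng.normal(0.4, 0.05, (50, 16)),   # e.g. an "anger"-like distance profile
])
y = np.array([0] * 50 + [1] * 50)

# AdaBoost selects and reweights shallow CART trees (decision stumps by
# default), combining them into a strong classifier automatically.
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
```

Each boosting round effectively picks the single most discriminative distance threshold, which mirrors the paper's automatic selection of weak classifiers.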


2020 ◽  
Vol 37 (4) ◽  
pp. 627-632
Author(s):  
Aihua Li ◽  
Lei An ◽  
Zihui Che

With the development of computer vision, facial expression recognition has become a research hotspot. To further improve the accuracy of facial expression recognition, this paper delves into image segmentation, feature extraction, and facial expression classification. Firstly, a convolutional neural network (CNN) was adopted to accurately separate the salient regions from the face image. Next, the Gaussian Markov random field (GMRF) model was improved to enhance the ability of texture features to represent image information, and a novel feature extraction algorithm called specific angle abundance entropy (SAAE) was designed to improve the representation ability of shape features. After that, the texture features were combined with the shape features, then trained and classified by a support vector machine (SVM) classifier. Finally, the proposed method was compared with common facial expression recognition methods on a standard facial expression database. The results show that our method can greatly improve the accuracy of facial expression recognition.
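A plain (non-improved) GMRF texture descriptor can be sketched as a least-squares estimate of the interaction parameters of a second-order neighbourhood model. The paper's improved GMRF and SAAE formulations are the authors' own and are not reproduced here; this is only the standard baseline the improvement builds on.

```python
import numpy as np

def gmrf_params(img):
    """Least-squares estimate of the interaction parameters of a symmetric
    second-order GMRF: each interior pixel is modelled as a weighted sum of
    its four symmetric neighbour pairs (E/W, S/N, SE/NW, SW/NE). The
    resulting 4-vector serves as a compact texture descriptor."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    c = img[1:-1, 1:-1].ravel()                   # interior pixels to explain
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]   # one offset per neighbour pair
    cols = []
    for dy, dx in offsets:
        plus = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        minus = img[1 - dy:H - 1 - dy, 1 - dx:W - 1 - dx]
        cols.append((plus + minus).ravel())       # symmetric pair sum
    A = np.stack(cols, axis=1)
    theta, *_ = np.linalg.lstsq(A, c, rcond=None)
    return theta
```

Per the paper's pipeline, a descriptor like this would be concatenated with the shape features before the SVM stage.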


Author(s):  
Laszlo A. Jeni ◽  
Hideki Hashimoto ◽  
Takashi Kubota
In human-human communication we use verbal, vocal, and non-verbal signals to communicate with others. Facial expressions are a form of non-verbal communication, and recognizing them helps to improve human-machine interaction. This paper proposes a system for pose- and illumination-invariant recognition of facial expressions using near-infrared camera images and precise 3D shape registration. Precise 3D shape information of the human face can be computed by means of Constrained Local Models (CLM), which fit a dense model to an unseen image in an iterative manner. We used a multi-class SVM to classify the acquired 3D shape into different emotion categories. Results surpassed human performance and demonstrate pose-invariant recognition. Varying lighting conditions can influence the fitting process and reduce the recognition precision, so we built a near-infrared and visible light camera array to test the method under different illuminations. The results show that the near-infrared camera configuration is suitable for robust and reliable facial expression recognition under changing lighting conditions.
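Pose invariance for shape-based classification hinges on removing rigid head motion before expression features are computed. A standard building block for that is Kabsch alignment of a 3D point set to a reference; this is a generic sketch of that step, not the paper's CLM fitting procedure.

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigidly align 3D point set P (a rotated/translated face shape) to a
    reference Q via the Kabsch algorithm: the optimal rotation comes from
    the SVD of the cross-covariance, plus a translation of the centroids."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (R @ Pc.T).T + Q.mean(0)
```

After alignment, what remains in the shape residual is non-rigid deformation (the expression itself), which is what the multi-class SVM then classifies.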


2020 ◽  
Vol 1 (6) ◽  
Author(s):  
Pablo Barros ◽  
Nikhil Churamani ◽  
Alessandra Sciutti

Current state-of-the-art models for automatic facial expression recognition (FER) are based on very deep neural networks that are effective but rather expensive to train. Given the dynamic conditions of FER, this characteristic hinders such models from being used for general affect recognition. In this paper, we address this problem by formalizing the FaceChannel, a light-weight neural network that has far fewer parameters than common deep neural networks. We introduce an inhibitory layer that helps to shape the learning of facial features in the last layer of the network, thus improving performance while reducing the number of trainable parameters. To evaluate our model, we perform a series of experiments on different benchmark datasets and demonstrate that the FaceChannel achieves comparable, if not better, performance than the current state-of-the-art in FER. Our experiments include a cross-dataset analysis to estimate how our model behaves under different affective recognition conditions. We conclude with an analysis of how the FaceChannel learns and adapts the learned facial features towards the different datasets.
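One common way to realise an inhibitory layer is divisive (shunting) inhibition, where a learned inhibitory filter bank modulates an excitatory one. This PyTorch sketch illustrates that idea under that assumption; it is not FaceChannel's published layer configuration, and the channel counts are illustrative.

```python
import torch
import torch.nn as nn

class ShuntingInhibition(nn.Module):
    """An excitatory convolutional filter bank divisively modulated
    (shunted) by a second, learned inhibitory filter bank."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.excite = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.inhibit = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x):
        e = torch.relu(self.excite(x))
        i = torch.relu(self.inhibit(x))
        return e / (1.0 + i)   # divisive (shunting) inhibition
```

Because the inhibitory branch suppresses responses rather than adding capacity, a layer like this can sharpen feature selectivity without a large parameter budget.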


2013 ◽  
pp. 1508-1531
Author(s):  
Zakia Hammal

This chapter addresses recent advances in computer vision for facial expression classification. The authors present the different processing steps of the problem of automatic facial expression recognition, describe the advances at each stage, and review the future challenges towards the application of such systems to everyday life situations. The authors also highlight the importance of taking advantage of the human strategy by reviewing advances in psychology research towards a multidisciplinary approach to facial expression classification. Finally, the authors describe one contribution that aims to address some of the discussed challenges.

