face expression
Recently Published Documents

TOTAL DOCUMENTS: 214 (FIVE YEARS: 91)
H-INDEX: 14 (FIVE YEARS: 2)

Children ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 1108
Author(s):  
Koviljka Barisnikov ◽  
Marine Thomasson ◽  
Jennyfer Stutzmann ◽  
Fleur Lejeune

This study assessed two components of face emotion processing, emotion recognition and sensitivity to the intensity of emotion expressions, and their relation, in children aged 4 to 12 (N = 216). Results indicated slower development in the accurate decoding of low intensity expressions compared to high intensity ones. Between ages 4 and 12, children discriminated high intensity expressions better than low intensity ones, and the intensity of expression had a strong impact on overall face expression recognition. High intensity happiness was better recognized than low intensity happiness up to age 11, while children aged 4 to 12 had difficulty discriminating between high and low intensity sadness. Our results suggest that sensitivity to low intensity expressions acts as a complementary mediator between age and emotion expression recognition, while this was not the case for the recognition of high intensity expressions. These results could help in the development of specific interventions for populations presenting socio-cognitive and emotion difficulties.


Author(s):  
Lohith Raj S N

Abstract: The LBPH algorithm is used ubiquitously for Face Recognition applications because of its simplicity of implementation, high accuracy, and low computation time. However, its confidence decreases under varied illumination, facial expressions, and capture angles. We propose a slightly modified algorithm, called Median-LBPH, that considers the median of the neighbourhood pixels rather than the pixel itself to overcome this issue. The grey value of every pixel is replaced by the median of the neighbourhood pixel values. The features are then extracted, and a histogram representing the original image is saved in the model. This model can in turn be compared with histograms obtained from faces in real-time footage to find a potential match. The algorithm is used in an end-to-end face recognition system, a web application prototype for Law Enforcement Agencies to maintain a central criminal database shared and accessed across various departments. A live surveillance system is added to this application so that whenever a registered criminal appears on live surveillance cameras, the appropriate Law Enforcement personnel receive a notification by e-mail and text message through a secured channel. Keywords: Face Recognition, Median-Local Binary Pattern Histogram (MLBPH), Haar Cascade, Adaboost, Neighbourhood Median
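A minimal sketch of the median-based LBP feature extraction described above, assuming 3x3 neighbourhoods and a grayscale NumPy image; the function names and grid size are illustrative, not taken from the paper, and the paper's exact neighbourhood definition may differ.

```python
import numpy as np

def median_lbp_codes(gray):
    """LBP codes where each centre value is replaced by the median of its
    3x3 neighbourhood (the Median-LBPH idea sketched in the abstract)."""
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # offsets of the 8 neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = np.median(gray[y - 1:y + 2, x - 1:x + 2])  # median, not raw pixel
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= centre:
                    code |= 1 << bit
            codes[y - 1, x - 1] = code
    return codes

def mlbph_histogram(gray, grid=(8, 8)):
    """Concatenate per-cell histograms of the Median-LBP codes into one feature vector."""
    codes = median_lbp_codes(gray)
    hist = []
    for row in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            counts, _ = np.histogram(cell, bins=256, range=(0, 256))
            hist.append(counts)
    return np.concatenate(hist).astype(np.float32)
```

Matching against the stored model would then compare these histograms (for example with a chi-square distance), as in the standard LBPH pipeline.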


2021 ◽  
Author(s):  
Enrico Randellini ◽  
Leonardo Rigutini ◽  
Claudio Saccà

A person's facial expression is the first thing we pay attention to when we want to understand their state of mind, so the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometric transformations and build GAN models from scratch that generate new synthetic images for each emotion type. We then fine-tune pretrained convolutional neural networks with different architectures on the augmented datasets. To measure the generalization ability of the models, we apply an extra-database protocol: models are trained on the augmented version of the training dataset and tested on two different databases. The combination of these techniques reaches average accuracy values of around 85% for the InceptionResNetV2 model.
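A minimal sketch of the geometric-transformation side of such an augmentation pipeline ahead of fine-tuning a pretrained backbone. The torchvision framework, the specific transform parameters, and the seven-class head are assumptions; InceptionResNetV2 is not in torchvision (it is available elsewhere, e.g. in timm), so a ResNet stands in purely for illustration.

```python
import torch
from torchvision import transforms, models

# Geometric augmentations of the kind described above: flips, rotations,
# small translations and scalings. Parameter values are illustrative.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.ToTensor(),
])

# Fine-tune a pretrained CNN on the augmented data by replacing its classifier head.
num_emotions = 7  # assumption: seven basic emotion classes
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Linear(model.fc.in_features, num_emotions)
```

GAN-generated samples would simply be mixed into the training set alongside these geometrically transformed images before fine-tuning.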


2021 ◽  
Vol 11 (69) ◽  
pp. 8214-8225
Author(s):  
AURICÉLIA DIAS SANTOS ◽  
AURELIO DIAS SANTOS ◽  
ROSANGELA FROTA RIBEIRO DE VASCONCELOS ◽  
ANA BEATRIZ BEZERRA

Objective: To identify, through a bibliographic survey, the effects of radiofrequency on facial rejuvenation. Methods: An integrative review carried out from September to October 2020, searching the LILACS, SciELO, Periódicos CAPES, and PubMed databases. The descriptors used, combined with the boolean operator "and", in Portuguese and English, were: radiofrequência, expressão da face, pele, rejuvenescimento, fisioterapia; and radiofrequency, face expression, skin, rejuvenation, physiotherapy. Searches covered publications from the last 10 years. Results: After a careful reading of the eligible articles, three articles were selected. The authors report that the effects of radiofrequency are significant in reducing the extent and depth of wrinkles and in improving the appearance of skin laxity, in addition to a noticeable improvement in facial expression. Conclusions: The effects of radiofrequency were quite evident, mainly regarding improved skin tone and reduced wrinkles.


2021 ◽  
Vol 2062 (1) ◽  
pp. 012018
Author(s):  
C Selvi ◽  
Y Anvitha ◽  
C H Asritha ◽  
P B Sayannah

Abstract This work develops a Deep Learning algorithm that detects the Kathakali face expressions (the Navarasas) from an image of a Kathakali performer. Kathakali, one of India's major classical dance forms, is a "story play" genre of art distinguished by the traditional costumes, face masks, and makeup worn by its male actor-dancers. It is a Hindu performance art from the Malayalam-speaking southern region of India, and most of its plays depict epic scenes from the Mahabharata and Ramayana. Many foreigners visiting India are inspired by this art form and curious about the culture, and it is still performed for entertainment as part of tourism and temple rituals. An understanding of the facial expressions is essential to enjoy the play, so the scope of this paper is to identify the facial expressions of Kathakali for a better understanding of the art. Machine Learning and Image Processing techniques are used to decode the expressions. Kathakali face expressions are of nine types, namely Adbhutam (wonder), Hasyam (comic), Sringaram (love), Bheebatsam (repulsion), Bhayanakam (fear), Roudram (anger), Veeram (pride), Karunam (sympathy), and Shantham (peace). These expressions are mapped to real-world human emotions for better classification, using face detection and extraction. Considerable experimentation on preprocessing and classification was done to maximize accuracy: a CNN achieved 90% accuracy, and adding a fuzzy preprocessing algorithm, chosen to conserve the pixel distribution for better object recognition and analysis, raised the accuracy to 93%.
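A minimal sketch of a nine-class expression CNN of the kind described above; the architecture, grayscale input, and every layer size are assumptions rather than details from the paper.

```python
import torch.nn as nn

NAVARASAS = ["Adbhutam", "Hasyam", "Sringaram", "Bheebatsam", "Bhayanakam",
             "Roudram", "Veeram", "Karunam", "Shantham"]

# Small CNN classifying a cropped (assumed grayscale) face image into the nine Navarasas.
# Outputs raw logits; train with nn.CrossEntropyLoss.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, len(NAVARASAS)),
)
```

The fuzzy preprocessing mentioned in the abstract would be applied to the face crops before they are fed to such a network; its details are not given in the abstract and are not sketched here.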


2021 ◽  
Vol 4 (1) ◽  
pp. 74-80
Author(s):  
Murat Silahtaroğlu ◽  
Serkan Dereli

Drowsiness is one of the underlying causes of driving accidents, leading to serious injuries and deaths every year. According to experts, almost 30% of all traffic accidents are caused by drowsiness. To avoid these accidents, a proper system is required to prevent the driver from falling asleep. This study proposes a real-time image processing-based system for recognizing the drowsy face expression of a vehicle driver. The method detects the exact positions of the facial landmarks of both the left and right eyes using dlib and the eye aspect ratio algorithm. After detecting drowsy eyes, the system gives an audible alert to keep the vehicle driver awake throughout the driving journey.
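A minimal sketch of the eye aspect ratio (EAR) check on dlib's 68-point facial landmarks; the 0.25 threshold and the consecutive-frame count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: the six (x, y) landmark points of one eye in dlib's 68-point ordering.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.25      # assumed: below this the eye is treated as closed
CLOSED_FRAMES_ALERT = 48  # assumed: about 2 s at 24 fps before sounding the alarm

def update_drowsiness(landmarks, closed_frames):
    """landmarks: list of 68 (x, y) points from dlib's shape predictor.
    Returns (updated closed-frame count, whether to trigger the audible alert)."""
    ear_a = eye_aspect_ratio(landmarks[36:42])  # one eye (points 36-41)
    ear_b = eye_aspect_ratio(landmarks[42:48])  # the other eye (points 42-47)
    if (ear_a + ear_b) / 2.0 < EAR_THRESHOLD:
        closed_frames += 1
    else:
        closed_frames = 0
    return closed_frames, closed_frames >= CLOSED_FRAMES_ALERT
```

In a live loop, each video frame would be passed through dlib's face detector and shape predictor, and the alert sound played whenever the second return value is true.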


2021 ◽  
pp. 1-13
Author(s):  
Guojiang Han ◽  
Caikou Chen ◽  
Zhixuan Xu ◽  
Shengwei Zhou

Ensemble learning using a set of deep convolutional neural networks (DCNNs) as weak classifiers has become a powerful tool for face expression recognition. Nevertheless, training a DCNN-based ensemble is not only time consuming but also gives rise to high redundancy due to the nature of DCNNs. In this paper, a novel DCNN-based ensemble method, named weighted ensemble with angular feature learning (WDEA), is proposed to improve the computational efficiency and diversity of the ensemble. Specifically, the proposed ensemble consists of four parts: input layer, trunk layers, diversity layers, and loss fusion. Among them, the trunk layers, which extract the local features of face images, are shared by the diversity layers, so that low-level redundancy is largely reduced. The independent branches enable the diversity of the ensemble. Rather than the traditional softmax loss, the angular softmax loss is employed to extract more discriminative deep feature representations. Moreover, a novel weighting technique is proposed to enhance the diversity of the ensemble. Extensive experiments were performed on CK+ and AffectNet. Experimental results demonstrate that the proposed WDEA outperforms existing ensemble learning methods in recognition rate and computational efficiency.
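A minimal sketch of the fusion step for a weighted ensemble of branch classifiers, assuming the per-branch weights are already given; the paper's own weighting technique and its angular softmax loss are not reproduced here.

```python
import torch
import torch.nn.functional as F

def weighted_ensemble_predict(branch_logits, weights):
    """branch_logits: list of (batch, n_classes) logit tensors, one per diversity branch.
    weights: 1-D tensor with one raw weight per branch (assumed given; the paper
    learns these with its own weighting technique)."""
    weights = F.softmax(weights, dim=0)                                  # normalize to sum to 1
    probs = torch.stack([F.softmax(l, dim=1) for l in branch_logits])    # (n_branches, B, C)
    fused = (weights.view(-1, 1, 1) * probs).sum(dim=0)                  # weighted average
    return fused.argmax(dim=1)                                           # predicted class per sample
```

In the architecture described above, each `branch_logits` entry would come from one diversity branch sitting on top of the shared trunk layers.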


Author(s):  
Sneha K Lalitha ◽  
J Aishwarya ◽  
Neha Shivakumar ◽  
Tangudu Srilekha ◽  
G C R Kartheek

Author(s):  
Anand Mohan

This system is designed and developed to detect a person's stress index on the basis of emotion recognition and analysis of the face. It is a simple application that uses the front camera of a smartphone or computer and does not require any other external hardware. It has been developed with a major focus on students and the young generation, and somewhat less on adults, because the young generation is more prone to overuse of smart devices. The methodology is efficient and simple: the application runs in the background and takes pictures of the user at intervals defined by a timing graph. These images are converted into compatible images and stored in a database, whose URL is returned after a successful operation. The timing graph is a function over time that determines when the consecutive photo captures of the user are initiated; the capture rate increases over time, since stress becomes more likely as the duration of usage increases. The seven major emotions the face can express are Happy, Sad, Angry, Disgust, Neutral, Fear, and Surprise, analyzed with the Microsoft Azure Emotion API. These expressions are combined in a probabilistic manner, with the priority weightage given in the weight table (table no. 1) assigned to each fetched emotion, and the stress index is computed from the returned set of the seven major emotion face expressions.
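A minimal sketch of the weighted scoring step described above. The weights below are placeholders (the paper's actual weightage is in its table no. 1, which is not reproduced here), and the Azure Emotion API call is omitted because its client interface is not given in the abstract.

```python
# Hypothetical per-emotion weights: stress-related emotions weigh more.
EMOTION_WEIGHTS = {
    "happy": 0.0, "neutral": 0.1, "surprise": 0.3, "disgust": 0.6,
    "fear": 0.8, "sad": 0.9, "angry": 1.0,
}

def stress_index(emotion_scores):
    """emotion_scores: dict of emotion -> probability (summing to roughly 1),
    as returned by an emotion-analysis service. Returns a weighted score in [0, 1]."""
    total = sum(emotion_scores.values()) or 1.0
    return sum(EMOTION_WEIGHTS.get(e, 0.0) * p for e, p in emotion_scores.items()) / total

# Purely illustrative usage with made-up scores:
scores = {"happy": 0.05, "neutral": 0.25, "sad": 0.40, "angry": 0.20, "fear": 0.10}
print(round(stress_index(scores), 2))
```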

