Event-related EEG oscillatory responses elicited by dynamic facial expression

2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Tuba Aktürk ◽  
Tom A. de Graaf ◽  
Yasemin Abra ◽  
Sevilay Şahoğlu-Göktaş ◽  
Dilek Özkan ◽  
...  

Abstract Background Recognition of facial expressions (FEs) plays a crucial role in social interactions. Most studies on FE recognition use static (image) stimuli, even though real-life FEs are dynamic. FE processing is complex and multifaceted, and its neural correlates remain unclear. Transitioning from static to dynamic FE stimuli might help disentangle the neural oscillatory mechanisms underlying face processing and recognition of emotion expression. To our knowledge, we here present the first time–frequency exploration of oscillatory brain mechanisms underlying the processing of dynamic FEs. Results Videos of joyful, fearful, and neutral dynamic facial expressions were presented to 18 healthy young adults. We analyzed event-related activity in electroencephalography (EEG) data, focusing on the delta, theta, and alpha-band oscillations. Since the videos involved a transition from neutral to emotional expressions (onset around 500 ms), we identified time windows that might correspond to face perception initially (time window 1; first TW) and emotion expression recognition subsequently (around 1000 ms; second TW). The first TW showed increased power and phase-locking values for all frequency bands. In the second TW, power and phase-locking values were higher in the delta and theta bands for emotional FEs as compared to neutral FEs, thus potentially serving as a marker for emotion recognition in dynamic face processing. Conclusions Our time–frequency exploration revealed consistent oscillatory responses to complex, dynamic, ecologically meaningful FE stimuli. We conclude that while dynamic FE processing involves complex network dynamics, dynamic FEs were successfully used to reveal temporally separate oscillatory responses related to face processing and, subsequently, emotion expression recognition.
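The two measures named in this abstract, event-related power and the phase-locking value, can be sketched at a single frequency. The snippet below is a minimal illustration only, not the paper's pipeline: it convolves synthetic trials with a complex Morlet wavelet at an assumed 6 Hz theta centre frequency; the sampling rate, trial count, and wavelet width are illustrative choices.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's settings)
fs = 250                       # sampling rate (Hz)
n_trials, n_samples = 18, 500  # 18 trials of 2 s
t = np.arange(n_samples) / fs

rng = np.random.default_rng(0)
# Synthetic trials: a 6 Hz component with a clustered phase, plus noise
trials = np.array([
    np.sin(2 * np.pi * 6 * t + rng.normal(0, 0.3)) + rng.normal(0, 1, n_samples)
    for _ in range(n_trials)
])

def morlet(freq, fs, n_cycles=3):
    """Complex Morlet wavelet centred at `freq` (kernel shorter than the trial)."""
    dur = n_cycles / freq
    wt = np.arange(-dur, dur, 1 / fs)
    sigma = n_cycles / (2 * np.pi * freq)
    return np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))

w = morlet(6.0, fs)
# Analytic (complex) signal per trial at 6 Hz
analytic = np.array([np.convolve(tr, w, mode="same") for tr in trials])
# Event-related power: trial-averaged squared magnitude
power = np.mean(np.abs(analytic) ** 2, axis=0)
# Phase-locking value: magnitude of the trial-averaged unit phasor (0..1)
plv = np.abs(np.mean(analytic / np.abs(analytic), axis=0))
```

A real analysis would repeat this over a bank of frequencies (delta, theta, alpha) and baseline-correct the power, but the two quantities are exactly these.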

Author(s):  
Yang Gao ◽  
Yincheng Jin ◽  
Seokmin Choi ◽  
Jiyang Li ◽  
Junjie Pan ◽  
...  

Accurate recognition of facial expressions and emotional gestures holds promise for understanding an audience's feedback on, and engagement with, entertainment content. Existing methods rely primarily on cameras or wearable sensors, which either raise privacy concerns or demand extra devices. To this end, we propose a novel ubiquitous sensing system based on a commodity microphone array, SonicFace, which provides an accessible, unobtrusive, contact-free, and privacy-preserving solution for continuously monitoring a user's emotional expressions without emitting audible sound. SonicFace pairs a speaker with a microphone array to recognize fine-grained facial expressions and emotional hand gestures from emitted ultrasound and the received echoes. In our experimental evaluations, the accuracy of recognizing 6 common facial expressions and 4 emotional gestures reached around 80%. Moreover, extensive system evaluations under distinct configurations and an extended real-life case study demonstrate the robustness and generalizability of the proposed SonicFace system.
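The core sensing idea, emit ultrasound and locate reflecting surfaces from the echo, can be sketched with a matched filter. This is a generic acoustic-ranging illustration, not SonicFace's actual processing: the sample rate, the 18–22 kHz chirp band, the echo attenuation, and the 3 ms round-trip delay are all assumptions.

```python
import numpy as np

fs = 48_000                                        # assumed sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)                     # 10 ms linear chirp
chirp = np.sin(2 * np.pi * (18_000 * t + (22_000 - 18_000) / (2 * t[-1]) * t**2))

# Simulate a single attenuated echo with a 3 ms round-trip delay plus noise
delay_s = 0.003
delay_n = int(delay_s * fs)
rx = np.zeros(len(t) + delay_n + 200)
rx[delay_n:delay_n + len(t)] += 0.4 * chirp
rx += np.random.default_rng(1).normal(0, 0.05, len(rx))

# Matched filter: cross-correlate the received signal with the emitted chirp;
# the correlation peak marks the echo's arrival time
corr = np.correlate(rx, chirp, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))
range_m = est_delay / fs * 343 / 2                 # speed of sound ~343 m/s
```

A system like the one described would track how such echo profiles change over time as the face and hands move, and feed those changes to a classifier.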


2020 ◽  
Vol 33 (3-6) ◽  
pp. 113-138
Author(s):  
Audrey Masson ◽  
Guillaume Cazenave ◽  
Julien Trombini ◽  
Martine Batt

In recent years, due to its great economic and social potential, the recognition of facial expressions linked to emotions has become one of the most flourishing applications in the field of artificial intelligence, and has been the subject of many developments. However, despite significant progress, this field is still subject to many theoretical debates and technical challenges. It therefore seems important to make a general inventory of the different lines of research and to present a synthesis of recent results in this field. To this end, we have carried out a systematic review of the literature according to the guidelines of the PRISMA method. A search of 13 documentary databases identified a total of 220 references over the period 2014–2019. After a global presentation of the current systems and their performance, we grouped and analyzed the selected articles in the light of the main problems encountered in the field of automated facial expression recognition. The conclusion of this review highlights the strengths, limitations and main directions for future research in this field.


2011 ◽  
Vol 268-270 ◽  
pp. 471-475
Author(s):  
Sungmo Jung ◽  
Seoksoo Kim

Many 3D films use facial expression recognition technologies. With the existing techniques, a large number of markers must be attached to the face, a camera is fixed in front of it, and the movements of the markers are calculated. However, the markers capture only changes in the regions to which they are attached, which hinders realistic recognition of facial expressions. This study therefore extracted a preliminary eye region from 320*240 images by defining specific location values for the eye, and then selected the final eye region from within the preliminary region. The study proposes an improved method of detecting an eye region that reduces errors arising from noise.
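The two-stage idea, crop a preliminary eye region from the 320*240 frame using fixed location values and then refine it, can be sketched as follows. The proportional coordinates below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

W, H = 320, 240
img = np.zeros((H, W), dtype=np.uint8)   # stand-in for a grayscale face frame

# Stage 1: preliminary region -- eyes typically lie in an upper-middle band
# of a roughly centred face (band proportions are assumptions)
y0, y1 = int(0.25 * H), int(0.50 * H)
x0, x1 = int(0.15 * W), int(0.85 * W)
prelim = img[y0:y1, x0:x1]

# Stage 2: split the band into left/right candidate windows as the final
# eye regions (a real system would score candidates to reject noise)
mid = prelim.shape[1] // 2
left_eye, right_eye = prelim[:, :mid], prelim[:, mid:]
```

Restricting all subsequent processing to the refined windows is what reduces the influence of noise elsewhere in the frame.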


Author(s):  
Esam Taha Yassen ◽  
Alaa Abdulkhar Jihad ◽  
Sudad H. Abed

<span>Over the last decade, many nature-inspired algorithms have received considerable attention from practitioners and researchers for handling optimization problems. The lion optimization algorithm (LA) is inspired by the distinctive lifestyle of lions and their collective behavior in social groups, and has been presented as a powerful algorithm for solving various optimization problems. In this paper, the LA is applied to one of the most popular and widespread real-life optimization problems, the team orienteering problem with time windows (TOPTW). However, like any population-based metaheuristic, the LA is very efficient at exploring the search space but inefficient at exploiting it. This paper therefore proposes an enhanced LA for the TOPTW that retains the algorithm's strong exploration while improving its exploitation. The enhancement improves the territorial-defense process so that a strong trespassing nomadic lion can take over a pride by fighting its resident males, yielding an improved LA (ILA). The obtained solutions were compared with the best known and standard results from earlier studies. The experimental tests verify the effectiveness of the ILA in solving the TOPTW: it obtained very competitive results compared to the LA and state-of-the-art methods across all tested instances.</span>


Author(s):  
Mahima Agrawal ◽  
Shubangi. D. Giripunje ◽  
P. R. Bajaj

This paper presents an efficient method for recognizing facial expressions in video. The work proposes a highly efficient facial expression recognition system using PCA optimized by a genetic algorithm. Reduced computational time and comparable recognition accuracy are the benchmarks of this work. Video sequences contain more information than still images, including the activity during the expression itself, and are therefore the current research focus. We use PCA, a statistical method, to reduce dimensionality and extract features, applying covariance analysis to generate the eigen-components of the images. The eigen-component features are then optimized by the genetic algorithm to reduce the computational cost.
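The PCA-plus-GA pipeline described above can be sketched in miniature. This is a generic illustration, not the paper's implementation: the data are random stand-ins, and the fitness (retained variance minus a per-component cost) is an assumed proxy for the recognition-accuracy fitness a real FER system would use.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))          # stand-in: 100 samples, 20 features

# PCA via eigen-decomposition of the covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]       # sort by descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def fitness(mask):
    # Assumed proxy fitness: retained variance fraction minus component cost
    return eigvals[mask.astype(bool)].sum() / eigvals.sum() - 0.02 * mask.sum()

# Minimal GA over bit-strings selecting which eigen-components to keep
pop = rng.integers(0, 2, size=(30, 20))
for _ in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]        # truncation selection
    cuts = rng.integers(1, 19, size=30)            # one-point crossover
    pairs = rng.integers(0, 10, size=(30, 2))
    pop = np.array([np.concatenate([parents[a][:c], parents[b][c:]])
                    for (a, b), c in zip(pairs, cuts)])
    flip = rng.random(pop.shape) < 0.02            # bit-flip mutation
    pop = np.where(flip, 1 - pop, pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
projected = Xc @ eigvecs[:, best.astype(bool)]     # GA-selected feature space
```

In the actual system the fitness would be classification accuracy on held-out frames, so the GA trades eigen-component count against recognition performance rather than raw variance.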


2019 ◽  
Vol 8 (2S11) ◽  
pp. 1076-1079

Automated facial expression recognition can greatly improve the human–machine interface. Many deep learning approaches have been applied in recent years owing to their outstanding recognition accuracy after training on large amounts of data. In this research, we enhanced a convolutional neural network (CNN) to recognize the 6 basic emotions and compared several preprocessing methods to show their influence on CNN performance: resizing, mean subtraction, normalization, standard deviation, scaling, and edge detection. Face detection as the single preprocessing phase achieved a significant result, with 100% accuracy, compared with the other preprocessing phases and raw data.
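The compared preprocessing steps can be sketched on a single grayscale image. These are generic numpy stand-ins (block-mean resizing, a finite-difference edge detector) rather than the exact operations benchmarked in the paper; a real comparison would feed each variant through the same CNN and compare accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(96, 96)).astype(np.float64)  # stand-in face

def resize_half(x):
    """2x2 block-mean downsampling (naive stand-in for interpolated resizing)."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def mean_subtract(x):
    return x - x.mean()

def standardize(x):
    return (x - x.mean()) / x.std()        # zero mean, unit standard deviation

def scale01(x):
    return (x - x.min()) / (x.max() - x.min())  # min-max scaling to [0, 1]

def edge_detect(x):
    """Gradient-magnitude edges (finite-difference stand-in for Sobel)."""
    gy, gx = np.gradient(x)
    return np.hypot(gx, gy)

variants = {
    "resized": resize_half(img),
    "mean": mean_subtract(img),
    "standardized": standardize(img),
    "scaled": scale01(img),
    "edges": edge_detect(img),
}
```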


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important role in reading what a person implies, especially in the field of health, so research in this field advances human–robot communication. The topic has been discussed extensively, and progress in deep learning, together with the proven efficiency of convolutional neural networks (CNNs) in image processing, has led to the use of CNNs for the recognition of facial expressions. An automatic facial expression recognition (FER) system must detect and locate faces in a cluttered scene, extract features, and classify them. In this research, a CNN performs the FER process. The target is to label each facial image with one of the seven emotion categories in the JAFFE database: sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs of different depths using grayscale images from the JAFFE database. The accuracy of the proposed system was 100%.
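The forward pass of a small CNN of the kind described, convolution, ReLU, pooling, and a dense softmax over the seven classes, can be sketched in plain numpy. The filter count, kernel size, and 48x48 input are illustrative assumptions; the paper's exact depths are not reproduced here, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((48, 48))                 # stand-in for a grayscale face image

def conv2d(x, k):
    """Valid 2-D cross-correlation of image x with kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling (truncates odd trailing rows/columns)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

kernels = rng.normal(0, 0.1, size=(4, 3, 3))          # 4 random 3x3 filters
feats = np.stack([maxpool2(np.maximum(conv2d(img, k), 0)) for k in kernels])
flat = feats.ravel()

W = rng.normal(0, 0.01, size=(7, flat.size))          # dense layer -> 7 classes
logits = W @ flat
probs = np.exp(logits - logits.max())                 # stable softmax
probs /= probs.sum()
```

Training would fit the kernels and dense weights by backpropagation over the labeled JAFFE images; deeper variants simply stack more conv/pool stages before the classifier.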


Geophysics ◽  
2009 ◽  
Vol 74 (2) ◽  
pp. WA123-WA135 ◽  
Author(s):  
Carl Reine ◽  
Mirko van der Baan ◽  
Roger Clark

Frequency-based methods for measuring seismic attenuation are used commonly in exploration geophysics. To measure the spectrum of a nonstationary seismic signal, different methods are available, including transforms with time windows that are either fixed or systematically varying with the frequency being analyzed. We compare four time-frequency transforms and show that the choice of a fixed- or variable-window transform affects the robustness and accuracy of the resulting attenuation measurements. For fixed-window transforms, we use the short-time Fourier transform and Gabor transform. The S-transform and continuous wavelet transform are analyzed as the variable-length transforms. First we conduct a synthetic transmission experiment, and compare the frequency-dependent scattering attenuation to the theoretically predicted values. From this procedure, we find that variable-window transforms reduce the uncertainty and bias of the resulting attenuation estimate, specifically at the upper and lower ends of the signal bandwidth. Our second experiment measures attenuation from a zero-offset reflection synthetic using a linear regression of spectral ratios. Estimates for constant-[Formula: see text] attenuation obtained with the variable-window transforms depend less on the choice of regression bandwidth, resulting in a more precise attenuation estimate. These results are repeated in our analysis of surface seismic data, where we also find that the attenuation measurements made by variable-window transforms match their expected trend with offset more closely. We conclude that time-frequency transforms with a systematically varying time window, such as the S-transform and continuous wavelet transform, allow for more robust estimates of seismic attenuation. Peaks and notches in the measured spectrum are reduced because the analyzed primary signal is better isolated from the coda, and because of the high-frequency spectral smoothing implicit in the use of short analysis windows.
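The spectral-ratio regression at the heart of the second experiment can be sketched directly: for constant Q, the log ratio of an attenuated spectrum to a reference spectrum is linear in frequency with slope -pi*dt/Q. The Q value, travel-time difference, reference spectrum shape, and regression band below are illustrative assumptions.

```python
import numpy as np

fs, n = 1000, 512
f = np.fft.rfftfreq(n, 1 / fs)              # frequency axis (Hz)
Q_true, dt = 50.0, 0.4                      # assumed constant Q, extra travel time (s)

# Reference amplitude spectrum (assumed Gaussian-shaped wavelet spectrum)
ref = np.exp(-(f - 60.0) ** 2 / (2 * 25.0 ** 2)) + 1e-6
# Attenuated spectrum under the constant-Q model
atten = ref * np.exp(-np.pi * f * dt / Q_true)

# Linear regression of the log spectral ratio over a chosen bandwidth;
# the abstract's point is that this band choice matters less for
# variable-window transforms
band = (f > 20) & (f < 100)
slope, _ = np.polyfit(f[band], np.log(atten[band] / ref[band]), 1)
Q_est = -np.pi * dt / slope
```

With noise-free model spectra the regression recovers Q exactly; on real windowed data, spectral peaks and notches from the coda perturb the ratio, which is why the choice of time-frequency transform affects the estimate.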


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Gilles Vannuscorps ◽  
Michael Andres ◽  
Alfonso Caramazza

What mechanisms underlie facial expression recognition? A popular hypothesis holds that efficient facial expression recognition cannot be achieved by visual analysis alone but additionally requires a mechanism of motor simulation — an unconscious, covert imitation of the observed facial postures and movements. Here, we first discuss why this hypothesis does not necessarily follow from extant empirical evidence. Next, we report experimental evidence against the central premise of this view: we demonstrate that individuals can achieve normotypical efficient facial expression recognition despite a congenital absence of relevant facial motor representations and, therefore, unaided by motor simulation. This underscores the need to reconsider the role of motor simulation in facial expression recognition.


2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Shota Uono ◽  
Wataru Sato ◽  
Reiko Sawada ◽  
Sayaka Kawakami ◽  
Sayaka Yoshimura ◽  
...  

People with schizophrenia or subclinical schizotypal traits exhibit impaired recognition of facial expressions. However, it remains unclear whether the detection of emotional facial expressions is impaired in people with schizophrenia or high levels of schizotypy. The present study examined whether the detection of emotional facial expressions would be associated with schizotypy in a non-clinical population after controlling for the effects of IQ, age, and sex. Participants were asked to respond to whether all faces were the same as quickly and as accurately as possible following the presentation of angry or happy faces or their anti-expressions among crowds of neutral faces. Anti-expressions contain a degree of visual change that is equivalent to that of normal emotional facial expressions relative to neutral facial expressions and are recognized as neutral expressions. Normal expressions of anger and happiness were detected more rapidly and accurately than their anti-expressions. Additionally, the degree of overall schizotypy was negatively correlated with the effectiveness of detecting normal expressions versus anti-expressions. An emotion–recognition task revealed that the degree of positive schizotypy was negatively correlated with the accuracy of facial expression recognition. These results suggest that people with high levels of schizotypy experienced difficulties detecting and recognizing emotional facial expressions.

