Changes in ERPs with relevance between emotional state and facial expressions

Author(s):  
Hiromichi Takehara ◽  
Tatsuya Iwaki


Author(s):  
Kamal Naina Soni

Abstract: Human facial expressions play an important role in inferring an individual's emotional state. They help determine a person's current state and mood, since the emotion an individual is experiencing can be extracted and understood from various features of the face such as the eyes, cheeks, forehead, or even the curve of the smile. A survey confirmed that people use music as a form of expression and often relate to a particular piece of music according to their emotions. Considering how music affects the human brain and body, our project extracts the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to that mood is presented to the user. This can help lift or calm the individual's mood, and it delivers suitable songs more quickly, saving the time spent searching for them; in parallel, we are developing software that can be used anywhere and plays music according to the detected emotion.
Keywords: Music, Emotion recognition, Categorization, Recommendations, Computer vision, Camera
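The abstract describes this pipeline only at a high level. As a minimal sketch of the emotion-to-playlist step (the classifier is a stub and none of the names, labels, or song files below come from the paper), it could look like this:

```python
# Illustrative sketch only: map a detected facial emotion to a playlist.
# `detect_emotion` stands in for any face-emotion classifier (e.g., a CNN
# trained on a public expression dataset); it is not the paper's method.
import random

# Hypothetical mapping from detected emotion to candidate songs.
PLAYLISTS = {
    "happy":   ["upbeat_song_1.mp3", "upbeat_song_2.mp3"],
    "sad":     ["soothing_song_1.mp3", "soothing_song_2.mp3"],
    "angry":   ["calming_song_1.mp3", "calming_song_2.mp3"],
    "neutral": ["ambient_song_1.mp3", "ambient_song_2.mp3"],
}

def detect_emotion(frame) -> str:
    """Placeholder for a real face-emotion classifier; returns one of the
    PLAYLISTS keys for the face found in a camera frame."""
    return "neutral"  # stub so the sketch runs end to end

def recommend(frame, k: int = 2) -> list[str]:
    """Pick k songs matching the emotion detected in a camera frame."""
    emotion = detect_emotion(frame)
    songs = PLAYLISTS.get(emotion, PLAYLISTS["neutral"])
    return random.sample(songs, min(k, len(songs)))

print(recommend(frame=None))  # e.g. ['ambient_song_2.mp3', 'ambient_song_1.mp3']
```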


2012 ◽  
Vol 19 (1) ◽  
pp. 3-13
Author(s):  
Rafael A. M. Gonçalves ◽  
Diego R. Cueva ◽  
Marcos R. Pereira-Barretto ◽  
Fabio G. Cozman

2012 ◽  
Vol 29 (5) ◽  
pp. 533-541 ◽  
Author(s):  
Sylvain Clément ◽  
Audrey Tonini ◽  
Fatiha Khatir ◽  
Loris Schiaratura ◽  
Séverine Samson

In this study, we examined short- and longer-term effects of musical and cooking interventions on the emotional well-being of patients with severe Alzheimer's disease (AD). These two pleasurable activities (i.e., listening to music, tasting sweets) that were performed collectively (i.e., playing music together, collaboratively preparing a cake) were compared in two matched groups of patients with AD (N = 14). Each intervention lasted four weeks (two sessions per week), and its effects were regularly assessed up to four weeks after the end of the intervention. We repeatedly evaluated the emotional state of both groups before, during, and after the intervention periods by analyzing discourse content and facial expressions from short filmed interviews as well as caregivers' judgments of mood. The results reveal short-term benefits of both the music and cooking interventions on emotional state across all of these measures, but long-term benefits were only evident after the music intervention. The present findings suggest that non-pharmacological approaches offer promising methods to improve the quality of life of patients with dementia and that music stimulation is particularly effective at producing long-lasting effects on patients' emotional well-being.


Author(s):  
Intars Nikonovs ◽  
Juris Grants ◽  
Ivars Kravalis

The aim of the research was to evaluate emotional state before the ski hike, after it, and on the following day. The distance was 24 km and the hike lasted 8 hours. To assess the ski hikers' emotions, three tests were conducted: before the ski hike, after the ski hike, and 16 hours after the hike. Emotional state was assessed by two different methods: one measured the participants' dynamics of emotional state subjectively using a questionnaire, while the other used an objective method, analyzing the person's facial expressions. The results showed an improved emotional state on both measures, although with the objective method the improvement in positive emotions was greater on the day after the ski hike.


Author(s):  
Kostas Karpouzis ◽  
Athanasios Drosopoulos ◽  
Spiros Ioannou ◽  
Amaryllis Raouzaiou ◽  
Nicolas Tsapatsoulis ◽  
...  

Emotionally-aware Man-Machine Interaction (MMI) systems are presently at the forefront of interest in the computer vision and artificial intelligence communities, since they give less technology-aware people the opportunity to use computers more efficiently, overcoming fears and preconceptions. Most emotion-related facial and body gestures are considered to be universal, in the sense that they are recognized across different cultures; therefore, the introduction of an “emotional dictionary” that includes descriptions and perceived meanings of facial expressions and body gestures, so as to help infer the likely emotional state of a specific user, can enhance the affective nature of MMI applications (Picard, 2000).
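The “emotional dictionary” idea can be illustrated as a simple lookup structure; the cues, emotion labels, and weights below are illustrative assumptions, not taken from the chapter:

```python
# Minimal sketch of an "emotional dictionary": a lookup from observed
# facial/body cues to plausible emotional states with rough weights.
from typing import Dict

EMOTIONAL_DICTIONARY: Dict[str, Dict[str, float]] = {
    "raised_lip_corners": {"joy": 0.8, "contentment": 0.2},
    "lowered_brows":      {"anger": 0.6, "concentration": 0.4},
    "raised_inner_brows": {"sadness": 0.5, "fear": 0.5},
    "shoulders_shrugged": {"uncertainty": 0.9},
}

def infer_emotion(observed_cues: list[str]) -> Dict[str, float]:
    """Accumulate weights for every emotion suggested by the observed cues."""
    scores: Dict[str, float] = {}
    for cue in observed_cues:
        for emotion, weight in EMOTIONAL_DICTIONARY.get(cue, {}).items():
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return scores

# Example: a user showing raised lip corners and a shrug.
print(infer_emotion(["raised_lip_corners", "shoulders_shrugged"]))
```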


2021 ◽  
Author(s):  
Yael Hanein

Facial expressions play a major role in human communication and provide a window into an individual's emotional state. While facial expressions can be consciously manipulated to conceal true emotions, very brief leaked expressions may occur, exposing one's true internal state. Leaked expressions are therefore considered an important hallmark in deception detection, a field with enormous social and economic impact. Challengingly, capturing these subtle and brief expressions has so far been limited to visual examination (manual or machine-based), with almost no electromyography evidence. In this investigation we set out to explore whether electromyography of leaked expressions can be faithfully recorded with specially designed wearable electrodes. Indeed, using soft multi-electrode-array-based facial electromyography, we were able to record localized and brief signals in individuals instructed to suppress smiles. The electromyography evidence was validated with high-speed video recordings. The recording approach reported here provides a new and sensitive tool for investigating leaked expressions and a basis for improved automated systems.
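The paper's own signal-processing pipeline is not given in the abstract; a common generic recipe for flagging brief, localized EMG bursts of this kind is band-pass filtering, rectification, smoothing, and thresholding. The sketch below follows that recipe, with the sampling rate, cut-offs, baseline window, and threshold all as assumptions:

```python
# Hedged sketch (not the paper's method): detect brief EMG bursts in a
# single facial EMG channel by band-pass filtering, rectifying, smoothing,
# and thresholding against a baseline segment.
import numpy as np
from scipy.signal import butter, filtfilt

def detect_bursts(emg: np.ndarray, fs: float = 1000.0,
                  band=(20.0, 450.0), thresh_sd: float = 4.0) -> np.ndarray:
    """Return sample indices where the smoothed EMG envelope exceeds
    baseline mean + thresh_sd * baseline standard deviation."""
    b, a = butter(4, band, btype="bandpass", fs=fs)      # EMG band-pass
    filtered = filtfilt(b, a, emg)
    envelope = np.abs(filtered)                          # full-wave rectify
    b_lp, a_lp = butter(4, 5.0, btype="lowpass", fs=fs)  # 5 Hz smoothing
    envelope = filtfilt(b_lp, a_lp, envelope)
    baseline = envelope[: int(fs)]                       # first second as baseline
    threshold = baseline.mean() + thresh_sd * baseline.std()
    return np.flatnonzero(envelope > threshold)
```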


2014 ◽  
Author(s):  
Shuichi Hashiguchi ◽  
Motoyasu Honma ◽  
Yoshiya Moriguchi ◽  
Kenichi Kuriyama ◽  
Takashi Tsuzuki

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2166
Author(s):  
Geesung Oh ◽  
Junghwan Ryu ◽  
Euiseok Jeong ◽  
Ji Hyun Yang ◽  
Sungwook Hwang ◽  
...  

In intelligent vehicles, it is essential to monitor the driver's condition, and recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies focused on facial expression recognition to monitor the driver's emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose the driver's real emotion recognizer (DRER), a deep learning-based algorithm that recognizes the driver's real emotions even when they cannot be completely identified from facial expressions. The proposed algorithm comprises two models: (i) a facial expression recognition model, which follows a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver's real emotional state. We categorized the driver's emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase in accuracy compared to using only electrodermal activity. In conclusion, our proposed method achieves 86.8% accuracy in recognizing the driver's induced emotion while driving.
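The abstract describes the fusion model only at block-diagram level. A minimal late-fusion sketch (not the published DRER architecture; layer sizes, feature dimensions, and the number of emotion classes are assumptions) might look like:

```python
# Minimal sketch, NOT the published DRER architecture: a late-fusion model
# that concatenates facial-expression probabilities with EDA features and
# classifies the emotional state.
import torch
import torch.nn as nn

class FusionEmotionClassifier(nn.Module):
    def __init__(self, n_expressions: int = 7, n_eda_features: int = 8,
                 n_emotions: int = 4):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(n_expressions + n_eda_features, 64),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(64, n_emotions),
        )

    def forward(self, expr_probs: torch.Tensor, eda_feats: torch.Tensor):
        # expr_probs: (batch, n_expressions) softmax output of a face CNN
        # eda_feats:  (batch, n_eda_features) summary statistics of the EDA signal
        return self.fusion(torch.cat([expr_probs, eda_feats], dim=1))

# Example forward pass with random tensors.
model = FusionEmotionClassifier()
logits = model(torch.rand(2, 7), torch.rand(2, 8))
print(logits.shape)  # torch.Size([2, 4])
```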


2021 ◽  
Vol 11 (1) ◽  
pp. 106
Author(s):  
Ana R. Andreu-Perez ◽  
Mehrin Kiani ◽  
Javier Andreu-Perez ◽  
Pratusha Reddy ◽  
Jaime Andreu-Abela ◽  
...  

With an increase in consumer demand for video gaming entertainment, the game industry is exploring novel forms of game interaction, such as providing direct interfaces between the game and the gamers' cognitive or affective responses. In this work, gamers' brain activity was imaged using functional near-infrared spectroscopy (fNIRS) while they watched videos of a video game (League of Legends) that they play. A video of each participant's face was also recorded for each of a total of 15 trials, where a trial is defined as watching one gameplay video. From the data collected, i.e., the gamers' fNIRS data in combination with emotional state estimates from their facial expressions, the expertise level of the gamers was decoded per trial in a multi-modal framework comprising unsupervised deep feature learning and classification by state-of-the-art models. The best tri-class classification accuracy, 91.44%, was obtained using a cascade of the random convolutional kernel transform (ROCKET) feature extraction method and a deep classifier. This is the first work that aims to decode the expertise level of gamers using non-restrictive and portable technologies for brain imaging and emotional state recognition derived from gamers' facial expressions. This work has profound implications for novel designs of future human interactions with video games and brain-controlled games.
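The ROCKET feature-extraction step can be sketched with the sktime implementation; since the deep classifier is not specified in the abstract, a linear ridge classifier stands in here, and the trial, channel, and timepoint dimensions are illustrative assumptions:

```python
# Hedged sketch of ROCKET feature extraction on multi-channel fNIRS trials,
# followed by a simple linear classifier (the paper uses a deep classifier).
# Data shapes and channel counts below are assumptions, and random data
# stands in for real recordings.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import Rocket

n_trials, n_channels, n_timepoints = 15, 24, 1200        # illustrative sizes
X = np.random.randn(n_trials, n_channels, n_timepoints)  # fNIRS time series
y = np.random.randint(0, 3, size=n_trials)               # three expertise levels

rocket = Rocket(num_kernels=10_000, random_state=0)
features = rocket.fit_transform(X)                        # random convolutional kernels

clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(features, y)
print("training accuracy:", clf.score(features, y))
```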

