The emotional state through visual expression, auditory expression and physiological representation

2021 · Vol. 119 · pp. 05008
Author(s):  
Benyoussef Abdellaoui ◽  
Aniss Moumen ◽  
Younes El Bouzekri El Idrissi ◽  
Ahmed Remaida

As emotional content reflects human behaviour, automatic emotion recognition is a topic of growing interest. During the communication of an emotional message, physiological signals and facial expressions offer several advantages: they can help us better understand a person's personality and psychopathology and can inform both human communication and human-machine interaction. In this article, we present some notions about identifying the emotional state through visual expression, auditory expression and physiological representation, and the techniques used to measure emotions.

Author(s):  
Rama Chaudhary ◽  
Ram Avtar Jaswal

Human-machine interaction technology has advanced considerably in recognizing human emotional states from physiological signals. Emotional states can also be recognized from facial expressions, but this does not always give accurate results. For example, detecting the facial expression of a sad person is not fully reliable, because a sad expression can also include frustration, irritation, anger, and so on, making it difficult to determine the particular emotion. Emotion recognition using the electroencephalogram (EEG) and electrocardiogram (ECG) has therefore attracted much attention, as these modalities are based on brain and heart signals, respectively. After analyzing these factors, we decided to recognize emotional states from EEG using the DEAP dataset, so that better accuracy can be achieved.
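A minimal sketch of the kind of pipeline the abstract describes, assuming the commonly distributed preprocessed Python version of the DEAP dataset (per-subject .dat pickles with a 'data' array of trials × channels × samples and a 'labels' array of trial ratings); band-power features are fed to an SVM to separate high from low valence. The file name, the 128 Hz sampling rate, the channel count, and the valence threshold of 5 follow the usual DEAP description but should be checked against the actual data; this is not the paper's exact method.

```python
# Hypothetical sketch: valence classification from DEAP EEG with band-power features + SVM.
import pickle
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128  # preprocessed DEAP EEG is commonly described as downsampled to 128 Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial):
    """Mean power per frequency band and EEG channel for one trial (channels x samples)."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

# 'data' is typically (40 trials, 40 channels, 8064 samples); the first 32 channels are EEG.
with open("s01.dat", "rb") as f:                 # path is illustrative
    subject = pickle.load(f, encoding="latin1")

X = np.array([band_powers(trial[:32]) for trial in subject["data"]])
y = (subject["labels"][:, 0] > 5).astype(int)    # column 0 = valence; threshold at midpoint

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```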


Author(s):  
Sanghamitra Mohanty ◽  
Basanta Kumar Swain

Communication is intelligible when the conveyed message is interpreted correctly. Such interpretation is generally possible in human-human communication, but it is laborious for human-machine communication, because non-verbal content such as emotion is inherently blended into vocal communication, which makes human-machine interaction difficult. In this research paper we performed experiments to recognize the emotions anger, sadness, astonishment, fear, happiness and neutral using the fuzzy K-Means algorithm on Oriya elicited speech collected from 35 Oriya-speaking people aged between 22 and 58 years, belonging to different provinces of Orissa. We achieved an accuracy of 65.16% in recognizing the above six emotions by incorporating mean pitch, the first two formants, jitter, shimmer and energy as feature vectors. Emotion recognition has many vivid applications in different domains, such as call centres, spoken tutoring systems, spoken dialogue research, and human-robotic interfaces.
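A rough sketch of the feature side of such a pipeline, assuming WAV recordings of the elicited utterances. Mean pitch and energy are computed with librosa; the formants, jitter and shimmer used in the paper would typically come from a tool such as Praat/parselmouth and are omitted here. A crisp k-means is used as a stand-in for the paper's fuzzy K-Means clustering, and the corpus path and cluster count are illustrative.

```python
# Hypothetical sketch: prosodic features per utterance, then unsupervised clustering into six emotions.
import glob
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def utterance_features(path):
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)   # fundamental-frequency track
    mean_pitch = np.nanmean(f0)                                 # mean pitch over voiced frames
    energy = librosa.feature.rms(y=y).mean()                    # mean short-time energy
    # Formants, jitter and shimmer (used in the paper) would be added here, e.g. via parselmouth.
    return [mean_pitch, energy]

paths = sorted(glob.glob("oriya_speech/*.wav"))                 # illustrative corpus location
X = StandardScaler().fit_transform([utterance_features(p) for p in paths])

# Six clusters for the six target emotions; the paper uses fuzzy K-Means, here plain k-means.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])
```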


2021 · Vol. 8 (5) · pp. 949
Author(s):  
Fitra A. Bachtiar ◽  
Muhammad Wafi

Human-machine interaction, especially facial behaviour, is increasingly considered as a means of user personalization. A combination of feature extraction and a classification model can be used so that a machine can recognize human facial expressions. However, it is not yet known which classification method is best suited to this task. This study compares three classification methods for facial expression recognition. The JAFFE dataset is used, with a total of 213 facial images showing seven facial expressions: anger, disgust, fear, happy, neutral, sadness, and surprised. Facial Landmark detection is used for facial feature extraction. The classification models used in this study are ELM, SVM, and k-NN. The best hyperparameters for each model are searched using 80% of the data with 5-fold cross-validation. The models are then tested on the remaining 20% of the data and evaluated with accuracy, F1 score, and computation time. The best hyperparameters are 40 hidden neurons for ELM, a parameter value of 10^5 with 200 iterations for SVM, and 3 neighbours for k-NN. With these parameters, ELM is the best algorithm among the three classification models, achieving an accuracy of 0.76 and an F1 score of 0.76, with a computation time of 6.97×10⁻³ seconds.
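A condensed sketch of the comparison described above, assuming facial-landmark feature vectors X (e.g. flattened landmark coordinates per JAFFE image) and expression labels y are already computed and saved; the SVM and k-NN parts use scikit-learn with a 5-fold grid search on an 80% split, while the ELM classifier would need a separate implementation and is omitted. The file names and parameter grids are illustrative, not the paper's exact settings.

```python
# Hypothetical sketch: tune SVM and k-NN on landmark features with 5-fold CV, test on a 20% hold-out.
import time
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

# X: (n_images, n_landmark_features), y: expression labels -- assumed precomputed elsewhere.
X, y = np.load("jaffe_landmarks.npy"), np.load("jaffe_labels.npy")   # illustrative file names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

candidates = {
    "SVM": GridSearchCV(SVC(), {"C": [1, 10, 1e3, 1e5], "kernel": ["rbf", "linear"]}, cv=5),
    "k-NN": GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}, cv=5),
}

for name, search in candidates.items():
    search.fit(X_tr, y_tr)                       # 5-fold grid search on the 80% split
    start = time.perf_counter()
    pred = search.best_estimator_.predict(X_te)  # evaluate on the 20% hold-out
    elapsed = time.perf_counter() - start
    print(name, search.best_params_,
          "acc=%.2f" % accuracy_score(y_te, pred),
          "f1=%.2f" % f1_score(y_te, pred, average="macro"),
          "predict time=%.2e s" % elapsed)
```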


2021
Author(s):  
Talieh Seyed Tabtabae

Automatic Emotion Recognition (AER) is an emerging research area in the Human-Computer Interaction (HCI) field. As computers become more popular every day, the study of interaction between humans (users) and computers is attracting more attention. In order to have a more natural and friendly interface between humans and computers, it would be beneficial to give computers the ability to recognize situations the same way a human does. Equipped with an emotion recognition system, computers would be able to recognize their users' emotional state and react appropriately. In today's HCI systems, machines can recognize the speaker and the content of the speech using speaker identification and speech recognition techniques. If machines are also equipped with emotion recognition techniques, they can know "how it is said" and react more appropriately, making the interaction more natural. One of the most important human communication channels is the auditory channel, which carries speech and vocal intonation; people can perceive each other's emotional state by the way they talk. Therefore, in this work speech signals are analyzed in order to build an automatic system that recognizes the human emotional state. Six discrete emotional states are considered: anger, happiness, fear, surprise, sadness, and disgust. A set of novel spectral features is proposed in this contribution. Two approaches are applied and the results are compared. In the first approach, acoustic features are extracted from consecutive frames along the speech signals, and the statistical values of these features constitute the feature vectors. A Support Vector Machine (SVM), a relatively recent approach in the field of machine learning, is used to classify the emotional states. In the second approach, spectral features are extracted from non-overlapping, logarithmically spaced frequency sub-bands, and sequence-discriminant SVMs are adopted in order to make use of all the extracted information. The empirical results show that the employed techniques are very promising.
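A simplified sketch of the first approach described above, assuming a labelled emotional-speech corpus organised as one folder per emotion: frame-level spectral features (here MFCCs as a generic stand-in for the thesis's novel spectral features) are summarised by per-utterance statistics and classified with an SVM. The corpus layout and feature choice are assumptions for illustration only.

```python
# Hypothetical sketch: utterance-level statistics of frame-wise spectral features, classified with an SVM.
import glob
import os
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_vector(path):
    y, sr = librosa.load(path, sr=16000)
    frames = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # frame-wise spectral features
    # Statistics over consecutive frames form the fixed-length feature vector, as in the first approach.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

# One sub-folder per emotion, e.g. speech_corpus/anger/*.wav (illustrative layout).
paths = sorted(glob.glob("speech_corpus/*/*.wav"))
labels = np.array([os.path.basename(os.path.dirname(p)) for p in paths])
X = np.array([utterance_vector(p) for p in paths])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```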


Author(s):  
Kostas Karpouzis ◽  
Athanasios Drosopoulos ◽  
Spiros Ioannou ◽  
Amaryllis Raouzaiou ◽  
Nicolas Tsapatsoulis ◽  
...  

Emotionally-aware Man-Machine Interaction (MMI) systems are presently at the forefront of interest of the computer vision and artificial intelligence communities, since they give less technology-aware people the opportunity to use computers more efficiently, overcoming fears and preconceptions. Most emotion-related facial and body gestures are considered universal, in the sense that they are recognized across different cultures; therefore, the introduction of an "emotional dictionary" that includes descriptions and perceived meanings of facial expressions and body gestures, so as to help infer the likely emotional state of a specific user, can enhance the affective nature of MMI applications (Picard, 2000).
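One way to read the "emotional dictionary" idea is as a lookup structure that maps observed facial or gestural cues to likely emotional states. The entries and cue names below are illustrative placeholders, not Picard's or the authors' actual dictionary.

```python
# Hypothetical sketch: a toy "emotional dictionary" mapping observed cues to candidate emotional states.
EMOTIONAL_DICTIONARY = {
    "smile + raised cheeks": ["happiness"],
    "inner brows raised + lip corners down": ["sadness"],
    "brows lowered + lips pressed": ["anger"],
    "brows raised + jaw drop": ["surprise"],
    "arms crossed + gaze averted": ["defensiveness", "discomfort"],
}

def infer_emotions(observed_cues):
    """Collect the candidate emotions for every cue the vision front-end reports."""
    candidates = []
    for cue in observed_cues:
        candidates.extend(EMOTIONAL_DICTIONARY.get(cue, ["unknown"]))
    return candidates

print(infer_emotions(["smile + raised cheeks", "arms crossed + gaze averted"]))
```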


2019 · Vol. 23 (3) · pp. 312-323
Author(s):  
F G Maylenova

Mechanisms that help people in their lives have existed for centuries, and every year they become not only more complex and refined but also smarter. It is impossible to imagine modern production without smart machines, but today, with the advent of android robots, their participation in our private lives, and consequently their influence on us, becomes much more obvious. The robots that are increasingly taking root in our lives are no longer perceived simply as mechanisms: we endow them with human character traits and often experience various emotions towards them. The appearance of a robot capable of experiencing (or still only imitating?) emotions can be considered a qualitatively new step in human life with robots. With such robots it will be possible to be friends, and to seek (and probably receive) support. It is expected that they will be able to relieve the loneliness of a wide variety of people, including disabled people and lonely elderly people, to help in caring for the sick, and at the same time to entertain them with conversation. Speaking about relationships with robots, it is difficult not to mention such an important aspect of human communication as sex, which, on the one hand, is not only a need, as in all living beings, but also the highest form of human love and intimacy, and on the other can exist and be satisfied completely separately from love. It is this duality of human nature that has contributed to the transformation of sex and the human body into a commodity and to the development of prostitution, pornography and the use of sexual images in advertising. The emergence of android robots can radically change our lives, including the most intimate areas of life. The development of the artificial-intelligence sex industry opens up a whole new era of human-machine interaction. When smart machines become not only comfortable and entertaining but literally enter our flesh, becoming our interlocutors, friends and lovers who share our feelings and interests, what will be the consequences of this unprecedented intimacy between man and machine?


Emotion recognition is a rapidly growing research field. Emotions can be effectively expressed through speech and can provide insight into a speaker's intentions. Although humans can easily interpret emotions from speech, physical gestures, and eye movement, training a machine to do the same with similar precision is quite a challenging task. Speech emotion recognition (SER) systems can improve human-machine interaction when used with automatic speech recognition, as emotions have the tendency to change the semantics of a sentence. Many researchers have contributed impressive work in this area, leading to the development of numerous classification techniques, feature selection and extraction methods, and emotional speech databases. This paper reviews recent accomplishments in the area of speech emotion recognition. It also presents a detailed review of various types of emotional speech databases, different classification techniques that can be used individually or in combination, and a brief description of various speech features for emotion recognition.


2021
Author(s):  
Yael Hanein

Facial expressions play a major role in human communication and provide a window into an individual's emotional state. While facial expressions can be consciously manipulated to conceal true emotions, very brief leaked expressions may occur, exposing one's true internal state. Leaked expressions are therefore considered an important hallmark in deception detection, a field with enormous social and economic impact. Challengingly, capturing these subtle and brief expressions has so far been limited to visual examination (manual or machine-based), with almost no electromyography evidence. In this investigation we set out to explore whether the electromyography of leaked expressions can be faithfully recorded with specially designed wearable electrodes. Indeed, using soft multi-electrode-array-based facial electromyography, we were able to record localized and brief signals in individuals instructed to suppress smiles. The electromyography evidence was validated with high-speed video recordings. The recording approach reported here provides a new and sensitive tool for leaked-expression investigations and a basis for improved automated systems.
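A minimal signal-processing sketch of how brief bursts of activity (candidate leaked expressions) might be isolated from a single facial-EMG channel: band-pass filtering followed by an amplitude envelope and a threshold. The sampling rate, band edges, threshold, and the synthetic test signal are illustrative assumptions, not the authors' recording parameters.

```python
# Hypothetical sketch: detect short EMG bursts (candidate leaked expressions) in one electrode channel.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 2000  # illustrative sampling rate in Hz

def detect_bursts(emg, fs=FS, band=(20.0, 450.0), thresh_sd=4.0):
    """Return sample indices where the EMG envelope exceeds a baseline-derived threshold."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg)             # keep the surface-EMG frequency band
    envelope = np.abs(hilbert(filtered))       # instantaneous amplitude
    threshold = envelope.mean() + thresh_sd * envelope.std()
    return np.flatnonzero(envelope > threshold)

# Synthetic example: low-level noise with one short burst around 1.0 s.
t = np.arange(0, 2.0, 1.0 / FS)
emg = 0.01 * np.random.randn(t.size)
emg[int(1.0 * FS):int(1.05 * FS)] += 0.2 * np.sin(2 * np.pi * 120 * t[:int(0.05 * FS)])

burst_samples = detect_bursts(emg)
print("burst between %.3f s and %.3f s" % (burst_samples[0] / FS, burst_samples[-1] / FS))
```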

