Recognition of Human Emotion from a Speech Signal Based on Plutchik's Model

2012 ◽  
Vol 58 (2) ◽  
pp. 165-170 ◽  
Author(s):  
Dorota Kamińska ◽  
Adam Pelikant

Machine recognition of human emotional states is an essential part of improving man-machine interaction. During expressive speech, the voice conveys a semantic message as well as information about the emotional state of the speaker. The pitch contour is one of the most significant properties of speech affected by the emotional state; therefore, pitch features have been commonly used in systems for automatic emotion detection. In this work, different intensities of emotions and their influence on pitch features have been studied, an understanding that is important for developing such a system. Intensities of emotions are represented on Plutchik's cone-shaped 3D model. The k-Nearest Neighbor algorithm has been used for classification, divided into two stages: first the primary emotion is detected, then its intensity is specified. The results show that the recognition accuracy of the system is over 50% for primary emotions and over 70% for their intensities.
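The two-stage k-NN scheme the abstract describes can be sketched as follows. This is a toy illustration, not the authors' implementation: the two-dimensional "pitch features" (assumed here to be mean F0 in Hz and its standard deviation), the training points, and the Plutchik intensity labels are all invented for the example.

```python
# Hypothetical sketch of two-stage k-NN emotion classification.
# Feature vectors and labels are illustrative, not the paper's data.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest (Euclidean) labelled vectors."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Stage 1 training set: (pitch-feature vector, primary emotion).
primary_train = [
    ((210.0, 35.0), "joy"), ((220.0, 40.0), "joy"), ((205.0, 30.0), "joy"),
    ((140.0, 10.0), "sadness"), ((150.0, 12.0), "sadness"), ((145.0, 8.0), "sadness"),
]
# Stage 2 training sets, one per primary emotion: (vector, Plutchik intensity).
intensity_train = {
    "joy": [((205.0, 30.0), "serenity"), ((220.0, 40.0), "ecstasy"),
            ((212.0, 36.0), "ecstasy")],
    "sadness": [((150.0, 12.0), "pensiveness"), ((140.0, 8.0), "grief"),
                ((143.0, 9.0), "grief")],
}

query = (215.0, 38.0)                                     # e.g. mean F0, F0 std
primary = knn_predict(primary_train, query)               # stage 1: primary emotion
intensity = knn_predict(intensity_train[primary], query)  # stage 2: its intensity
print(primary, intensity)                                 # → joy ecstasy
```

Splitting the problem this way keeps each classifier's label space small, which is one plausible reason the reported intensity accuracy (over 70%) exceeds the primary-emotion accuracy.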

2011 ◽  
pp. 175-200 ◽  
Author(s):  
Kostas Karpouzis ◽  
Amaryllis Raouzaiou ◽  
Athanasios Drosopoulos ◽  
Spiros Ioannou ◽  
Themis Balomenos ◽  
...  

This chapter presents a holistic approach to emotion modeling and analysis and their applications in man-machine interaction. Beginning from a symbolic representation of human emotions in this context, based on their expression via facial expressions and hand gestures, we show that it is possible to transform quantitative feature information from video sequences into an estimation of a user's emotional state. While these features can be used for simple representation purposes, in our approach they provide feedback on the user's emotional state, with the aim of enabling next-generation interfaces that can recognize the emotional states of their users.


2014 ◽  
Vol 27 (3) ◽  
pp. 375-387 ◽  
Author(s):  
Vlado Delic ◽  
Milan Gnjatovic ◽  
Niksa Jakovljevic ◽  
Branislav Popovic ◽  
Ivan Jokic ◽  
...  

This paper considers the research question of developing user-aware and adaptive conversational agents. A conversational agent is user-aware to the extent that it recognizes the user's identity and those of his/her emotional states that are relevant in a given interaction domain, and user-adaptive to the extent that it dynamically adapts its dialogue behavior to the user and his/her emotional state. The paper summarizes some aspects of our previous work and presents work in progress in the field of speech-based human-machine interaction. It focuses particularly on the development of speech recognition modules in cooperation with modules for emotion recognition and speaker recognition, as well as the dialogue management module. Finally, it proposes an architecture for a conversational agent that integrates these modules and improves each of them by exploiting synergies among them.
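The module integration described above can be sketched at the interface level. Everything here is hypothetical: the recognizers are stubs keyed on a toy "audio" dictionary rather than real models, and the adaptation rule in the dialogue manager is invented to illustrate how speaker and emotion recognition can feed dialogue behavior.

```python
# Hypothetical sketch of a user-aware, user-adaptive agent pipeline.
# Recognizer stubs stand in for real speech/emotion/speaker models.
from dataclasses import dataclass

@dataclass
class UserState:
    speaker_id: str
    emotion: str

def recognize_speaker(audio):        # speaker-recognition stub
    return audio.get("speaker", "unknown")

def recognize_emotion(audio):        # emotion-recognition stub
    return audio.get("emotion", "neutral")

def recognize_speech(audio, state):  # ASR stub; a real module could bias its
    return audio["text"]             # models using `state` (the "synergy")

def dialogue_manager(utterance, state):
    # User-adaptive policy: soften the reply for a negative emotional state.
    if state.emotion == "anger":
        return f"I understand, {state.speaker_id}. Let me fix that."
    return f"Sure, {state.speaker_id}: handling '{utterance}'."

audio = {"speaker": "alice", "emotion": "anger", "text": "cancel my order"}
state = UserState(recognize_speaker(audio), recognize_emotion(audio))
reply = dialogue_manager(recognize_speech(audio, state), state)
print(reply)
```

The design point is that all modules read and write one shared user state, which is what lets each module improve the others rather than run in isolation.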


2017 ◽  
Vol 76 (2) ◽  
pp. 71-79 ◽  
Author(s):  
Hélène Maire ◽  
Renaud Brochard ◽  
Jean-Luc Kop ◽  
Vivien Dioux ◽  
Daniel Zagar

Abstract. This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants’ performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
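A minimal simulation conveys how the diffusion model (Ratcliff, 1978) decomposes response times. This is a sketch, not the authors' fitting procedure; mapping the non-decision time Ter to the physiological/attentional-orienting component is an assumption made here for illustration, and all parameter values are arbitrary.

```python
# Minimal diffusion-model sketch: evidence accumulates noisily from 0 until it
# hits +boundary ("word") or -boundary ("non-word"); RT adds non-decision time Ter.
import random

def diffusion_trial(drift=0.3, boundary=1.0, ter=0.30, dt=0.001, rng=random):
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return ("word" if x > 0 else "non-word", t + ter)

mean = lambda xs: sum(xs) / len(xs)

# Same seeded noise sequence in both conditions, so the accumulation paths are
# identical and only Ter differs: the mean RT difference is exactly 0.15 s.
rng_a = random.Random(0)
fast = [diffusion_trial(ter=0.30, rng=rng_a)[1] for _ in range(100)]
rng_b = random.Random(0)
slow = [diffusion_trial(ter=0.45, rng=rng_b)[1] for _ in range(100)]
print(round(mean(slow) - mean(fast), 2))  # → 0.15
```

The separability shown here is the point of the method: a shift confined to Ter moves the whole RT distribution without changing accuracy or RT spread, which is how the model can attribute an effect to encoding/orienting rather than to the decision process itself.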


Author(s):  
Mohammed R. Elkobaisi ◽  
Fadi Al Machot

Abstract. The use of IoT-based Emotion Recognition (ER) systems is in increasing demand in many domains such as active and assisted living (AAL), health care, and industry. Combining emotion and context in a unified system could broaden the scope of human support, but this is currently a challenging task due to the lack of a common interface capable of providing such a combination. We therefore aim to provide a novel approach based on a modeling language that can be used even by caregivers or non-experts to model human emotion with respect to context for human support services. The proposed approach is based on a Domain-Specific Modeling Language (DSML), which helps to integrate different IoT data sources in an AAL environment and consequently provides conceptual support related to the current emotional state of the observed subject. For the evaluation, we apply the well-validated System Usability Scale (SUS) to show that the proposed modeling language performs well in terms of usability and learnability. Furthermore, we evaluate the runtime performance of model instantiation by measuring execution time using well-known IoT services.
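To make the DSML idea concrete, here is a tiny rule language in its spirit: a caregiver-readable sentence couples an emotional state with IoT context and a support action. The concrete syntax, sensor names, and action are invented for this sketch and are not the paper's actual language.

```python
# Hypothetical mini-DSL coupling emotion and IoT context to a support action.
# Syntax invented for illustration; not the DSML proposed in the paper.
import re

RULE = re.compile(r"when emotion is (\w+) and (\w+) is (\w+) then (\w+)")

def parse_rule(text):
    """Parse one rule sentence into emotion, context condition, and action."""
    m = RULE.fullmatch(text.strip())
    if not m:
        raise ValueError(f"bad rule: {text!r}")
    emotion, sensor, value, action = m.groups()
    return {"emotion": emotion, "context": {sensor: value}, "action": action}

def matches(rule, emotion, context):
    """True when the sensed emotion and IoT context satisfy the rule."""
    return (emotion == rule["emotion"]
            and all(context.get(k) == v for k, v in rule["context"].items()))

rule = parse_rule("when emotion is sadness and light is off then turn_on_light")
if matches(rule, "sadness", {"light": "off"}):
    print(rule["action"])  # → turn_on_light
```

The usability argument in the abstract rests on exactly this property: a non-expert writes the sentence, and the system handles the integration of emotion recognition and IoT data behind it.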


2021 ◽  
Author(s):  
Natalia Albuquerque ◽  
Daniel S. Mills ◽  
Kun Guo ◽  
Anna Wilkinson ◽  
Briseida Resende

Abstract. The ability to infer emotional states and their wider consequences requires the establishment of relationships between the emotional display and subsequent actions. These abilities, together with the use of emotional information from others in social decision making, are cognitively demanding and require inferential skills that extend beyond the immediate perception of the current behaviour of another individual. They may include predictions of the significance of the emotional states being expressed. These abilities were previously believed to be exclusive to primates. In this study, we presented adult domestic dogs with a social interaction between two unfamiliar people, which could be positive, negative or neutral. After passively witnessing the actors engaging silently with each other and with the environment, dogs were given the opportunity to approach a food resource that varied in accessibility. We found that the available emotional information was more relevant than the motivation of the actors (i.e. giving something or receiving something) in predicting the dogs’ responses. Thus, dogs were able to access implicit information from the actors’ emotional states and appropriately use the affective information to make context-dependent decisions. The findings demonstrate that a non-human animal can actively acquire information from emotional expressions, infer some form of emotional state and use this functionally to make decisions.


Semiotica ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Amitash Ojha ◽  
Charles Forceville ◽  
Bipin Indurkhya

Abstract. Both mainstream and art comics often use various flourishes surrounding characters’ heads. These so-called “pictorial runes” (also called “emanata”) help convey the emotional states of the characters. In this paper, using (manipulated) panels from Western and Indian comic albums as well as neutral emoticons and basic shapes in different colors, we focus on the following two issues: (a) whether runes increase the awareness in comics readers about the emotional state of the character; and (b) whether a correspondence can be found between the types of runes (twirls, spirals, droplets, and spikes) and specific emotions. Our results show that runes help communicate emotion. Although no one-to-one correspondence was found between the tested runes and specific emotions, it was found that droplets and spikes indicate generic emotions, spirals indicate negative emotions, and twirls indicate confusion and dizziness.


2021 ◽  
Author(s):  
Olga A. Loskutova ◽  
Anastasia V. Nenko ◽  
Yana A. Berg ◽  
Daria V. Borovikova ◽  
Anton V. Yupashevsky

2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Jose Maria Garcia-Garcia ◽  
Víctor M. R. Penichet ◽  
María Dolores Lozano ◽  
Juan Enrique Garrido ◽  
Effie Lai-Chong Law

Affective computing is becoming increasingly important, as it extends the possibilities of computing technologies by incorporating emotions. In fact, detecting users’ emotions has become one of the most important aspects of affective computing. In this paper, we present an educational software application that incorporates affective computing by detecting users’ emotional states and adapting its behaviour to the emotions sensed. In this way, we aim to increase users’ engagement and keep them motivated for longer periods of time, thus improving their learning progress. To test this, the application was assessed with real users: the performance of a set of users working with the proposed system was compared with that of a control group using the same system without emotion detection. The outcomes of this evaluation show that the proposed system, incorporating affective computing, produced better results than the one used by the control group.
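An adaptation loop of the kind the abstract describes can be sketched in a few lines. The emotion labels and the difficulty-adjustment rules below are invented for illustration; the paper does not specify its adaptation policy.

```python
# Hypothetical emotion-driven adaptation loop for an educational application.
# Labels and rules are illustrative, not the paper's actual policy.
def adapt(difficulty, emotion):
    """Nudge exercise difficulty from the sensed emotional state."""
    if emotion == "frustration":
        return max(1, difficulty - 1)  # too hard: ease off
    if emotion == "boredom":
        return difficulty + 1          # too easy: push harder
    return difficulty                  # engaged: keep the current level

level = 3
for sensed in ["engaged", "frustration", "frustration", "boredom"]:
    level = adapt(level, sensed)
print(level)  # → 2
```

The control-group comparison in the study isolates exactly this loop: both groups see the same exercises, and only the feedback path from sensed emotion to difficulty differs.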

