Emotion Detection
Recently Published Documents

TOTAL DOCUMENTS: 853 (FIVE YEARS: 419)
H-INDEX: 28 (FIVE YEARS: 9)

2022 ◽ Vol 73 ◽ pp. 103407
Author(s): Vaishali M. Joshi, Rajesh B. Ghongade, Aditi M. Joshi, Rushikesh V. Kulkarni

2022 ◽ Vol 3 (2) ◽ pp. 1-22
Author(s): Ye Gao, Asif Salekin, Kristina Gordon, Karen Rose, Hongning Wang, ...

The rapid development of machine learning for acoustic signal processing has resulted in many solutions for detecting emotions from speech. Early works were developed for clean, acted speech and a fixed set of emotions, and, importantly, the datasets and solutions assumed that a person exhibited only one of these emotions. More recent work has continually added realism to emotion detection by considering issues such as reverberation, de-amplification, and background noise, but often considers one dataset at a time and assumes that all emotions are accounted for in the model. We significantly improve realistic considerations for emotion detection by (i) assessing different situations more comprehensively by combining five common publicly available datasets into one and enhancing the new dataset with data augmentation that considers reverberation and de-amplification, (ii) incorporating 11 typical home noises into the acoustics, and (iii) recognizing that in real situations a person may exhibit emotions that are not currently of interest, which should neither be forced into a pre-fixed category nor improperly labeled. Our novel solution combines a CNN with out-of-distribution detection. Our solution increases the situations in which emotions can be effectively detected and outperforms a state-of-the-art baseline.
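The abstract does not specify the out-of-distribution mechanism, but one common approach is to reject low-confidence predictions by thresholding the classifier's maximum softmax probability. The sketch below illustrates that idea with hypothetical emotion labels and a hypothetical threshold; it is not the authors' implementation.

```python
import math

# Hypothetical label set for illustration only.
EMOTIONS = ["anger", "fear", "happiness", "sadness", "neutral"]

def softmax(logits):
    """Convert raw classifier scores to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_with_rejection(logits, threshold=0.7):
    """Return the predicted emotion, or 'unknown' when the maximum softmax
    probability falls below the threshold (treated as out-of-distribution)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return EMOTIONS[best] if probs[best] >= threshold else "unknown"
```

A sharply peaked logit vector is accepted and labeled; a near-flat one, as produced by an input unlike anything in the training distribution, is rejected as "unknown" rather than forced into a category.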


2022 ◽ Vol 12 (2) ◽ pp. 807
Author(s): Huafei Xiao, Wenbo Li, Guanzhong Zeng, Yingzhang Wu, Jiyong Xue, ...

With the development of intelligent automotive human-machine systems, driver emotion detection and recognition has become an emerging research topic. Facial expression-based emotion recognition approaches have achieved outstanding results on laboratory-controlled data; however, such data cannot represent the environment of real driving situations. To address this, this paper proposes a facial expression-based on-road driver emotion recognition network called FERDERnet. This method divides the on-road driver facial expression recognition task into three modules: a face detection module that detects the driver’s face, an augmentation-based resampling module that performs data augmentation and resampling, and an emotion recognition module that adopts a deep convolutional neural network pre-trained on the FER and CK+ datasets and then fine-tuned as a backbone for driver emotion recognition. The method adopts five different backbone networks as well as an ensemble method. To evaluate the proposed approach, this paper collected an on-road driver facial expression dataset containing various road scenarios and the corresponding driver facial expressions during the driving task, and performed experiments on it. Based on efficiency and accuracy, the proposed FERDERnet with an Xception backbone was effective in identifying on-road driver facial expressions and obtained superior performance compared to the baseline networks and some state-of-the-art networks.
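As a rough structural sketch only, the three-module pipeline described above might be wired together as follows. Every function here is a hypothetical stub standing in for the actual face detector, augmentation-based resampler, and fine-tuned CNN backbone; only the data flow reflects the abstract.

```python
def detect_face(frame):
    """Module 1: face detection -- crop the driver's face from a frame (stub)."""
    return frame  # a real system would run a face detector here

def augment_and_resample(faces):
    """Module 2: data augmentation and resampling to balance classes (stub)."""
    return faces

def recognize_emotion(face):
    """Module 3: a CNN backbone fine-tuned for driver emotions (stub)."""
    return "neutral"

def ferdernet_pipeline(frames):
    """Chain the three modules over a sequence of camera frames."""
    faces = [detect_face(f) for f in frames]
    faces = augment_and_resample(faces)
    return [recognize_emotion(f) for f in faces]
```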


Author(s): Yashwanth D

Automatic face detection technology has brought many improvements to the evolving world. A smart attendance system using real-time face recognition is a real-world solution for the daily task of recording student attendance. Managing an attendance register by hand can place a great burden on teachers. To resolve this issue, we use an automatic, smart attendance system implemented with the help of the biometric technique of face detection. The main steps in this kind of system are face detection and recognition of the detected faces. Face detection is the process by which the system identifies the human faces captured by the camera. Here, we implement an automated attendance management system for the students of a class using face recognition.
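A minimal sketch of the attendance-marking step, assuming a hypothetical face recognizer that returns the IDs of students matched in the camera frame (the paper itself gives no implementation details):

```python
def mark_attendance(recognized_ids, roster):
    """Mark each enrolled student present (True) or absent (False), given
    the set of student IDs the face recognizer matched on camera."""
    recognized = set(recognized_ids)
    return {student: student in recognized for student in roster}
```

The recognizer's output replaces the manual roll call; the teacher only reviews the resulting present/absent record.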


2022
Author(s): Manuela Filippa, Doris Lima, Alicia Grandjean, Carolina Labbé, Selim Coll, ...

Abstract Background: Emotional prosody is the result of the dynamic variation of acoustical non-verbal aspects of language that allow people to convey and recognize emotions. Understanding how this recognition develops from childhood to adolescence is the goal of the present paper. We also aim to test the maturation of the ability to perceive mixed emotions in the voice. Methods: We tested 133 children and adolescents, aged between 6 and 17 years old, exposed to four kinds of emotional (anger, fear, happiness, and sadness) and neutral linguistically meaningless stimuli. Participants were asked to judge the type and degree of perceived emotion on continuous scales. Results: By means of a general linear mixed model analysis, as predicted, a significant interaction between age and emotion was found. The ability to recognize emotions increased significantly with age for all emotional and neutral vocalizations. Girls recognized anger better than boys, who instead confused fear with neutral prosody more than girls did. Across all ages, only marginally significant differences were found between anger, happiness, and neutral versus sadness, which was more difficult to recognize. Finally, as age increased, participants were significantly more likely to attribute mixed emotions to emotional prosody, showing the progressive complexification of the emotional content representation perceived in emotional prosody. Conclusions: The ability to identify basic emotions from linguistically meaningless stimuli develops from childhood to adolescence. Interestingly, this maturation was evidenced not only in the accuracy of emotion detection but also in a complexification of emotion attribution in prosody.


2022 ◽ Vol 31 (1) ◽ pp. 113-126
Author(s): Jia Guo

Abstract Emotion recognition has arisen as an essential field of study that can expose a variety of valuable inputs. Emotion can be expressed through several observable means, such as speech, facial expressions, written text, and gestures. Emotion recognition in a text document is fundamentally a content-based classification problem, drawing on notions from natural language processing (NLP) and deep learning. Hence, in this study, deep learning assisted semantic text analysis (DLSTA) has been proposed for human emotion detection using big data. Emotion detection from textual sources can be performed using notions of natural language processing. Word embeddings are extensively utilized for several NLP tasks, such as machine translation, sentiment analysis, and question answering. NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The numerical outcomes demonstrate that the suggested method achieves a markedly superior human emotion detection rate of 97.22% and a classification accuracy of 98.02% compared with different state-of-the-art methods, and can be enhanced with other emotional word embeddings.
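As an illustration of the word-embedding idea mentioned above (not the DLSTA method itself), one simple baseline averages the embeddings of a text's words and assigns the emotion whose centroid is nearest by cosine similarity. The toy two-dimensional vectors below are invented for demonstration; real systems use pretrained embeddings such as word2vec or GloVe with hundreds of dimensions.

```python
import math

# Toy 2-d "embeddings", invented for demonstration only.
EMBEDDINGS = {
    "furious": (1.0, 0.0), "angry": (0.9, 0.1),
    "joyful":  (0.0, 1.0), "happy": (0.1, 0.9),
}
EMOTION_CENTROIDS = {"anger": (1.0, 0.0), "joy": (0.0, 1.0)}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def detect_emotion(text):
    """Average the embeddings of known words, then return the emotion
    whose centroid is most similar; None if no word is recognized."""
    vecs = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    if not vecs:
        return None
    mean = tuple(sum(c) / len(vecs) for c in zip(*vecs))
    return max(EMOTION_CENTROIDS,
               key=lambda e: cosine(mean, EMOTION_CENTROIDS[e]))
```

A deep model such as the one the abstract describes replaces the centroid comparison with a learned classifier over the same kind of embedded input.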


2021
Author(s): Afia Fairoose Abedin, Amirul Islam Al Mamun, Rownak Jahan Nowrin, Amitabha Chakrabarty, Moin Mostakim, ...

In recent times, a large number of people have become involved in establishing their own businesses. Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply within a fraction of a second. Though chatbots perform well in task-oriented activities, in most cases they fail to understand personalized opinions, statements, or even queries, which later harms the organization through poor service management. A lack of understanding in bots discourages people from continuing conversations with them. Usually, chatbots give absurd responses when they are unable to interpret a user’s text accurately. By extracting client reviews from conversations, organizations can reduce the major gap of understanding between users and the chatbot and improve the quality of their products and services. Thus, in our research we incorporated all the key elements that are necessary for a chatbot to analyse and understand an input text precisely and accurately. We performed sentiment analysis, emotion detection, intent classification and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence. The efficiency of our approach is demonstrated by the detailed analysis.
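The four analyses named above can be combined into a single message-understanding step. The sketch below shows only the plumbing; each helper is a naive, hypothetical stand-in for the trained deep-learning models the authors use.

```python
def sentiment(text):
    """Stub sentiment model: crude negation check."""
    return "negative" if "not" in text.lower().split() else "positive"

def emotion(text):
    """Stub emotion model: keyword match."""
    return "anger" if "angry" in text.lower() else "neutral"

def intent(text):
    """Stub intent classifier: question vs. statement."""
    return "question" if text.strip().endswith("?") else "statement"

def entities(text):
    """Stub NER: capitalised words after the first token."""
    words = text.split()
    return [w.strip("?.,!") for w in words[1:] if w[:1].isupper()]

def analyse_message(text):
    """Merge sentiment, emotion, intent, and entity analyses into one
    record a chatbot can act on."""
    return {
        "sentiment": sentiment(text),
        "emotion": emotion(text),
        "intent": intent(text),
        "entities": entities(text),
    }
```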

