Affective Latent Representation of Acoustic and Lexical Features for Emotion Recognition

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2614
Author(s):  
Eesung Kim ◽  
Hyungchan Song ◽  
Jong Won Shin

In this paper, we propose a novel emotion recognition method based on the underlying emotional characteristics extracted by a conditional adversarial auto-encoder (CAAE), in which both acoustic and lexical features are used as inputs. The acoustic features are generated by calculating statistical functionals of low-level descriptors and by a deep neural network (DNN). These acoustic features are concatenated with three types of lexical features extracted from the text: a sparse representation, a distributed representation, and affective lexicon-based dimensions. Two-dimensional latent representations similar to vectors in the valence-arousal space are obtained by the CAAE and can be mapped directly onto the emotional classes without the need for a sophisticated classifier. In contrast to a previous attempt that applied a CAAE to acoustic features alone, the proposed approach enhances emotion recognition performance because the combined acoustic and lexical features provide sufficient discriminative power. Experimental results on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus showed that our method outperformed the previously reported best results on the same corpus, achieving 76.72% unweighted average recall.
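The direct mapping from a two-dimensional valence-arousal-like latent space onto emotion classes can be illustrated with a simple nearest-centroid rule; the centroid coordinates below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical per-class centroids in a valence-arousal-like 2-D latent space.
# These coordinates are illustrative assumptions, not taken from the paper.
CENTROIDS = {
    "happy":   np.array([ 0.8,  0.5]),   # high valence, moderate arousal
    "angry":   np.array([-0.6,  0.7]),   # low valence, high arousal
    "sad":     np.array([-0.7, -0.5]),   # low valence, low arousal
    "neutral": np.array([ 0.0,  0.0]),
}

def map_latent_to_emotion(z):
    """Assign the emotion whose centroid is nearest to the latent vector z."""
    z = np.asarray(z, dtype=float)
    return min(CENTROIDS, key=lambda k: np.linalg.norm(z - CENTROIDS[k]))

print(map_latent_to_emotion([0.7, 0.4]))  # nearest to the "happy" centroid
```

Once the CAAE places latent vectors near such class regions, no separate trained classifier is needed: class assignment reduces to a distance computation.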

2014 ◽  
Vol 989-994 ◽  
pp. 3851-3855
Author(s):  
Guang Jin Lai

Digital X-ray photography is performed under computer control: a one-dimensional or two-dimensional X-ray detector converts the captured image directly into digital signals, which are then processed with image-processing techniques, enabling image analysis. We introduce X-ray photography into terminal identification in track and field and use a clustering algorithm to improve computer image clustering. By capturing the digital signals of the runner's head, arms, and legs, the method enhances terminal recognition in track and field. Finally, we use MATLAB to compute the values of the captured X-ray images. The calculations show that motion capture and recognition from X-ray images are clearly improved, providing a theoretical basis for research on motion capture technology in track and field.
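The image clustering step described above can be sketched with a basic one-dimensional k-means over pixel intensities; the intensity values and the deterministic initialisation below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Basic 1-D k-means over pixel intensities (illustrative sketch)."""
    values = np.asarray(values, dtype=float)
    # Deterministic init: spread starting centres across the intensity range
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recompute centres
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Toy "image": dark background (~10), mid-grey tissue (~120), bright bone (~240)
pixels = np.array([8, 12, 10, 118, 125, 122, 238, 242, 240])
labels, centers = kmeans_1d(pixels, k=3)
```

Grouping intensities this way separates distinct regions (background, soft tissue, bone) so that body parts can be located in the digitised frame.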


2017 ◽  
Author(s):  
Zeshan Peng

With the advancement of machine learning methods, audio sentiment analysis has become an active research area in recent years. For example, business organizations are interested in persuasion tactics discernible from vocal cues and acoustic measures in speech. A typical approach is to find a set of acoustic features from audio data that can indicate or predict a customer's attitude, opinion, or emotional state. Acoustic features of audio signals have been widely used in many machine learning applications, such as music classification, language recognition, and emotion recognition. For emotion recognition, previous work shows that pitch and speech-rate features are important. This thesis focuses on determining sentiment from call center audio records, each containing a conversation between a sales representative and a customer. The sentiment of an audio record is considered positive if the conversation ended with an appointment being made, and negative otherwise. In this project, a data processing and machine learning pipeline for this problem has been developed. It consists of three major steps: 1) an audio record is split into segments by speaker turns; 2) acoustic features are extracted from each segment; and 3) classification models are trained on the acoustic features to predict sentiment. Different sets of features have been used, and different machine learning methods, including classical machine learning algorithms and deep neural networks, have been implemented in the pipeline. In our deep neural network method, the feature vectors of audio segments are stacked in temporal order into a feature matrix, which is fed into deep convolutional neural networks as input. Experimental results based on real data show that acoustic features such as Mel-frequency cepstral coefficients, timbre, and chroma features are good indicators of sentiment.
Temporal information in an audio record can be captured by deep convolutional neural networks for improved prediction accuracy.
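The stacking step of the pipeline (per-segment feature vectors arranged in temporal order into a feature matrix) can be sketched as follows; the toy features (mean, standard deviation, zero-crossing rate) are simple stand-ins for the MFCC, timbre, and chroma statistics the thesis uses.

```python
import numpy as np

def segment_features(segment):
    """Toy per-segment features: mean, std, and zero-crossing rate."""
    segment = np.asarray(segment, dtype=float)
    zcr = np.mean(np.abs(np.diff(np.sign(segment))) > 0)
    return np.array([segment.mean(), segment.std(), zcr])

def build_feature_matrix(segments):
    """Stack feature vectors of speaker-turn segments in temporal order."""
    return np.stack([segment_features(s) for s in segments])

rng = np.random.default_rng(0)
segments = [rng.standard_normal(1600) for _ in range(5)]  # 5 speaker turns
X = build_feature_matrix(segments)
print(X.shape)  # (5, 3): segments x features
```

Because the rows preserve temporal order, a 2-D convolution over this matrix can pick up patterns across consecutive speaker turns, which is how temporal information reaches the CNN.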


Author(s):  
Sebastijan Dumancic ◽  
Hendrik Blockeel

The goal of unsupervised representation learning is to extract a new representation of data, such that solving many different tasks becomes easier. Existing methods typically focus on vectorized data and offer little support for relational data, which additionally describes relationships among instances. In this work we introduce an approach for relational unsupervised representation learning. Viewing a relational dataset as a hypergraph, new features are obtained by clustering vertices and hyperedges. To find a representation suited for many relational learning tasks, a wide range of similarities between relational objects is considered, e.g. feature and structural similarities. We experimentally evaluate the proposed approach and show that models learned on such latent representations perform better, have lower complexity, and outperform the existing approaches on classification tasks.
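A minimal sketch of the hypergraph view: vertices with identical hyperedge participation are grouped, and one-hot cluster memberships become the new features. The incidence matrix and the equality-based grouping below are illustrative simplifications of the similarity-based clustering the paper performs.

```python
import numpy as np

# incidence[i, e] = 1 iff vertex i participates in hyperedge e
# (a relational dataset viewed as a hypergraph; values here are illustrative)
incidence = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 0],
])

def cluster_features(incidence):
    """Group vertices with identical hyperedge participation; one-hot encode."""
    rows = [tuple(r) for r in incidence]
    clusters = {pat: i for i, pat in enumerate(dict.fromkeys(rows))}
    features = np.zeros((len(rows), len(clusters)), dtype=int)
    for v, pat in enumerate(rows):
        features[v, clusters[pat]] = 1
    return features

F = cluster_features(incidence)  # new latent representation: one row per vertex
```

Downstream learners then operate on `F` instead of the raw relational structure, which is what makes the representation reusable across tasks.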


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 403
Author(s):  
Yajurv Bhatia ◽  
ASM Hossain Bari ◽  
Gee-Sern Jison Hsu ◽  
Marina Gavrilova

Motion capture sensor-based gait emotion recognition is an emerging sub-domain of human emotion recognition. Its applications span a variety of fields, including smart home design, border security, robotics, virtual reality, and gaming. In recent years, several deep learning-based approaches have been successful in solving the Gait Emotion Recognition (GER) problem. However, the vast majority of such methods rely on Deep Neural Networks (DNNs) with a significant number of model parameters, which leads to model overfitting as well as increased inference time. This paper contributes to the domain by proposing a new lightweight bi-modular architecture with handcrafted features that is trained using an RMSprop optimizer and stratified data shuffling. The method is highly effective at correctly inferring human emotions from gait, achieving a micro-mean average precision of 0.97 on the Edinburgh Locomotive Mocap Dataset. It outperforms all recent deep learning methods while having the lowest inference time, 16.3 milliseconds per gait sample. This research is beneficial to applications in various fields, such as emotionally aware assistive robotics, adaptive therapy and rehabilitation, and surveillance.
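The stratified data shuffling mentioned in the abstract can be sketched as a per-class shuffle-and-split, so every emotion class is represented proportionally in both partitions; the 80/20 ratio and the two-class setup below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stratified_split(labels, train_frac=0.8, seed=0):
    """Shuffle and split indices per class so class proportions are preserved."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # all samples of class c
        rng.shuffle(idx)                    # shuffle within the class
        cut = int(round(train_frac * len(idx)))
        train_idx.extend(idx[:cut])
        test_idx.extend(idx[cut:])
    return np.array(train_idx), np.array(test_idx)

labels = np.array([0] * 10 + [1] * 10)  # two emotion classes, 10 samples each
tr, te = stratified_split(labels)
```

Compared with a plain random shuffle, this guarantees no class is starved in either partition, which matters for per-class metrics such as the micro-mean average precision reported above.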


2021 ◽  
Author(s):  
Hugo Mitre-Hernandez ◽  
Rodolfo Ferro-Perez ◽  
Francisco Gonzalez-Hernandez

BACKGROUND The mental health effects of COVID-19 quarantine need to be addressed, because patients, relatives, and healthcare workers are living with negative emotional states. The clinical disorders of depression and anxiety evoke anger, fear, sadness, and disgust, and reduce happiness. Tracking emotions with the help of psychologists during online consultations (which reduce the risk of contagion) would therefore go a long way toward supporting mental health. Human micro-expressions reveal people's genuine emotions and can be captured by Deep Neural Network (DNN) models. The challenge is to deploy such models given the poor performance of part of society's computers and slow internet connections. OBJECTIVE This study aimed to create a useful and usable web application that records emotions on a patient's card in real time, with a small data transfer and a Convolutional Neural Network (CNN) model of low computational cost. METHODS To validate the low-computational-cost premise, we first compared DNN architectures, collecting the floating-point operations per second (FLOPS), the Number of Parameters (NP), and the accuracy of MobileNet, PeleeNet, the Extended Deep Neural Network (EDNN), the Inception-Based Deep Neural Network (IDNN), and our proposed Residual mobile-based Network (ResmoNet). Secondly, we compared the trained models in terms of Main Memory Utilization (MMU) and Response Time to complete the Emotion recognition (RTE). Finally, we designed a data transfer format that includes the raw emotion data and the patient's basic text information. The web application was evaluated with the System Usability Scale (SUS) and a utility questionnaire completed by psychologists and psychiatrists (experts). RESULTS All CNN models were trained and tested for 150 epochs, comparing each variable of ResmoNet with the best competing model.
ResmoNet has 115,976 fewer NP than MobileNet, 243,901 fewer FLOPS than MobileNet, and 5% lower accuracy than EDNN (95%). Moreover, ResmoNet used less MMU than any other model; only EDNN beat ResmoNet, by 0.01 seconds in RTE. With our model, we developed a web application that collects emotions in real time during a psychological consultation. For data transfer, the patient's card and raw emotion data occupy approximately 2 KB with UTF-8 encoding. According to the experts, the web application has good usability (73.8 of 100) and utility (3.94 of 5). CONCLUSIONS A usable and useful web application for psychologists and psychiatrists is presented. This tool includes an efficient and light facial emotion recognition model, intended as a complementary tool for diagnostic processes.
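The Number of Parameters (NP) figures compared in the abstract come from straightforward per-layer parameter counts; a back-of-envelope sketch follows, with layer shapes that are illustrative, not taken from any of the cited models.

```python
def conv2d_params(in_ch, out_ch, k):
    """Parameters of a 2-D conv layer: k*k*in_ch weights per filter + 1 bias."""
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_f, out_f):
    """Parameters of a fully connected layer: weights + biases."""
    return (in_f + 1) * out_f

# A tiny illustrative network: two conv layers, then a 7-way emotion output
total = conv2d_params(3, 16, 3) + conv2d_params(16, 32, 3) + dense_params(32, 7)
print(total)  # 448 + 4640 + 231 = 5319
```

Summing such counts across a full architecture is how model-size comparisons like "115,976 fewer NP than MobileNet" are computed.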

