Machine Learning for Affective Computing

Author(s):  
Rafael Calvo ◽  
Sidney D'Mello ◽  
Jonathan Gratch ◽  
Arvid Kappas ◽  
Ashish Kapoor


Sensors ◽
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals through an eight-electrode placement on the scalp: two electrodes on the frontal lobe and six on the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. We then evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN), under both subject-dependent and subject-independent strategies. Our experimental results showed that the highest average accuracies in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively, both achieved with the combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of the proposed EEG-based affective computing method for emotion recognition in real-world applications.
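A minimal sketch of the highlighted pipeline (sample entropy features feeding a 1D-CNN) in Python with NumPy and PyTorch. The abstract does not specify the architecture, window length, or number of emotion classes, so the layer sizes, `n_channels=8`, and `n_classes=4` below are assumptions for illustration only:

```python
import numpy as np
import torch
import torch.nn as nn

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy (Richman & Moorman formulation) of a 1-D EEG window."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)  # tolerance as a fraction of the signal's std
    n = len(x)

    def count_pairs(length):
        # Count template pairs of the given length within tolerance r,
        # excluding self-matches (Chebyshev distance between templates).
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = count_pairs(m), count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

class EmotionCNN1D(nn.Module):
    """Small 1D-CNN over multichannel EEG-derived sequences (assumed sizes)."""
    def __init__(self, n_channels=8, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))
```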


Author(s):  
Junjie Bai ◽  
Kan Luo ◽  
Jun Peng ◽  
Jinliang Shi ◽  
Ying Wu ◽  
...  

Music emotion recognition (MER) is a challenging field of study addressed in multiple disciplines such as musicology, cognitive science, physiology, psychology, the arts, and affective computing. In this article, music emotions are classified into four types: pleasing, angry, sad, and relaxing. MER is formulated as a classification problem in cognitive computing in which 548 dimensions of music features are extracted and modeled. A set of classification and machine learning algorithms is explored and comparatively studied for MER, including the Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Neuro-Fuzzy Network Classification (NFNC), Fuzzy KNN (FKNN), the Bayes classifier, and Linear Discriminant Analysis (LDA). Experimental results show that the SVM, FKNN, and LDA algorithms are the most effective, each obtaining more than 80% accuracy for MER.
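As a hedged illustration of such a comparative study, the scikit-learn sketch below benchmarks several of the named classifiers on placeholder 548-dimensional feature vectors. The random data, labels, and hyperparameters are assumptions, and the fuzzy variants (NFNC, FKNN) are omitted because they have no stock scikit-learn implementation:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: X would hold precomputed 548-dim music features per clip.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 548))
y = rng.choice(["pleasing", "angry", "sad", "relaxing"], size=200)

classifiers = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
    "Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    # Standardize features, then score each classifier with 5-fold CV.
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```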


Author(s):  
Heath Yates ◽  
Brent Chamberlain ◽  
William Baldwin ◽  
William H. Hsu ◽  
Dana Vanlandingham

Affective computing is a young and very active field, driven by several promising application areas that could benefit from affective intelligence, such as virtual reality, smart surveillance, perceptual interfaces, and health. This chapter suggests a new design for the detection of animal affect and emotion within an affective computing framework via mobile sensors and machine learning. The authors review the existing literature and suggest new use cases through a conceptual reevaluation of prior work on affective computing and animal sensors.


2019 ◽  

Contents: Plenary lectures. Product development done differently: efficient, flexible, agile! Dr. rer. nat. S. Lambertz, Freudenberg Technology Innovation, Weinheim, 1. Chaff and wheat: which automotive suppliers will manage the structural transformation, and which will not? M.-R. Faerber, Managing Partner of Struktur Management Partner GmbH, Köln, 7. When sensory perceptions become digital and technology learns to feel: trends and applications of affective computing. Dr.-Ing. J. Garbas, Fraunhofer IIS, Erlangen, 9. Short reports from research. Machine learning for detecting changes in the injection molding process. Prof. Dr. F. Ehrig, Prof. Dr. G. Schuster, HSR Hochschule für Technik Rapperswil, Rapperswil, Switzerland, 19. Increasing product and process quality in injection molding through artificial intelligence. M.Sc. A. Schulze Struchtrup, M.Sc. M. Janßen, Prof. Dr.-Ing. R. Schiffers, Institut für Produkt Engineering, Universität Duisburg-Essen, 27. I4.0 pilot factory for smart plastics processing. Prof...


Author(s):  
Nazanin Fouladgar ◽  
Marjan Alirezaie ◽  
Kary Främling

Affective computing solutions, in the literature, mainly rely on machine learning methods designed to accurately detect human affective states. Nevertheless, many of the proposed methods are based on handcrafted features, requiring sufficient expert knowledge in the realm of signal processing. With the advent of deep learning methods, attention has turned toward reduced feature engineering and more end-to-end machine learning. However, most of the proposed models rely on late fusion in a multimodal context. Meanwhile, addressing interrelations between modalities for intermediate-level data representation has been largely neglected. In this paper, we propose a novel deep convolutional neural network, called CN-Waterfall, consisting of two modules: Base and General. While the Base module focuses on the low-level representation of data from each single modality, the General module provides further information, indicating relations between modalities in the intermediate- and high-level data representations. The latter module has been designed based on theoretically grounded concepts in the Explainable AI (XAI) domain, consisting of four different fusions. These fusions are mainly tailored to correlation- and non-correlation-based modalities. To validate our model, we conduct an exhaustive experiment on WESAD and MAHNOB-HCI, two publicly and academically available datasets in the context of multimodal affective computing. We demonstrate that our proposed model significantly improves the performance of physiological-based multimodal affect detection.
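The PyTorch sketch below illustrates the general idea of per-modality base encoders followed by an intermediate-level fusion layer, as opposed to late fusion of per-modality logits. It is not the authors' CN-Waterfall architecture (the abstract does not specify the four fusion designs or layer sizes), and all names, modality choices, and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class ModalityBase(nn.Module):
    """Per-modality low-level encoder (one per physiological signal)."""
    def __init__(self, in_ch, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, out_dim, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):               # x: (batch, in_ch, time)
        return self.net(x).squeeze(-1)  # (batch, out_dim)

class IntermediateFusionNet(nn.Module):
    """Concatenates modality embeddings at an intermediate layer,
    so the classifier head sees cross-modal structure before the logits."""
    def __init__(self, modal_channels, n_classes=3, emb=64):
        super().__init__()
        self.bases = nn.ModuleList([ModalityBase(c, emb) for c in modal_channels])
        self.general = nn.Sequential(
            nn.Linear(emb * len(modal_channels), 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, xs):  # xs: list of per-modality tensors
        z = torch.cat([base(x) for base, x in zip(self.bases, xs)], dim=1)
        return self.general(z)

# Illustrative modalities: e.g. ECG (1 ch), EDA (1 ch), accelerometer (3 ch).
net = IntermediateFusionNet([1, 1, 3])
logits = net([torch.randn(8, 1, 256), torch.randn(8, 1, 256), torch.randn(8, 3, 256)])
```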


Author(s):  
Mohammed Hoque ◽  
Daniel J. McDuff ◽  
Louis-Philippe Morency ◽  
Rosalind W. Picard

2021 ◽  
Author(s):  
Michael J Lyons

Twenty-five years ago, my colleagues Miyuki Kamachi and Jiro Gyoba and I designed and photographed JAFFE, a set of facial expression images intended for use in a study of face perception. In 2019, without seeking permission or informing us, Kate Crawford and Trevor Paglen exhibited JAFFE in two widely publicized art shows. In addition, they published a nonfactual account of the images in the essay “Excavating AI: The Politics of Images in Machine Learning Training Sets.” The present article recounts the creation of the JAFFE dataset and unravels each of Crawford and Paglen’s fallacious statements. I also discuss JAFFE more broadly in connection with research on facial expression, affective computing, and human-computer interaction.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4561 ◽  
Author(s):  
Jungryul Seo ◽  
Teemu H. Laine ◽  
Kyung-Ah Sohn

In recent years, affective computing has been actively researched to provide a higher level of emotion-awareness. Numerous studies have been conducted to detect the user’s emotions from physiological data. Among a myriad of target emotions, boredom, in particular, has been suggested to cause not only medical issues but also challenges in various facets of daily life. However, to the best of our knowledge, no previous studies have used electroencephalography (EEG) and galvanic skin response (GSR) together for boredom classification, although these data have potential features for emotion classification. To investigate the combined effect of these features on boredom classification, we collected EEG and GSR data from 28 participants using off-the-shelf sensors. During data acquisition, we used a set of stimuli comprising a video clip designed to elicit boredom and two other video clips of entertaining content. The collected samples were labeled based on the participants’ questionnaire-based self-reports of experienced boredom levels. Using the collected data, we initially trained 30 models with 19 machine learning algorithms and selected the top three candidate classifiers. After tuning the hyperparameters, we validated the final models through 1000 iterations of 10-fold cross-validation to increase the robustness of the test results. Our results indicated that a Multilayer Perceptron model performed the best, with a mean accuracy of 79.98% (AUC: 0.781). The results also revealed a correlation between boredom and the combined EEG and GSR features. These findings can be useful for building accurate affective computing systems and for understanding the physiological properties of boredom.
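A minimal scikit-learn sketch of the described validation scheme: a Multilayer Perceptron scored with repeated stratified 10-fold cross-validation. The feature dimensionality, labels, and hyperparameters are placeholders (the abstract does not list them), and `n_repeats` is kept small here for speed rather than the paper's 1000 iterations:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for combined EEG + GSR feature vectors (shapes are assumptions).
rng = np.random.default_rng(42)
X = rng.normal(size=(280, 40))    # e.g. 40 features per labeled window
y = rng.integers(0, 2, size=280)  # 1 = bored, 0 = not bored

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
# Repeated 10-fold CV; the paper repeats far more times for robustness.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```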

