Human Emotion Recognition: Review of Sensors and Methods

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 592 ◽  
Author(s):  
Andrius Dzedzickis ◽  
Artūras Kaklauskas ◽  
Vytautas Bucinskas

Automated emotion recognition (AER) is an important issue in various fields of activity that use human emotional reactions as a signal, such as marketing, technical equipment, and human–robot interaction. This paper analyzes scientific research and technical papers to survey sensor use across the various methods implemented or researched. It covers several classes of sensors, from contactless methods to contact and skin-penetrating electrodes, for detecting human emotions and measuring their intensity. The analysis identifies applicable methods for each type of emotion and its intensity and proposes a classification of these methods. The resulting classification of emotion sensors reveals each method's area of application and expected outcomes, as well as its limitations. This paper should be relevant for researchers working on human emotion evaluation and analysis who need to choose a proper method for their purposes or to find alternative solutions. Based on the analyzed human emotion recognition sensors and methods, we developed some practical applications for humanizing the Internet of Things (IoT) and affective computing systems.

Author(s):  
Pyotr Nikolaevich Sovietov

Specialized processors programmable in domain-specific languages are increasingly used in modern computing systems. The compiler-in-the-loop approach, based on the joint development of a specialized processor and its compiler, is gaining popularity. At the same time, traditional tools such as GCC and LLVM are insufficient for the agile development of optimizing compilers that generate target code for an exotic, irregular architecture with static parallelism of operations. This article proposes methods from the field of program synthesis for implementing machine-dependent compilation phases. The phases are based on a reduction to an SMT problem, which makes it possible to avoid the heuristic and approximate approaches that otherwise require complex software implementation in a compiler. In particular, the synthesis of machine-dependent optimization rules, instruction selection, and instruction scheduling combined with register allocation are all implemented with the help of an SMT solver. Practical applications of the developed methods and algorithms are illustrated by the example of a compiler for a specialized processor whose instruction set accelerates the implementation of lightweight cryptography algorithms in the Internet of Things. The results of compiling and simulating 8 cryptographic primitives for 3 variants of the specialized processor (CISC-like, VLIW-like, and a variant with a delayed load instruction) demonstrate the viability of the proposed approach.
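A minimal sketch of the SMT-reduction idea, using the Z3 Python bindings: instruction scheduling under data dependencies and a two-wide VLIW issue constraint, with the makespan minimized by the solver. The instruction names, dependency graph, and issue width here are hypothetical illustrations, not the paper's processor model.

```python
# pip install z3-solver
from itertools import combinations
from z3 import Int, Optimize, Or, sat

# Hypothetical dependency graph: c consumes a and b; d consumes c.
instrs = ["a", "b", "c", "d"]
deps = [("a", "c"), ("b", "c"), ("c", "d")]
ISSUE_WIDTH = 2  # VLIW-like: at most two operations per cycle

slot = {i: Int(f"slot_{i}") for i in instrs}
opt = Optimize()

for i in instrs:
    opt.add(slot[i] >= 0)

# Data dependencies: a consumer issues strictly after its producers.
for src, dst in deps:
    opt.add(slot[dst] > slot[src])

# Resource constraint: forbid any (ISSUE_WIDTH + 1) instructions
# from sharing the same cycle.
for group in combinations(instrs, ISSUE_WIDTH + 1):
    opt.add(Or([slot[x] != slot[y] for x, y in combinations(group, 2)]))

# Objective: minimize the latest issue slot (the makespan).
makespan = Int("makespan")
for i in instrs:
    opt.add(makespan >= slot[i])
opt.minimize(makespan)

if opt.check() == sat:
    m = opt.model()
    for i in instrs:
        print(i, "-> cycle", m[slot[i]])
```

The same reduction style extends to instruction selection and register allocation by adding selection variables and register-interference constraints to the one solver query, which is what lets a single declarative specification replace several hand-written heuristic phases.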


2020 ◽  
Vol 7 ◽  
Author(s):  
Matteo Spezialetti ◽  
Giuseppe Placidi ◽  
Silvia Rossi

A fascinating challenge in the field of human–robot interaction is the possibility to endow robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.


2021 ◽  
Vol 11 (11) ◽  
pp. 1392 ◽
Author(s):  
Yue Hua ◽  
Xiaolong Zhong ◽  
Bingxue Zhang ◽  
Zhong Yin ◽  
Jianhua Zhang

Affective computing systems can decode cortical activities to facilitate emotional human–computer interaction. However, individual differences in neurophysiological responses among users of a brain–computer interface make it difficult to design a generic emotion recognizer that adapts to a novel individual, which is an obstacle to cross-subject emotion recognition (ER). To tackle this issue, in this study we propose a novel feature selection method, manifold feature fusion and dynamical feature selection (MF-DFS), under the transfer learning principle, to determine generalizable features that are stably sensitive to emotional variations. The MF-DFS framework combines local geometrical information feature selection, domain-adaptation-based manifold learning, and dynamical feature selection to enhance the accuracy of the ER system. On three public databases, DEAP, MAHNOB-HCI, and SEED, the performance of MF-DFS is validated under the leave-one-subject-out paradigm with two types of electroencephalography features. Defining three emotional classes for each affective dimension, the MF-DFS-based ER classifier achieves accuracies of 0.50 and 0.48 (DEAP) and 0.46 and 0.50 (MAHNOB-HCI) for the arousal and valence dimensions, respectively; for the SEED database, it achieves 0.40 on the valence dimension. These accuracies are significantly superior to several classical feature selection methods across multiple machine learning models.
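As a minimal sketch of the leave-one-subject-out evaluation paradigm the authors use (MF-DFS itself is not implemented here; the feature matrix, subject count, and classifier below are synthetic placeholders):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

# Synthetic stand-ins: 320 EEG feature vectors from 8 subjects, 3 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(320, 64))          # e.g., band-power features
y = rng.integers(0, 3, size=320)        # three emotional classes
subjects = np.repeat(np.arange(8), 40)  # subject ID per trial

# Each fold trains on 7 subjects and tests on the held-out subject,
# so accuracy measures cross-subject generalization.
logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print("mean cross-subject accuracy:", np.mean(scores))
```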


2021 ◽  
Author(s):  
Puja A. Chavan ◽  
Sharmishta Desai

Emotion awareness is one of the most important subjects in the field of affective computing. Human emotion can be predicted using nonverbal behavioral methods such as facial expression recognition, verbal behavioral methods such as speech emotion recognition, or physiological-signal-based methods such as emotion recognition from the electroencephalogram (EEG). However, it is notable that data obtained from either nonverbal or verbal behaviors are indirect emotional signals that only suggest brain activity. Unlike nonverbal or verbal actions, EEG signals are recorded directly from the human brain cortex and thus may be more effective in representing the brain's inner emotional states. Consequently, EEG data can be more accurate than behavioral data when used to measure human emotion. For this reason, identifying human emotion from EEG signals has become a very important research subject in current emotional brain–computer interfaces (BCIs), which aim to infer human emotional states from recorded EEG signals. In this paper, a hybrid deep learning approach combining a convolutional neural network (CNN) and a long short-term memory (LSTM) network is proposed and investigated for the automatic classification of epileptic disease from EEG signals. The CNN processes the signals for feature extraction, while the LSTM performs the classification over the extracted features; finally, the system labels each EEG data file as normal or epileptic. This research describes a state-of-the-art approach to epileptic disease detection, prediction, and classification using hybrid deep learning algorithms, demonstrating how a CNN and an LSTM can be combined for end-to-end classification of EEG signals as found in numerous existing systems.
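A minimal sketch of such a hybrid CNN–LSTM pipeline in Keras. The input shape, layer sizes, and binary output are illustrative assumptions (the window length mimics common single-channel epilepsy datasets), not the paper's configuration:

```python
from tensorflow.keras import layers, models

# Assumed input: 1-second single-channel EEG windows of 178 samples.
model = models.Sequential([
    layers.Input(shape=(178, 1)),
    # CNN front end: local feature extraction from the raw signal.
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM back end: temporal integration over the extracted features.
    layers.LSTM(64),
    # Binary decision: normal vs. epileptic.
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The division of labor mirrors the abstract's description: convolution and pooling layers compress each window into a shorter feature sequence, and the LSTM aggregates that sequence into a single per-file decision.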


Author(s):  
Maulin Patel ◽  
Manisha Patel

For a computer, identifying human emotion from a still image of a human face is a complex, challenging, and computationally heavy task. Classifying human emotion using different configurations of convolutional neural networks (CNNs) is known as Facial Emotion Recognition (FER). A CNN model is obtained by training and testing on many images of the same categories from a dataset, under different hyperparameter tunings. The main contribution of this work is to examine various CNN architectures and hyperparameter tunings and to compare the performance of those CNN models on Facial Emotion Recognition, based on accuracy and loss during training and testing. This study should serve as a guide for selecting an appropriate CNN model and tuning parameters according to the applicant's needs.
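A minimal sketch of one candidate CNN for this kind of comparison, assuming 48×48 grayscale inputs and seven emotion classes as in the common FER-2013 setup; the architecture, grid values, and builder function are illustrative assumptions, not the authors' models:

```python
from tensorflow.keras import layers, models, optimizers

def build_fer_cnn(filters=32, dropout=0.3, lr=1e-3):
    """One point in a hypothetical hyperparameter grid to be compared."""
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(filters, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(filters * 2, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(dropout),
        layers.Dense(128, activation="relu"),
        layers.Dense(7, activation="softmax"),  # seven emotion classes
    ])
    model.compile(optimizer=optimizers.Adam(lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Each configuration would be trained on the same split and then
# compared on training/validation accuracy and loss.
for filters in (32, 64):
    for dropout in (0.3, 0.5):
        candidate = build_fer_cnn(filters=filters, dropout=dropout)
```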


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1659 ◽  
Author(s):  
Fadi Al Machot ◽  
Ali Elmachot ◽  
Mouhannad Ali ◽  
Elyan Al Machot ◽  
Kyandoghere Kyamakya

One of the main objectives of Active and Assisted Living (AAL) environments is to ensure that elderly and/or disabled people perform and live well in their immediate environments; this can be monitored by, among other means, the recognition of emotions based on non-highly-intrusive sensors such as Electrodermal Activity (EDA) sensors. However, designing a learning system or building a machine learning model that recognizes human emotions when trained on one specific group of persons and tested on a totally new group is still a serious challenge in the field, as the second group may exhibit different emotion patterns. Accordingly, the purpose of this paper is to contribute to the field of human emotion recognition by proposing a Convolutional Neural Network (CNN) architecture which ensures promising robustness-related results for both subject-dependent and subject-independent human emotion recognition. The CNN model was trained using grid search, a model hyperparameter optimization technique, to fine-tune the parameters of the proposed architecture. The overall concept's performance is validated and stress-tested on the MAHNOB and DEAP datasets. The results demonstrate a promising robustness improvement across various evaluation metrics: accuracy for subject-independent classification reaches 78% and 82% for MAHNOB and DEAP, respectively, and 81% and 85% for subject-dependent classification on MAHNOB and DEAP, respectively (4 classes/labels). The work clearly shows that, using solely the non-intrusive EDA sensors, a robust classification of human emotion is possible even without involving additional physiological signals.
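A minimal sketch of grid-search tuning for a small 1D CNN over EDA windows. The synthetic data, window length, grid values, and architecture are all assumptions for illustration; the paper's actual network and grid are not reproduced here:

```python
import itertools
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Synthetic stand-in for preprocessed EDA windows, 4 emotion classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 128, 1)).astype("float32")
y = rng.integers(0, 4, size=400)

def build(kernel_size, lr):
    model = models.Sequential([
        layers.Input(shape=(128, 1)),
        layers.Conv1D(16, kernel_size, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(4, activation="softmax"),  # 4 classes/labels
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Grid search: train every (kernel size, learning rate) combination
# and keep the configuration with the best validation accuracy.
best = None
for ks, lr in itertools.product((3, 5, 7), (1e-3, 1e-4)):
    model = build(ks, lr)
    hist = model.fit(X, y, validation_split=0.2, epochs=3, verbose=0)
    acc = hist.history["val_accuracy"][-1]
    if best is None or acc > best[0]:
        best = (acc, ks, lr)

print("best val accuracy %.3f with kernel=%d lr=%g" % best)
```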

