EEG-Based BCI Emotion Recognition: A Survey

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5083 ◽  
Author(s):  
Edgar P. Torres ◽  
Edgar A. Torres ◽  
Myriam Hernández-Álvarez ◽  
Sang Guun Yoo

Affective computing is an area of artificial intelligence that recognizes, interprets, processes, and simulates human affects. The user’s emotional states can be sensed through electroencephalography (EEG)-based Brain–Computer Interface (BCI) devices. Research in emotion recognition using these tools is a rapidly growing field with multiple interdisciplinary applications. This article surveys the pertinent scientific literature from 2015 to 2020. It presents trends and a comparative analysis of algorithm applications in new implementations from a computer science perspective. Our survey gives an overview of datasets, emotion elicitation methods, feature extraction and selection, classification algorithms, and performance evaluation. Lastly, we provide insights for future developments.


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4496
Author(s):  
Vlad Pandelea ◽  
Edoardo Ragusa ◽  
Tommaso Apicella ◽  
Paolo Gastaldo ◽  
Erik Cambria

Emotion recognition, among other natural language processing tasks, has greatly benefited from the use of large transformer models. Deploying these models on resource-constrained devices, however, is a major challenge due to their computational cost. In this paper, we show that the combination of large transformers, as high-quality feature extractors, and simple hardware-friendly classifiers based on linear separators can achieve competitive performance while allowing real-time inference and fast training. Various solutions, including batch and online sequential learning, are analyzed. Additionally, our experiments show that latency and performance can be further improved via dimensionality reduction and pre-training, respectively. The resulting system is implemented on two types of edge devices, namely an edge accelerator and two smartphones.
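The paper’s core idea, a frozen high-capacity feature extractor feeding a lightweight linear separator, can be sketched as follows. The perceptron below stands in for the hardware-friendly classifier; the two-dimensional "embeddings" are toy stand-ins for transformer outputs, and all names and data are illustrative assumptions, not the authors’ implementation.

```python
# Sketch: frozen feature extractor -> simple linear separator.
# The "embeddings" are hypothetical stand-ins for transformer outputs.

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a linear separator (perceptron) on fixed feature vectors."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):           # yi in {-1, +1}
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:            # misclassified: update weights
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def predict(w, b, x):
    """Classify one feature vector with the learned separator."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1

# Toy "transformer embeddings" for two emotion classes (hypothetical data).
X = [[1.0, 2.0], [2.0, 1.5], [-1.0, -2.0], [-2.0, -1.0]]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # separable toy data -> [1, 1, -1, -1]
```

Training touches only the small weight vector, which is why this kind of head is cheap enough for on-device (even online) updates while the extractor stays frozen.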



2021 ◽  
Vol 11 (2) ◽  
pp. 128
Author(s):  
Sergej Lackmann ◽  
Pierre-Majorique Léger ◽  
Patrick Charland ◽  
Caroline Aubé ◽  
Jean Talbot

Millions of students follow online classes which are delivered in video format. Several studies examine the impact of these video formats on engagement and learning using explicit measures and outline the need to also investigate the implicit cognitive and emotional states of online learners. Our study compared two video formats in terms of engagement (over time) and learning in a between-subject experiment. Engagement was operationalized using explicit and implicit neurophysiological measures. Twenty-six (26) subjects participated in the study and were randomly assigned to one of two conditions based on the video shown: infographic video or lecture capture. The infographic video showed animated graphics, images, and text. The lecture capture showed a professor delivering a lecture, filmed in a classroom setting. Results suggest that lecture capture triggers greater emotional engagement over a shorter period, whereas the infographic video maintains higher emotional and cognitive engagement over longer periods of time. Regarding student learning, the infographic video led to significantly better performance on difficult questions. Additionally, our results suggest a significant relationship between engagement and student performance. In general, the higher the engagement, the better the student performance, although, in the case of cognitive engagement, the link is quadratic (inverted U-shaped).



Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 560
Author(s):  
Andrea Bonci ◽  
Simone Fiori ◽  
Hiroshi Higashi ◽  
Toshihisa Tanaka ◽  
Federica Verdini

The prospect and potentiality of interfacing minds with machines has long captured human imagination. Recent advances in biomedical engineering, computer science, and neuroscience are making brain–computer interfaces a reality, paving the way to restoring and potentially augmenting human physical and mental capabilities. Brain–computer interfaces are being explored in applications as diverse as security, lie detection, alertness monitoring, gaming, education, art, and human cognition augmentation. The present tutorial aims to survey the principal features and challenges of brain–computer interfaces (such as reliable acquisition of brain signals, filtering and processing of the acquired brainwaves, ethical and legal issues related to brain–computer interfaces (BCIs), data privacy, and performance assessment) with special emphasis on biomedical engineering and automation engineering applications. The content of this paper is aimed at students, researchers, and practitioners who wish to glimpse the multifaceted world of brain–computer interfacing.



1982 ◽  
Vol 26 (5) ◽  
pp. 435-435
Author(s):  
Dennis B. Beringer ◽  
Susan R. Maxwell

Interest in optimized human-computer interfaces has resulted in the development of a number of interesting devices that allow the computer and human operator to interact through a common drawing surface. These devices include the lightpen, lightgun (Goodwin, 1975), and a variety of touch-sensitive display overlay devices. Although touch devices were being investigated as early as 1965 (Orr and Hopkin, circa 1966), behavioral and performance data are scarce in relation to other sources of human-machine interface data. Availability of these devices has increased in the last 10 years and it is now possible to retrofit such devices to a wide variety of video display terminals at a reasonable cost. With the possibility of increased use looming on the horizon, it would be quite useful to examine the ergonomics of such devices and the behavioral adaptation or maladaptation that occurs for each user. Performance data available at this point from previous studies suggest that some positive increments in performance can be expected for graphic-based tasks, while no serious decrements should be expected for discrete data entry tasks (Beringer, 1980; Stammers and Bird, 1980). The performance gains expected from this format of interaction are not to be won without some sacrifice elsewhere, however. Positioning of the display surface for optimum viewing may cause serious operator fatigue problems after extended use of the device if the device is to be used with relatively high frequency. The relationship of device positioning, device sensing resolution, and task type are being examined as they contribute to the commission of errors and the onset of fatigue. Experimentation was planned to examine how positioning of the device, or what can truly be called a “control/display unit”, affected the performance of visual discrimination tasks and manual designation tasks.
Initial investigations used a single task to examine these questions by requiring the operator/subject to visually detect and manually designate the location of a break in one of 54 circles presented on a color c.r.t. display (essentially a Landolt C target). Responses were accepted by an infrared touch panel mounted on the display face. The c.r.t. was placed at four declinations during the blocks of trials: 90, 67, 45, and 35 degrees to the line of sight. Although a very strong learning effect was observed over the first 8 blocks of 25 trials each, performance leveled off, on the average, beginning with the ninth block of trials. No reliable effects of screen declination were found in the examination of response times or number of errors. Responses did tend to be located slightly lower than the target, however, for the greater declinations of the display surface. Subjective reports of physical difficulty of responding and fatigue did vary regularly with declination of the display. The relatively high location of the device resulted in shoulder and arm fatigue when the display was at 90 degrees and wrist fatigue when the display was at 35 degrees. Subsequent phases of the investigation will allow subjects to adjust parameters of height and declination (Brown and Schaum, 1980) and will use hand skin temperature and quantified postural information to assess the degree of fatigue incurred during device operation.



Author(s):  
Miao Cheng ◽  
Ah Chung Tsoi

As a general means of expression, audio has attracted much attention for its wide range of real-life applications, and audio analysis and recognition have accordingly been studied extensively. Audio emotion recognition (AER) attempts to understand the emotional state of a human from a given utterance signal, and has been studied broadly for its role in developing friendly human–machine interfaces. Though several state-of-the-art auditory methods have been devised for audio recognition, most of them focus on the discriminative use of acoustic features, while the efficiency of recognition feedback is ignored. This limits the practical application of AER, where rapid learning of emotion patterns is desired. To make prediction of audio emotion feasible, the speaker-dependent patterns of audio emotions are learned with multiresolution analysis, and fractal dimension (FD) features are calculated for acoustic feature extraction. This approach is able to efficiently learn the intrinsic characteristics of auditory emotions, as the utterance features are learned from the FDs of each sub-band. Experimental results show the proposed method provides competitive performance for AER.
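Fractal-dimension features of the kind described can be illustrated with a simple estimator. The Katz FD below is one common choice; the abstract does not name the exact estimator used, so both the estimator and the toy waveforms are illustrative assumptions.

```python
import math

# Sketch of a waveform-complexity feature: the Katz fractal dimension.
# (One possible FD estimator; the abstract does not specify which is used.)

def katz_fd(signal):
    """Katz fractal dimension of a 1-D signal (uniform sampling assumed)."""
    n = len(signal) - 1                      # number of steps
    # total "path length" of the waveform
    L = sum(abs(signal[i + 1] - signal[i]) for i in range(n))
    # maximum distance from the first sample
    d = max(abs(s - signal[0]) for s in signal[1:])
    if L == 0 or d == 0:
        return 1.0                           # flat signal: dimension 1
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# A jagged waveform yields a higher FD than a smooth ramp.
smooth = [i / 100.0 for i in range(101)]
jagged = [((-1) ** i) * (i % 7) / 7.0 for i in range(101)]
print(katz_fd(smooth) < katz_fd(jagged))  # -> True
```

Computing such an FD per sub-band of a multiresolution decomposition would yield the kind of per-band feature vector the abstract describes.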



2016 ◽  
Vol 13 (1) ◽  
pp. 29
Author(s):  
Nooraslinda Abdul Aris ◽  
Rohana Othman ◽  
Safawi Abdul Rahman ◽  
Marziana Madah Marzuki ◽  
Wan Mohd Yusof Wan Chik

Numerous ethical breaches have put cooperatives and their management under pressure to improve their image and ethical performance. One of the ways identified is the adoption of ethical codes. Codes are regarded as instruments to enhance social responsibility that contain open guidelines describing desirable behavior and closed guidelines prohibiting certain behaviors. Codes clarify the norms and values organizations seek to uphold. Regarded as social enterprises, cooperatives are compelled to scale up their operations for sustainability reasons towards a positive contribution to the national economy. Past research on ethical codes affecting behavior, attitude, and performance has yielded mixed results. In addition, there are very few studies devoted to cooperatives. This paper presents the results of a review of the scientific literature, highlighting how ethical codes and sustainability may progressively improve cooperatives’ image and reputation in the eyes of their stakeholders. Keywords: ethical codes, social responsibility, sustainability, cooperative



2020 ◽  
Vol 15 (2) ◽  
Author(s):  
Evelina De Longis ◽  
Guido Alessandri

Emotion dynamics, how people’s emotions fluctuate across time, represent a key source of information about people’s psychological functioning and well-being. Investigating emotion dynamics in the workplace is particularly relevant, as affective experiences are intimately connected to organizational behavior and effectiveness. In this study, we examined the moderating role of emotional inertia in the dynamic association between both positive and negative emotions and self-rated job performance among a sample of 120 Italian workers (average age 41.4, SD = 14), who were prompted six times per day for five working days. Emotional inertia refers to the extent that emotional states are self-predictive or carry on over time and is measured in terms of the autocorrelation of emotional states across time. Although inertia has been linked to several indicators of maladjustment, little is known about its correlates in terms of organizational behavior. Findings revealed that workers reporting high levels of positive emotions and high inertia rated their performance lower than workers high in positive emotions, but low in inertia. In contrast, the relation between negative emotions and performance was not significant for either high or low levels of inertia. Taken together, these results suggest the relevance of investigating the temporal dependency of emotional states at work.
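Since inertia is operationalized as the autocorrelation of emotional states across time, a minimal sketch of the lag-1 autocorrelation over one worker's repeated ratings looks like this (the rating series are toy numbers, not study data):

```python
# Emotional inertia as lag-1 autocorrelation of repeated emotion ratings.

def lag1_autocorrelation(x):
    """Lag-1 autocorrelation: how strongly each rating predicts the next."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

# High-inertia series (ratings drift slowly) vs. low-inertia series
# (ratings bounce around the mean from prompt to prompt).
sluggish = [3, 3, 4, 4, 5, 5, 4, 4, 3, 3]
volatile = [3, 5, 3, 5, 3, 5, 3, 5, 3, 5]
print(lag1_autocorrelation(sluggish) > lag1_autocorrelation(volatile))  # -> True
```

In an experience-sampling design like the study's, this coefficient would be computed per person over the six-prompts-per-day series and then used as a moderator.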



PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258089
Author(s):  
Amelie M. Hübner ◽  
Ima Trempler ◽  
Corinna Gietmann ◽  
Ricarda I. Schubotz

Emotional sensations and inferring another’s emotional states have been suggested to depend on predictive models of the causes of bodily sensations, so-called interoceptive inferences. In this framework, higher sensibility for interoceptive changes (IS) reflects higher precision of interoceptive signals. The present study examined the link between IS and emotion recognition, testing whether individuals with higher IS recognize others’ emotions more easily and are more sensitive to learning from biased probabilities of emotional expressions. We recorded skin conductance responses (SCRs) from forty-six healthy volunteers performing a speeded-response task, which required them to indicate whether a neutral facial expression dynamically turned into a happy or fearful expression. Moreover, by varying the probabilities of emotional expressions through their block-wise base rates, we aimed to generate a bias toward the more frequently encountered emotion. As a result, we found that individuals with higher IS showed lower thresholds for emotion recognition, reflected in decreased reaction times for emotional expressions, especially those of high intensity. Moreover, individuals with increased IS benefited more from a biased probability of an emotion, reflected in decreased reaction times for expected emotions. Lastly, weak evidence was found supporting a differential modulation of SCR by IS as a function of varying probabilities. Our results indicate that higher interoceptive sensibility facilitates the recognition of emotional changes and is accompanied by a more precise adaptation to emotion probabilities.



2021 ◽  
Author(s):  
Talieh Seyed Tabtabae

Automatic Emotion Recognition (AER) is an emerging research area in the Human-Computer Interaction (HCI) field. As computers become more and more popular every day, the study of interaction between humans (users) and computers is catching more attention. In order to have a more natural and friendly interface between humans and computers, it would be beneficial to give computers the ability to recognize situations the same way a human does. Equipped with an emotion recognition system, computers will be able to recognize their users' emotional states and react appropriately. In today's HCI systems, machines can recognize the speaker and the content of the speech, using speech recognition and speaker identification techniques. If machines are also equipped with emotion recognition techniques, they can know "how it is said" and react more appropriately, making the interaction more natural. One of the most important human communication channels is the auditory channel, which carries speech and vocal intonation. In fact, people can perceive each other's emotional state by the way they talk. Therefore, in this work the speech signals are analyzed in order to build an automatic system that recognizes the human emotional state. Six discrete emotional states are considered in this research: anger, happiness, fear, surprise, sadness, and disgust. A set of novel spectral features is proposed in this contribution. Two approaches are applied and the results are compared. In the first approach, all the acoustic features are extracted from consecutive frames along the speech signals. The statistical values of the features constitute the feature vectors. A Support Vector Machine (SVM), a relatively new approach in the field of machine learning, is used to classify the emotional states. In the second approach, spectral features are extracted from non-overlapping, logarithmically-spaced frequency sub-bands. In order to make use of all the extracted information, sequence discriminant SVMs are adopted. The empirical results show that the employed techniques are very promising.
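The second approach, spectral features drawn from non-overlapping logarithmically-spaced frequency sub-bands, can be sketched as follows. The plain DFT, the band layout, and the choice of band energy as the per-band feature are illustrative assumptions; the thesis's exact spectral features are not specified in this abstract.

```python
import cmath
import math

# Sketch: per-frame band energies over non-overlapping, logarithmically
# spaced frequency sub-bands (band layout and energy feature are assumed).

def dft_magnitudes(frame):
    """Magnitude spectrum of one frame via a plain DFT (first half of bins)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def log_band_edges(n_bins, n_bands):
    """Non-overlapping band edges, logarithmically spaced over the bins."""
    edges = [int(round(n_bins ** (b / n_bands))) for b in range(n_bands + 1)]
    # enforce strictly increasing edges so every band is non-empty
    for i in range(1, len(edges)):
        edges[i] = max(edges[i], edges[i - 1] + 1)
    return edges

def band_energies(frame, n_bands=4):
    """One energy feature per logarithmic sub-band of the frame's spectrum."""
    mags = dft_magnitudes(frame)
    edges = log_band_edges(len(mags), n_bands)
    return [sum(m * m for m in mags[lo:hi]) for lo, hi in zip(edges, edges[1:])]

# 64-sample toy frame: a low-frequency tone, so energy concentrates in the
# low bands of the log-spaced layout.
frame = [math.sin(2 * math.pi * 2 * t / 64) for t in range(64)]
feats = band_energies(frame)
print(len(feats))  # -> 4
```

Logarithmic spacing gives finer resolution at low frequencies, roughly matching how the auditory system allocates resolution; the per-band features would then feed the sequence-discriminant SVMs described above.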


