Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors

2018 · Vol 8 (1)
Author(s): Javier Marín-Morales, Juan Luis Higuera-Trujillo, Alberto Greco, Jaime Guixeres, Carmen Llinares, ...
Sensors · 2020 · Vol 20 (18) · pp. 5163
Author(s): Javier Marín-Morales, Carmen Llinares, Jaime Guixeres, Mariano Alcañiz

Emotions play a critical role in our daily lives, so the understanding and recognition of emotional responses is crucial for human research. Affective computing research has mostly used non-immersive two-dimensional (2D) images or videos to elicit emotional states. However, immersive virtual reality, which allows researchers to simulate environments in controlled laboratory conditions with high levels of sense of presence and interactivity, is becoming more popular in emotion research. Moreover, its synergy with implicit measurements and machine-learning techniques has the potential for transversal impact across many research areas, opening new opportunities for the scientific community. This paper presents a systematic review of the emotion recognition research undertaken with physiological and behavioural measures using head-mounted displays as elicitation devices. The results highlight the evolution of the field, give a clear perspective using aggregated analysis, reveal the current open issues and provide guidelines for future research.


Sensors · 2020 · Vol 21 (1) · pp. 52
Author(s): Tianyi Zhang, Abdallah El Ali, Chen Wang, Alan Hanjalic, Pablo Cesar

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
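To illustrate the two feature families the abstract names, the sketch below segments one physiological channel into fixed-length instances and derives simple hand-crafted stand-ins for the intra-modality and correlation-based features. The statistics, the 2 s segment length, and the toy signal are assumptions for illustration, not CorrNet's learned representation:

```python
import numpy as np

def segment_instances(signal, fs, seg_len_s=2.0):
    """Split a 1-D physiological signal into fixed-length instances."""
    n = int(fs * seg_len_s)
    n_seg = len(signal) // n
    return signal[:n_seg * n].reshape(n_seg, n)

def intra_features(instances):
    """Simple per-instance statistics (a stand-in for learned
    intra-modality features)."""
    return np.stack([instances.mean(axis=1),
                     instances.std(axis=1),
                     np.ptp(instances, axis=1)], axis=1)

def correlation_features(instances):
    """Mean Pearson correlation of each instance with every other
    instance of the same stimulus (the correlation-based idea)."""
    c = np.corrcoef(instances)           # (n_seg, n_seg) correlation matrix
    np.fill_diagonal(c, 0.0)             # drop self-correlation
    return c.sum(axis=1, keepdims=True) / (len(instances) - 1)

fs = 64                                   # wearable-grade sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)              # 20 s of signal
rng = np.random.default_rng(0)
eda = np.sin(0.3 * t) + 0.05 * rng.standard_normal(t.size)  # toy EDA trace
inst = segment_instances(eda, fs)
feats = np.hstack([intra_features(inst), correlation_features(inst)])
print(feats.shape)                        # one feature row per 2 s instance
```

Each row of `feats` would then be labelled with the continuous V-A annotation of its segment for binary high-low classification.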


i-com · 2020 · Vol 19 (2) · pp. 139-151
Author(s): Thomas Schmidt, Miriam Schlindwein, Katharina Lichtner, Christian Wolff

Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely employed in usability engineering (UE) to measure the emotional state of participants. We investigate whether the application of sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. We present the results of a UE project examining this question for three modalities: text, speech and face. We performed a large-scale usability test (N = 125) with a counterbalanced within-subject design on two websites of varying usability. We identified a weak but significant correlation between SUS scores and text-based sentiment analysis of the text acquired via thinking aloud, as well as a weak positive correlation between the proportion of neutrality in users’ voices and SUS scores. However, for the majority of the output of the emotion recognition software, we could not find any significant results. Emotion metrics could not successfully differentiate between the two websites of varying usability, and regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
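The kind of sentiment-vs-SUS correlation the study reports can be computed with a Pearson test. The data below are entirely invented (only the sample size N = 125 comes from the abstract); the sketch shows only the shape of the analysis:

```python
import numpy as np
from scipy.stats import pearsonr

# Invented per-participant data: mean think-aloud sentiment in [-1, 1]
# and SUS scores in [0, 100]. N = 125 matches the study's sample size.
rng = np.random.default_rng(1)
sentiment = rng.uniform(-1, 1, 125)
sus = np.clip(50 + 15 * sentiment + rng.normal(0, 20, 125), 0, 100)

r, p = pearsonr(sentiment, sus)           # Pearson r and two-sided p-value
print(f"r = {r:.2f}, p = {p:.4g}")
```

A "weak but significant" result in the study's sense would be a small |r| with p below the chosen alpha level.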


Sensors · 2021 · Vol 21 (15) · pp. 5135
Author(s): Ngoc-Dau Mai, Boon-Giin Lee, Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals using an eight-electrode placement on the scalp: two electrodes in the frontal lobe and six in the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN), under both subject-dependent and subject-independent strategies. Our experimental results show that the highest average accuracies in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; both were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. These results demonstrate the feasibility and efficiency of our proposed EEG-based affective computing method for emotion recognition in real-world applications.
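Sample entropy, the measure that performed best in the abstract above, quantifies a signal's irregularity as -ln(A/B), where B counts matching template pairs of length m and A counts matches of length m + 1. A minimal NumPy sketch (the parameters m = 2 and r = 0.2·SD are conventional defaults, not values stated by the authors):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts pairs of
    templates of length m within tolerance r (Chebyshev distance) and A
    counts pairs of length m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def match_pairs(length):
        t = np.lib.stride_tricks.sliding_window_view(x, length)
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return (np.sum(d <= r) - len(t)) / 2   # exclude self-matches

    b = match_pairs(m)
    a = match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 500))   # predictable signal
noisy = rng.standard_normal(500)                   # irregular signal
print(sample_entropy(regular), sample_entropy(noisy))
```

A regular signal yields a much lower value than noise, which is why the measure discriminates well between EEG states.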


2021 · pp. 100432
Author(s): C.N.W. Geraets, S. Klein Tuente, B.P. Lestestuiver, M. van Beilen, S.A. Nijman, ...

2019 · Vol 2019 · pp. 1-6
Author(s): Jianjun Cui, Shih-Ching Yeh, Si-Huei Lee

Frozen shoulder is a common clinical shoulder condition, and measuring the degree of shoulder joint movement is crucial to the rehabilitation process. Such measurements can be used to evaluate the severity of the patient’s condition, establish rehabilitation goals and appropriate activity difficulty levels, and understand the effects of rehabilitation. Currently, the degree of shoulder joint movement is typically measured by therapists using a protractor. However, with the growth of telerehabilitation, patients will increasingly need to measure their shoulder joint mobility on their own at home. In this study, wireless inertial sensors were combined with interactive virtual reality technology to provide an innovative self-measurement system that enables patients to measure their performance of four shoulder joint movements on their own at home. Pilot clinical trials were conducted with 25 patients to confirm the feasibility of the system. In addition, correlation and difference analyses against traditional measurement methods showed a high correlation, verifying the accuracy of the proposed system. Moreover, interviews with the patients indicate that they are confident in their ability to measure shoulder joint mobility themselves.
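The abstract does not specify how joint angles are derived from the inertial sensors; a common and simple approach is to estimate arm elevation from the gravity vector reported by an upper-arm accelerometer. The sketch below is that assumed approach, with an assumed sensor mounting (z-axis along the humerus when the arm hangs down):

```python
import numpy as np

def elevation_angle(accel):
    """Arm elevation from an upper-arm IMU accelerometer reading (in g).
    Assumption: with the arm hanging at rest, the sensor's z-axis points
    along gravity; the angle between the measured gravity vector and that
    reference axis then approximates shoulder flexion/abduction."""
    g = np.asarray(accel, dtype=float)
    g = g / np.linalg.norm(g)                 # normalize to a unit vector
    reference = np.array([0.0, 0.0, 1.0])     # sensor axis when arm hangs down
    cos_theta = np.clip(g @ reference, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

print(elevation_angle([0, 0, 1]))    # arm at rest -> 0 degrees
print(elevation_angle([0, 1, 0]))    # arm horizontal -> 90 degrees
```

This only holds for quasi-static poses; during motion, fusing the gyroscope (e.g., with a complementary filter) would be needed to remove linear-acceleration artifacts.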


2021 · Vol 25 (5) · pp. 31-40
Author(s): E. V. Romanova, L. V. Kurzaeva, L. Z. Davletkireeva, T. B. Novikova

Virtual and augmented reality technologies are currently developing rapidly in almost all spheres of activity. Elements of virtual and augmented reality are used in areas such as education, medicine, transport, gaming and tourism. The active spread of these technologies creates demand for special competencies in the IT labor market and, as a result, leads to the formation of new professions.

Many Russian universities train students in IT fields, but specialization in the development of computer games and virtual reality applications has begun only recently. Practical classes are accompanied by specific tasks, which give students the opportunity to improve their use of software and technical devices.

The relevance of the research is determined by the current demand among IT developers for the latest technologies in the field of computer game creation. Today, technologies that immerse a player in virtual reality are becoming more and more popular. One such technology is a suit with wearable sensors that tracks a person’s position in space in real time. However, few real projects of this kind are described in the literature or on the Internet. This study examines the process of developing a task for creating a game application using virtual reality technology, namely a suit with wearable sensors, for teaching students.

Materials and methods. Timely identification of the IT market’s personnel-training needs allows educational organizations to form new training programs at different levels. This approach makes it possible to target the educational and methodological materials being developed at the latest achievements in the field under study. Using a systematic approach, the study characterizes virtual reality suits and sensors for tracking the user’s position in space. The goal of the task was thus to ensure the immersiveness and convenience of interaction between the player and the game environment. Based on materials on software and on position sensors, the approach of pedagogical design was applied, and a procedure for a practical task reflecting the relevant competencies was formed.

Results. The study was conducted within the framework of students’ laboratory and practical work, as well as at a real enterprise. Training in the new profile of the “Applied Informatics” program is fully equipped with the latest technologies in this field. As a result of the work, the content of the practical task was developed. Real development of virtual and augmented reality applications is conducted jointly with students, and almost all projects used a suit with body sensors.

Conclusion. Our study examines in detail the process of developing an application using a suit with wearable sensors for training students. Based on the results, work can be carried out on real projects in any field. Based on the research materials, it is planned to publish a textbook for students specializing in the development of computer games and virtual/augmented reality applications.


2021 · Vol 335 · pp. 04001
Author(s): Didar Dadebayev, Goh Wei Wei, Tan Ee Xion

Emotion recognition, as a branch of affective computing, has attracted great attention in recent decades, as it can enable more natural brain-computer interface systems. Electroencephalography (EEG) has proven to be an effective modality for emotion recognition, with which user affective states can be tracked and recorded, especially for primitive emotional events such as arousal and valence. Although brain signals have been shown to correlate with emotional states, the effectiveness of proposed models remains limited. The challenge is improving accuracy, and appropriate extraction of valuable features may be a key to success. This study proposes a framework that incorporates fractal dimension features and a recursive feature elimination approach to enhance the accuracy of EEG-based emotion recognition. Fractal dimension and spectrum-based features will be extracted and used for more accurate emotional state recognition. Recursive Feature Elimination (RFE) will be used as the feature selection method, whereas the classification of emotions will be performed by the Support Vector Machine (SVM) algorithm. The proposed framework will be tested on a widely used public database, and the results are expected to demonstrate higher accuracy and robustness compared to other studies. The contributions of this study primarily concern the improvement of EEG-based emotion classification accuracy. A potential restriction is how generic the results can be, as different EEG datasets might yield different results for the same framework. Therefore, experimenting with different EEG datasets and testing alternative feature selection schemes are interesting directions for future work.
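The pipeline the abstract proposes can be sketched end to end. The abstract does not name a specific fractal dimension, so the Higuchi method (a common choice for EEG) is assumed here, and the trials and labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension: slope of log curve length vs. log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):                      # one subsampled curve per offset
            idx = np.arange(m, n, k)
            d = np.abs(np.diff(x[idx])).sum()
            lk.append(d * (n - 1) / ((len(idx) - 1) * k) / k)
        log_lengths.append(np.log(np.mean(lk)))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), log_lengths, 1)
    return slope

# Hypothetical trials: Higuchi FD per channel for 40 synthetic EEG epochs
# with 8 channels, plus placeholder binary arousal labels.
rng = np.random.default_rng(0)
X = np.array([[higuchi_fd(rng.standard_normal(256)) for _ in range(8)]
              for _ in range(40)])
y = rng.integers(0, 2, 40)

# RFE with a linear SVM, as named in the abstract, keeping 3 channels.
selector = RFE(SVC(kernel="linear"), n_features_to_select=3).fit(X, y)
print(selector.support_)                  # mask of the 3 retained channels
```

A straight line has Higuchi FD 1, while white noise approaches 2, which is why the measure captures EEG irregularity.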

