Physiological Sensors Based Emotion Recognition While Experiencing Tactile Enhanced Multimedia

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4037
Author(s):  
Aasim Raheel ◽  
Muhammad Majid ◽  
Majdi Alnowami ◽  
Syed Muhammad Anwar

Emotion recognition has increased the potential of affective computing by providing instant feedback from users and, thereby, a better understanding of their behavior. Physiological sensors have been used to recognize human emotions in response to audio and video content, which engages one (auditory) and two (auditory and vision) human senses, respectively. In this study, human emotions were recognized using physiological signals observed in response to tactile enhanced multimedia content that engages three human senses (tactile, vision, and auditory). The aim was to give users an enhanced real-world sensation while engaging with multimedia content. To this end, four videos were selected and synchronized with an electric fan and a heater, based on timestamps within the scenes, to generate tactile enhanced content with cold and hot air effects, respectively. Physiological signals, i.e., electroencephalography (EEG), photoplethysmography (PPG), and galvanic skin response (GSR), were recorded using commercially available sensors while participants experienced these tactile enhanced videos. The precision of the acquired physiological signals is enhanced by pre-processing with a Savitzky-Golay smoothing filter. Frequency domain features (rational asymmetry, differential asymmetry, and correlation) are extracted from EEG, time domain features (variance, entropy, kurtosis, and skewness) from GSR, and heart rate and heart rate variability from PPG data. A K-nearest neighbor classifier is applied to the extracted features to classify four emotions (happy, relaxed, angry, and sad). Our experimental results show that among individual modalities, PPG-based features give the highest accuracy of 78.57% as compared to EEG- and GSR-based features. The fusion of EEG, GSR, and PPG features further improves the classification accuracy to 79.76% (for four emotions) when interacting with tactile enhanced multimedia.
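As a rough illustration of the signal-conditioning and classification pipeline described above, the sketch below smooths a simulated GSR trace with a Savitzky-Golay filter, extracts a few of the named time-domain features, and fits a K-nearest neighbor classifier. All data, window sizes, and feature choices here are placeholders, not the paper's actual settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import kurtosis, skew
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Simulated raw GSR trace (placeholder for a real recording)
raw_gsr = np.sin(np.linspace(0, 4 * np.pi, 500)) + 0.3 * rng.standard_normal(500)

# Savitzky-Golay smoothing: fits a low-order polynomial in a sliding window
smoothed = savgol_filter(raw_gsr, window_length=11, polyorder=3)

def gsr_features(signal):
    """Three of the time-domain features named in the abstract."""
    return [np.var(signal), kurtosis(signal), skew(signal)]

# Placeholder per-trial feature matrix and labels for the four emotions
X = rng.standard_normal((40, 3))   # 40 trials x 3 features
y = rng.integers(0, 4, size=40)    # 0=happy, 1=relaxed, 2=angry, 3=sad
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
preds = clf.predict(X[:5])
```

In practice the EEG, GSR, and PPG feature vectors would be concatenated per trial before the KNN fit, which is what the reported feature-fusion result corresponds to.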

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4520 ◽  
Author(s):  
Uria-Rivas ◽  
Rodriguez-Sanchez ◽  
Santos ◽  
Vaquero ◽  
Boticario

Physiological sensors can be used with affective computing to detect changes in the emotional state of users. This has lately been applied in the educational domain, aiming to better support learners during the learning process. For this purpose, we have developed the AICARP (Ambient Intelligence Context-aware Affective Recommender Platform) infrastructure, which detects changes in the emotional state of the user and provides personalized multisensorial support to help manage the emotional state by taking advantage of ambient intelligence features. We have developed a third version of this infrastructure, AICARP.V3, which addresses several problems detected in the data acquisition stage of the second version (i.e., intrusiveness of the pulse sensor, poor resolution and low signal-to-noise ratio of the galvanic skin response sensor, and slow response time of the temperature sensor) and extends the capabilities to integrate new actuators. This improved version incorporates a new acquisition platform (shield) called PhyAS (Physiological Acquisition Shield), which reduces the number of control units to one and supports both gathering physiological signals with better precision and delivering multisensory feedback with more flexibility, by means of new actuators that can be added or discarded on top of that single shield. The improvements in the quality of the acquired signals allow better recognition of emotional states. As a result, AICARP.V3 gives more accurate personalized emotional support to the user, based on a rule-based approach that triggers multisensorial feedback when necessary. This represents progress on an open problem: developing systems that perform as effectively as a human expert in a complex task such as the recognition of emotional states.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Aaron Frederick Bulagang ◽  
James Mountstephens ◽  
Jason Teo

Abstract. Background: Emotion prediction is a method that recognizes human emotion from a subject's physiological data. The problem in question is the limited use of heart rate (HR) as the prediction feature with common classifiers such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) in emotion prediction. This paper aims to investigate whether HR signals can be used to classify four emotion classes based on Russell's emotion model in a virtual reality (VR) environment using machine learning. Method: An experiment was conducted using the Empatica E4 wristband to acquire the participants' HR, a VR headset as the display device for participants to view the 360° emotional videos, and the Empatica E4 real-time application to extract and process the participants' recorded heart rate during the experiment. Findings: For intra-subject classification, all three classifiers (SVM, KNN, and RF) achieved a highest accuracy of 100%, while inter-subject classification achieved 46.7% for SVM, 42.9% for KNN, and 43.3% for RF. Conclusion: The results demonstrate the potential of the SVM, KNN, and RF classifiers to classify HR as a feature for emotion prediction across four distinct emotion classes in a virtual reality environment. Potential applications include interactive gaming, affective entertainment, and VR health rehabilitation.
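A minimal sketch of the kind of three-classifier comparison the study describes, using synthetic heart-rate features in place of Empatica E4 recordings; all data, feature choices, and hyperparameters below are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder features per video segment: mean HR and HR standard deviation
X = rng.normal(loc=75, scale=10, size=(120, 2))
y = rng.integers(0, 4, size=120)   # four quadrants of Russell's model

# Cross-validated mean accuracy for each of the three classifiers
results = {}
for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0))]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
print(results)
```

With random labels as here, accuracies hover near chance (0.25); the intra- vs inter-subject gap reported above comes from whether training and test windows share a participant.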


2021 ◽  
Vol 10 (1) ◽  
pp. 15-22
Author(s):  
Eko Budi Setiawan ◽  
Al Ghani Iqbal Dzulfiqar

This research was conducted to facilitate the interaction between radio broadcasters and radio listeners during the song request process. It was triggered by the difficulty broadcasters face in monitoring song requests from listeners. The system is made to accommodate all song requests from listeners. The application produced in this study uses speech emotion recognition technology based on a person's mood as obtained from their spoken words. This technology classifies the voice into one of four mood categories: neutral, angry, sad, and afraid. The k-Nearest Neighbor method is used to recommend song titles by measuring the closeness between the listener's mood and the available song playlists; kNN is used because it is suitable for user-based collaborative problems. kNN recommends three songs, which are then offered to listeners by the broadcasters. Based on tests conducted with broadcasters and radio listeners, this study produced a song request application that recommends song titles according to the listener's mood and supports text messages, song searches, song requests, and details of the songs that have been requested. The functional test that was carried out received a score of 100 because all test components succeeded as expected.
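The mood-to-playlist matching step can be sketched as a nearest-neighbour lookup. The song names, two-dimensional mood encoding, and coordinates below are invented for illustration and are not from the study.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical playlist: each song tagged with a (valence, arousal) mood vector
songs = ["Song A", "Song B", "Song C", "Song D", "Song E"]
song_moods = np.array([[0.9, 0.8],   # happy / energetic
                       [0.2, 0.1],   # sad / calm
                       [0.1, 0.9],   # angry
                       [0.5, 0.3],   # neutral
                       [0.3, 0.2]])  # afraid / subdued

# Listener's detected mood -> three nearest songs, as in the abstract
listener_mood = np.array([[0.25, 0.15]])
nn = NearestNeighbors(n_neighbors=3).fit(song_moods)
_, idx = nn.kneighbors(listener_mood)
recommended = [songs[i] for i in idx[0]]
print(recommended)
```

The broadcaster would then offer these three titles to the listener, closing the loop the abstract describes.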


2020 ◽  
Author(s):  
Aras Masood Ismael ◽  
Ömer F. Alçin ◽  
Karmand H. Abdalla ◽  
Abdulkadir K. Sengur

Abstract. In this paper, a novel approach based on two-stepped majority voting is proposed for efficient EEG-based emotion classification. Emotion recognition is important for human-machine interaction. Facial-feature and body-gesture based approaches have generally been proposed for emotion recognition; recently, EEG-based approaches have become more popular. In the proposed approach, the raw EEG signals are initially low-pass filtered for noise removal, and band-pass filters are used for rhythm extraction. For each rhythm, the best-performing EEG channels are determined based on wavelet-based entropy features and fractal-dimension based features. The k-nearest neighbor (KNN) classifier is used for classification. The best five EEG channels are used in majority voting to obtain the prediction for each EEG rhythm. In the second majority voting step, the predictions from all rhythms are combined into a final prediction. The DEAP dataset is used in the experiments, and classification accuracy, sensitivity, and specificity are used as performance evaluation metrics. The experiments classify the emotions into two binary problems: high valence (HV) vs. low valence (LV) and high arousal (HA) vs. low arousal (LA). The experiments show that 86.3% HV vs. LV discrimination accuracy and 85.0% HA vs. LA discrimination accuracy are obtained. The obtained results are also compared with some existing methods; the comparisons show that the proposed method has potential for EEG-based emotion classification.
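The two-stepped majority voting itself is straightforward to sketch. The per-channel predictions below are made-up placeholders standing in for the per-channel KNN outputs described above.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among a list of predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Step 1: per rhythm, vote over the best five EEG channels
# (labels are illustrative: 0 = low valence, 1 = high valence)
rhythm_channel_preds = {
    "delta": [1, 1, 0, 1, 0],
    "theta": [0, 0, 0, 1, 1],
    "alpha": [1, 1, 1, 0, 0],
    "beta":  [1, 0, 1, 1, 0],
    "gamma": [0, 0, 1, 0, 0],
}
rhythm_preds = [majority_vote(p) for p in rhythm_channel_preds.values()]

# Step 2: the final prediction votes over the per-rhythm predictions
final = majority_vote(rhythm_preds)
print(final)  # 1
```

With five channels and five rhythms, both voting stages have an odd number of voters, so no tie-breaking rule is needed for binary labels.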


Author(s):  
Luma Tabbaa ◽  
Ryan Searle ◽  
Saber Mirzaee Bafti ◽  
Md Moinul Hossain ◽  
Jittrapol Intarasisrisawat ◽  
...  

The paper introduces a multimodal affective dataset named VREED (VR Eyes: Emotions Dataset) in which emotions were triggered using immersive 360° Video-Based Virtual Environments (360-VEs) delivered via a Virtual Reality (VR) headset. Behavioural (eye tracking) and physiological signals (Electrocardiogram (ECG) and Galvanic Skin Response (GSR)) were captured, together with self-reported responses, from healthy participants (n=34) experiencing 360-VEs (n=12, 1–3 min each) selected through focus groups and a pilot trial. Statistical analysis confirmed the validity of the selected 360-VEs in eliciting the desired emotions. Preliminary machine learning analysis was carried out, demonstrating performance on par with the state of the art reported in the affective computing literature for non-immersive modalities. VREED is among the first multimodal VR datasets for emotion recognition using behavioural and physiological signals. VREED is made publicly available on Kaggle. We hope that this contribution encourages other researchers to utilise VREED further to understand emotional responses in VR and ultimately enhance the design of VR experiences in applications where emotional elicitation plays a key role, e.g., healthcare, gaming, and education.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4448 ◽  
Author(s):  
Günther Sagl ◽  
Bernd Resch ◽  
Andreas Petutschnig ◽  
Kalliopi Kyriakou ◽  
Michael Liedlgruber ◽  
...  

Wearable sensors are increasingly used in research, as well as for personal and private purposes. A variety of scientific studies are based on physiological measurements from such rather low-cost wearables. That said, how accurate are such measurements compared to measurements from well-calibrated, high-quality laboratory equipment used in psychological and medical research? The answer to this question undoubtedly impacts the reliability of a study's results. In this paper, we demonstrate an approach to quantify the accuracy of low-cost wearables in comparison to high-quality laboratory sensors. To this end, we developed a benchmark framework for physiological sensors that covers the entire workflow from sensor data acquisition to the computation and interpretation of diverse correlation and similarity metrics. We evaluated this framework in a study with 18 participants. Each participant was equipped with one high-quality laboratory sensor and two wearables. These three sensors simultaneously measured physiological parameters such as heart rate and galvanic skin response while the participant was cycling on an ergometer following a predefined routine. The results of our benchmarking show that cardiovascular parameters (heart rate, inter-beat interval, heart rate variability) yield very high correlations and similarities. Measurement of galvanic skin response, which is a more delicate undertaking, resulted in lower, but still reasonable, correlations and similarities. We conclude that the benchmarked wearables provide physiological measurements such as heart rate and inter-beat interval with an accuracy close to that of the professional high-end sensor, but the accuracy varies more for other parameters, such as galvanic skin response.
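The core of such a benchmark is computing correlation and similarity metrics between time-aligned signals from the wearable and the reference device. A minimal sketch on simulated heart-rate traces follows; the noise level and the particular metrics (Pearson, Spearman, RMSE) are chosen for illustration, not taken from the paper's framework.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(2)

# Simulated traces: the "wearable" is the lab signal plus measurement noise
lab_hr = 70 + 10 * np.sin(np.linspace(0, 6 * np.pi, 300))
wearable_hr = lab_hr + rng.normal(0, 1.5, size=300)

r, _ = pearsonr(lab_hr, wearable_hr)            # linear agreement
rho, _ = spearmanr(lab_hr, wearable_hr)         # rank (monotonic) agreement
rmse = np.sqrt(np.mean((lab_hr - wearable_hr) ** 2))  # absolute error, bpm
print(f"Pearson r={r:.3f}, Spearman rho={rho:.3f}, RMSE={rmse:.2f} bpm")
```

High correlation with low RMSE corresponds to the cardiovascular-parameter result above; a noisier channel like GSR would show lower correlations under the same comparison.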


2019 ◽  
Vol 2 ◽  
pp. 1-8
Author(s):  
Kalliopi Kyriakou ◽  
Bernd Resch

Abstract. Over the last years, we have witnessed an increasing interest in urban health research using physiological sensors. There is a rich repertoire of methods for stress detection using various physiological signals and algorithms. However, most studies focus mainly on the analysis of the physiological signals and disregard the spatial analysis of the extracted geo-located emotions. Methodologically, the use of hotspot maps created through point density analysis dominates previous studies, but this method may lead to inaccurate or misleading detection of high-intensity stress clusters. This paper proposes a methodology for the spatial analysis of moments of stress (MOS). In the first step, MOS are identified through a rule-based algorithm analysing galvanic skin response and skin temperature measured by low-cost wearable physiological sensors. For the spatial analysis, we introduce a MOS ratio for the geo-located detected MOS. This ratio normalises the detected MOS in nearby areas over all available records for the area. The MOS ratio is then fed into a hot spot analysis to identify hot and cold spots. To validate our methodology, we carried out two real-world field studies evaluating the accuracy of our approach. We show that the proposed approach is able to identify spatial patterns in urban areas that correspond to self-reported stress.
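The MOS ratio can be sketched as a simple per-area normalisation. The grid-cell counts below are invented, and the subsequent hot spot analysis (e.g. a Getis-Ord-style statistic) is omitted for brevity.

```python
import numpy as np

# Hypothetical grid cells: detected moments of stress (MOS) and the total
# number of records falling in each cell (values are illustrative only)
mos_counts = np.array([3, 12, 0, 5])
total_records = np.array([30, 40, 25, 10])

# The MOS ratio normalises detected MOS by all available records per area,
# so a busy street is not flagged "stressful" only because it has more data
mos_ratio = mos_counts / total_records
print(mos_ratio)
```

This is why the ratio outperforms raw point-density maps: cell four, with the fewest raw MOS detections after cell two, has the highest ratio (0.5) once record volume is accounted for.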


Author(s):  
Bimo Sunarfri Hantono ◽  
Lukito Edi Nugroho ◽  
Paulus Insap Santosa ◽  
...  

Mental stress is an undesirable condition for everyone. Increased stress can cause many problems, such as depression, heart attacks, and strokes. Psychophysiological conditions can be used as a reference for a person's mental stress state. The development of mobile device technology, along with the accompanying sensors, makes it possible to measure the psychophysiological condition of its users. Heart rate can be measured from the photoplethysmography signal using a smartphone or smartwatch. Heart rate variability is currently one of the most studied methods for assessing mental stress. Our objective is to analyze subjects' stress levels while they perform tasks on a smartphone. This study involved 41 students as respondents. Their heart rate was recorded using a smartphone while they were doing n-back tasks. The n-back task is a performance task used to measure working memory and working memory capacity; in this study, it was also used as a stressor. The heart rate dataset and n-back task results were then processed and analyzed using machine learning to determine stress levels. Compared with three other algorithms (neural network, discriminant analysis, and naïve Bayes), the k-nearest neighbor algorithm proved the most appropriate for classification in both time and frequency domain analysis.
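Heart rate variability analyses of this kind typically start from time-domain features computed over RR intervals. The abstract does not list the study's exact features, so the sketch below shows generic ones (SDNN, RMSSD, mean HR) on invented interval data.

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Common time-domain HRV features from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "SDNN": rr.std(ddof=1),                 # overall variability
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat variability
        "mean_HR": 60000.0 / rr.mean(),         # beats per minute
    }

# Illustrative RR intervals around 800 ms (i.e., 75 bpm)
rr = [790, 810, 805, 795, 820, 780, 800]
feats = hrv_time_features(rr)
print(feats)
```

Lower RMSSD under the n-back stressor (reduced parasympathetic activity) is the kind of pattern such a feature set lets a classifier pick up.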


2020 ◽  
Vol 10 (3) ◽  
pp. 769-774
Author(s):  
Shiliang Shao ◽  
Ting Wang ◽  
Chunhe Song ◽  
Yun Su ◽  
Xingchi Chen ◽  
...  

In this paper, eight novel instantaneous indices of short-time heart rate variability (HRV) signals are proposed for the prediction of cardiovascular and cerebrovascular events. The indices are based on Bubble Entropy (BE) and Singular Value Decomposition (SVD). The indices are calculated as follows. First, the instantaneous amplitude (IA), instantaneous frequency (IF), and instantaneous phase (IP) of the HRV signals are estimated by the Hilbert transform. Second, from the HRV, IA, IP, and IF series, the BE and singular value (SV) are calculated, yielding eight novel indices: BEHRV, BEIA, BEIF, BEIP, SVHRV, SVIA, SVIF, and SVIP. Finally, to evaluate the performance of the eight indices for prediction of cardiovascular and cerebrovascular events, a difference analysis is carried out by t-test. According to the p values, seven of the eight indices (BEHRV, BEIA, BEIF, BEIP, SVIA, SVIF, and SVIP) discriminate between the event (E) and non-event (N) groups. The K-nearest neighbor (KNN), support vector machine (SVM), and decision tree (DT) classifiers are applied to the seven indices. The results show that the seven indices are significantly different between the event and non-event groups, and the SVM classifier achieves the highest classification accuracy (Acc) and specificity (Spe) for prediction of cardiovascular and cerebrovascular events: 88.31% and 90.19%, respectively.
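The first step, estimating IA, IF, and IP via the Hilbert transform, can be sketched as follows on a simulated HRV-like series; the sampling rate and test signal are illustrative stand-ins for a resampled RR-interval series.

```python
import numpy as np
from scipy.signal import hilbert

fs = 4.0  # Hz, a typical resampling rate for RR-interval series
t = np.arange(0, 60, 1 / fs)

# Simulated HRV-like signal: low- and high-frequency oscillations
hrv = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)

analytic = hilbert(hrv)                     # analytic signal via Hilbert transform
ia = np.abs(analytic)                       # instantaneous amplitude (IA)
ip = np.unwrap(np.angle(analytic))          # instantaneous phase (IP)
inst_f = np.diff(ip) * fs / (2 * np.pi)     # instantaneous frequency (IF)

# Each series (HRV, IA, IF, IP) would then feed the Bubble Entropy and
# singular value computations that produce the eight indices
print(ia.shape, ip.shape, inst_f.shape)
```

Note that IF, obtained by differentiating the unwrapped phase, is one sample shorter than IA and IP.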

