VREED

Author(s):  
Luma Tabbaa ◽  
Ryan Searle ◽  
Saber Mirzaee Bafti ◽  
Md Moinul Hossain ◽  
Jittrapol Intarasisrisawat ◽  
...  

The paper introduces a multimodal affective dataset named VREED (VR Eyes: Emotions Dataset) in which emotions were triggered using immersive 360° Video-Based Virtual Environments (360-VEs) delivered via a Virtual Reality (VR) headset. Behavioural signals (eye tracking) and physiological signals (Electrocardiogram (ECG) and Galvanic Skin Response (GSR)) were captured, together with self-reported responses, from healthy participants (n=34) experiencing 360-VEs (n=12, 1--3 min each) selected through focus groups and a pilot trial. Statistical analysis confirmed the validity of the selected 360-VEs in eliciting the desired emotions. Preliminary machine learning analysis was carried out, achieving performance comparable to the state of the art reported in the affective computing literature for non-immersive modalities. VREED is among the first multimodal VR datasets for emotion recognition using behavioural and physiological signals. VREED is made publicly available on Kaggle. We hope that this contribution encourages other researchers to utilise VREED to further understand emotional responses in VR and, ultimately, to enhance the design of VR experiences in applications where emotional elicitation plays a key role, e.g., healthcare, gaming, and education.
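As a rough illustration of the kind of preliminary machine learning analysis described above, the sketch below pools per-trial ECG/GSR/eye features and cross-validates a classifier over affect labels. The feature matrix, the four-quadrant labels, and the choice of an SVM are assumptions for illustration, not VREED's actual pipeline; the data here are synthetic placeholders.

```python
# Hedged sketch: quadrant emotion classification from pooled per-trial
# features, in the spirit of the preliminary analysis described above.
# Feature layout, labels, and classifier are assumptions, not VREED's method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(34 * 12, 8))     # one feature row per participant/VE trial
y = rng.integers(0, 4, size=34 * 12)  # four affect quadrants (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```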

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4520 ◽  
Author(s):  
Uria-Rivas ◽  
Rodriguez-Sanchez ◽  
Santos ◽  
Vaquero ◽  
Boticario

Physiological sensors can be used to detect changes in the emotional state of users with affective computing. This has lately been applied in the educational domain, with the aim of better supporting learners during the learning process. For this purpose, we have developed the AICARP (Ambient Intelligence Context-aware Affective Recommender Platform) infrastructure, which detects changes in the emotional state of the user and provides personalized multisensorial support to help manage that state by taking advantage of ambient intelligence features. We have developed a third version of this infrastructure, AICARP.V3, which addresses several problems detected in the data acquisition stage of the second version (i.e., intrusiveness of the pulse sensor, poor resolution and low signal-to-noise ratio in the galvanic skin response sensor, and slow response time of the temperature sensor) and extends the capabilities to integrate new actuators. This improved version incorporates a new acquisition platform (shield) called PhyAS (Physiological Acquisition Shield), which reduces the number of control units to only one, and supports both gathering physiological signals with better precision and delivering multisensory feedback with more flexibility, by means of new actuators that can be added or removed on top of that single shield. The improvements in the quality of the acquired signals allow better recognition of emotional states. As a result, AICARP.V3 provides more accurate personalized emotional support to the user, based on a rule-based approach that triggers multisensorial feedback when necessary. This represents progress on an open problem: developing systems that perform as effectively as a human expert in a complex task such as recognizing emotional states.
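A minimal sketch of the rule-based triggering idea named above: each rule maps a physiological reading past a threshold to a feedback action. The signal names, thresholds, and actuator commands here are hypothetical, not AICARP's actual rules.

```python
# Minimal rule-based feedback trigger, illustrating the approach described
# above. All signal names, thresholds, and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class Rule:
    signal: str      # physiological channel the rule watches
    threshold: float # trigger level for that channel
    action: str      # multisensorial feedback to deliver

def evaluate(rules, readings):
    """Return the actions whose signal reading exceeds its threshold."""
    return [r.action for r in rules if readings.get(r.signal, 0.0) > r.threshold]

rules = [Rule("gsr", 4.2, "play_calming_audio"),
         Rule("pulse", 95.0, "dim_ambient_light")]
print(evaluate(rules, {"gsr": 5.0, "pulse": 80.0}))  # -> ['play_calming_audio']
```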


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 770
Author(s):  
Alessandro Tonacci ◽  
Lucia Billeci ◽  
Irene Di Mambro ◽  
Roberto Marangoni ◽  
Chiara Sanmartin ◽  
...  

Wearable sensors are nowadays widely employed to assess physiological signals derived from the human body without being obtrusive. One of the most intriguing fields of application for such systems is the assessment of physiological responses to sensory stimuli. In this specific regard, the main psychophysiological drivers of olfactory-related pleasantness are not yet known: the current literature has demonstrated a relationship between odor familiarity and odor valence, but has not clarified which of the two domains drives the other. Here, we enrolled a group of university students who underwent olfactory training lasting 3 months. By analyzing electrocardiogram (ECG) and galvanic skin response (GSR) signals at the beginning and at the end of the training period, we observed different autonomic responses, with a more parasympathetically-mediated response at the end of the period than at the first evaluation. This possibly suggests that increased familiarity with the proposed stimuli leads to a greater tendency towards relaxation. Such results suggest potential applications in other domains, including personalized odor- and food-based treatments in neuropsychiatric and eating disorders.
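The abstract does not state which HRV metrics were used, but RMSSD is a standard vagally mediated (parasympathetic) index computed from ECG R-R intervals; the sketch below shows that computation on hypothetical interval values, purely as an assumed illustration of the kind of analysis described.

```python
# Hedged sketch: RMSSD over R-R intervals as a parasympathetic HRV index.
# The paper's exact metrics are not given; values below are hypothetical.
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(rr) ** 2))

pre = [812, 798, 805, 790, 801, 815]   # R-R intervals before training (ms)
post = [820, 845, 810, 850, 815, 840]  # after training (hypothetical values)
print(rmssd(pre), rmssd(post))  # higher RMSSD suggests stronger vagal tone
```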


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4037
Author(s):  
Aasim Raheel ◽  
Muhammad Majid ◽  
Majdi Alnowami ◽  
Syed Muhammad Anwar

Emotion recognition has increased the potential of affective computing by providing instant feedback from users and, thereby, a better understanding of their behavior. Physiological sensors have been used to recognize human emotions in response to audio and video content, which engage one (audition) and two (audition and vision) human senses, respectively. In this study, human emotions were recognized using physiological signals observed in response to tactile-enhanced multimedia content that engages three human senses (touch, vision, and audition). The aim was to give users an enhanced real-world sensation while engaging with multimedia content. To this end, four videos were selected and synchronized with an electric fan and a heater, based on timestamps within the scenes, to generate tactile-enhanced content with cold and hot air effects, respectively. Physiological signals, i.e., electroencephalography (EEG), photoplethysmography (PPG), and galvanic skin response (GSR), were recorded using commercially available sensors while participants experienced these tactile-enhanced videos. The acquired signals were pre-processed with a Savitzky-Golay smoothing filter to improve their quality. Frequency-domain features (rational asymmetry, differential asymmetry, and correlation) were extracted from EEG, time-domain features (variance, entropy, kurtosis, and skewness) from GSR, and heart rate and heart rate variability from PPG data. A K-nearest-neighbour classifier was applied to the extracted features to classify four emotions (happy, relaxed, angry, and sad). Our experimental results show that, among individual modalities, PPG-based features give the highest accuracy, 78.57%, compared with EEG- and GSR-based features. Fusing the EEG, GSR, and PPG features further improved the classification accuracy to 79.76% for the four emotions when interacting with tactile-enhanced multimedia.
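A compact sketch of the GSR side of this pipeline on a synthetic trace: Savitzky-Golay smoothing, then the four named time-domain features, then a K-nearest-neighbour classifier. The window length, histogram binning for entropy, and k value are assumptions, not the authors' exact parameters.

```python
# Sketch of the described pipeline on synthetic GSR data: Savitzky-Golay
# smoothing, time-domain features (variance, entropy, kurtosis, skewness),
# then KNN. Parameter choices here are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import entropy, kurtosis, skew
from sklearn.neighbors import KNeighborsClassifier

def gsr_features(x):
    x = savgol_filter(x, window_length=31, polyorder=3)  # smoothing step
    hist, _ = np.histogram(x, bins=16, density=True)     # distribution for entropy
    return [np.var(x), entropy(hist + 1e-12), kurtosis(x), skew(x)]

rng = np.random.default_rng(1)
X = np.array([gsr_features(rng.normal(size=512)) for _ in range(40)])
y = rng.integers(0, 4, size=40)  # happy / relaxed / angry / sad (placeholders)
print(KNeighborsClassifier(n_neighbors=3).fit(X, y).score(X, y))
```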


Author(s):  
M. Callejas-Cuervo ◽  
L.A. Martínez-Tejada ◽  
A.C. Alarcón-Aldana

This paper presents a system that identifies two values, arousal and valence, which represent the degree of stimulation in a subject, using Russell's model of affect as a reference. To identify emotions, a step-by-step structure is used: from statistical metrics of the physiological signals, the system generates the representative arousal value (direct correlation), and from the PANAS questionnaire it generates the valence value (inverse correlation), as a first approximation to emotion recognition without the use of artificial intelligence. The system gathers information on a subject's arousal activity using the following metrics: beats per minute (BPM), heart rate variability (HRV), the number of galvanic skin response (GSR) peaks in the skin conductance response (SCR), and forearm contraction time, derived from three physiological signals: electrocardiography (ECG), galvanic skin response (GSR), and electromyography (EMG).
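To make the arousal-side metrics concrete, the sketch below derives BPM and HRV from detected R-peaks and counts SCR peaks in a GSR trace. The sampling rate, peak-detection thresholds, and test signals are illustrative assumptions, not the authors' calibrated values.

```python
# Hedged sketch of the arousal metrics named above: BPM and HRV (SDNN)
# from R-peaks, plus the GSR peak count in the SCR. Thresholds are
# illustrative, not the authors' calibrated values.
import numpy as np
from scipy.signal import find_peaks

fs = 250.0  # sampling rate in Hz (assumed)

def bpm_and_hrv(ecg):
    peaks, _ = find_peaks(ecg, height=0.6, distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs                    # R-R intervals in seconds
    return 60.0 / rr.mean(), rr.std() * 1000.0  # BPM, HRV as SDNN in ms

def scr_peak_count(gsr):
    peaks, _ = find_peaks(gsr, prominence=0.05)
    return len(peaks)

t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63  # crude spike train at ~72 BPM
print(bpm_and_hrv(ecg), scr_peak_count(np.abs(np.sin(0.5 * t))))
```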


2017 ◽  
Author(s):  
Mohammadhossein Moghimi ◽  
Robert Stone ◽  
Pia Rotshtein

Detecting emotional responses in multimedia environments is an academically and technologically challenging research issue. In the domain of affective computing, researchers have employed a wide range of affective stimuli, from non-interactive, static stimuli (e.g., affective images) to highly interactive, dynamic environments (e.g., affective virtual realities), to measure and interpret human psychological and physiological emotional behaviours. Various psychophysiological parameters (e.g., electroencephalography, galvanic skin response, heart rate) have been investigated in order to detect and quantify human affective states. In this paper, we present a detailed literature review of over 33 affective computing studies undertaken since 1993. All aspects of these studies (stimulus type, pre-processing, windowing, features, classification technique, etc.) are reported in detail. We believe that this paper not only summarises the breadth of research over the past 20 years, but also clarifies various significant aspects and details of this increasingly valuable and relevant research area.


Author(s):  
Justin J Sanders ◽  
Emma Caponigro ◽  
Jonathan D. Ericson ◽  
Manisha Dubey ◽  
Ja-Nae Duane ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2381
Author(s):  
Jaewon Lee ◽  
Hyeonjeong Lee ◽  
Miyoung Shin

Mental stress can lead to traffic accidents by reducing a driver's concentration or increasing fatigue while driving. In recent years, demand has increased for methods that detect drivers' stress in advance so as to prevent dangerous situations. We therefore propose a novel method for detecting driving stress that feeds nonlinear representations of short-term (30 s or less) physiological signals to multimodal convolutional neural networks (CNNs). Specifically, from short-term hand/foot galvanic skin response (HGSR, FGSR) and heart rate (HR) input signals, we first generate corresponding two-dimensional nonlinear representations called continuous recurrence plots (Cont-RPs). Second, from the Cont-RPs, we use multimodal CNNs to automatically extract representative features of the FGSR, HGSR, and HR signals that can effectively differentiate between stressed and relaxed states. Lastly, we concatenate the three extracted features into one integrated representation vector, which we feed to a fully connected layer for classification. For evaluation, we use a public stress dataset collected in actual driving environments. Experimental results show that the proposed method performs well on 30-s signals, with an overall accuracy of 95.67%, an improvement of approximately 2.5–3% over previous work. Additionally, on 10-s signals, the proposed method achieves 92.33% classification accuracy, which is similar to or better than the performance of other methods using long-term signals (over 100 s).
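A continuous recurrence plot differs from a classical recurrence plot in that the pairwise distance matrix of the embedded signal is kept as a real-valued image rather than thresholded to binary. The sketch below computes one for a short 1-D trace; the embedding dimension and delay are assumptions, not the paper's parameters.

```python
# Sketch of a continuous recurrence plot (Cont-RP) for a short 1-D signal:
# delay-embed the signal, then keep the raw pairwise distance matrix
# instead of thresholding it. Embedding parameters are assumptions.
import numpy as np

def continuous_rp(x, dim=3, delay=4):
    n = len(x) - (dim - 1) * delay
    emb = np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return d / d.max()  # normalized distance map, usable as a CNN input image

sig = np.sin(np.linspace(0, 8 * np.pi, 300))  # stand-in for a short GSR/HR trace
rp = continuous_rp(sig)
print(rp.shape)  # (292, 292) image-like input for a 2-D CNN
```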


2020 ◽  
Vol 1 (2) ◽  
pp. 168-184
Author(s):  
Kai Huang ◽  
Elena Nicoladis

Some previous research has suggested that words in multilinguals' first language, particularly taboo words, evoke a greater emotional response than words in any subsequent language. In the present study, we elicited French-English bilinguals' emotional responses to words in both languages. We expected taboo words to evoke a higher emotional response than positive or negative words in both languages. We also tested the hypothesis that the earlier bilinguals had acquired a language, the higher their emotional responses would be. French-English bilinguals with long exposure to both French and English participated. Their galvanic skin response (GSR) was measured as they processed positive (e.g., mother), negative (e.g., war), and taboo (e.g., pussy) words in both French and English. As predicted, GSR responses to taboo words were high in both languages. Surprisingly, English taboo words elicited higher GSR responses than French ones, and age of acquisition was not related to GSR. We argue that these results reflect the context in which this study took place (i.e., an English-majority context). If this interpretation is correct, then bilinguals' emotional responses to words may be more strongly linked to recent emotional interactions than to childhood experiences.


2017 ◽  
Vol 29 (05) ◽  
pp. 1750032 ◽  
Author(s):  
Ateke Goshvarpour ◽  
Atefeh Goshvarpour ◽  
Ataollah Abbasi

Sleep deprivation adversely affects psychological and physiological functions, but little information is available about the effects of poor sleep on individuals' emotional responses. In the present study, the effect of one night of insufficient sleep on emotional responses to visual stimuli was appraised using electrocardiogram (ECG) measures. We also tested the hypothesis that men and women differ in affective response depending on sleep quantity. ECG signals of university students (20 men and 20 women) were recorded while they viewed affective pictures. Images were selected from the International Affective Picture System and categorized as happy, fearful, sad, and relaxing. The data were characterized using 14 time- and frequency-based features. The Wilcoxon statistical test was applied to examine significant differences between the insufficient-sleep and normal-sleep groups. Insufficient sleep had a significant effect on the ECG parameters of emotional responses. Mode, RMS, and mean power showed the largest effect sizes across all affective states. Among the affective states, the most significant differences between the two groups were observed while watching sad and relaxing pictures. In addition, analysis of affective ECG measures in men and women revealed that the effect of sleep deprivation is more pronounced in men. Sleep is thus associated with emotional responses, and our results confirmed an effect of gender on affective physiological reactions.
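For two independent groups of participants, the Wilcoxon comparison named above corresponds to the rank-sum test; the sketch below applies it to one ECG feature (RMS) using synthetic placeholder data, purely as an assumed illustration of the described analysis.

```python
# Hedged sketch of the group comparison described: a Wilcoxon rank-sum
# test on one ECG feature (RMS) between insufficient-sleep and
# normal-sleep groups. The data below are synthetic placeholders.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(2)
rms_sleep_deprived = rng.normal(0.42, 0.05, size=20)  # 20 participants/group
rms_normal_sleep = rng.normal(0.47, 0.05, size=20)

stat, p = ranksums(rms_sleep_deprived, rms_normal_sleep)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.4f}")
```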


2014 ◽  
Vol 23 (3) ◽  
pp. 253-266 ◽  
Author(s):  
Daniele Leonardis ◽  
Antonio Frisoli ◽  
Michele Barsotti ◽  
Marcello Carrozzino ◽  
Massimo Bergamasco

This study investigates how the sense of embodiment in virtual environments can be enhanced by multisensory feedback related to body movements. In particular, we analyze the effect of combined vestibular and proprioceptive afferent signals on perceived embodiment within an immersive walking scenario. These feedback signals were applied by means of a motion platform and by tendon vibration of the lower limbs, evoking illusory leg movements. Vestibular and proprioceptive feedback were provided congruently with a rich virtual scenario reconstructing a real city, rendered on a head-mounted display (HMD). The sense of embodiment was evaluated through both self-reported questionnaires and physiological measurements in two experimental conditions: with all sensory feedback active (the highly embodied condition) and with visual feedback only. Participants' self-reports show that the addition of both vestibular and proprioceptive feedback increases the sense of embodiment and the individual's feeling of presence associated with the walking experience. Furthermore, the embodied condition significantly increased the measured galvanic skin response and respiration rate. These results suggest that vestibular and proprioceptive feedback can improve a participant's sense of embodiment in the virtual experience.

