SCOUT and affective interaction design: Evaluating physiological signals for usability in emotional processing

Author(s):  
Nor'ain Mohd Yusoff ◽  
Siti Salwah Salim

Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3510 ◽  
Author(s):  
Gisela Pinto ◽  
João M. Carvalho ◽  
Filipa Barros ◽  
Sandra C. Soares ◽  
Armando J. Pinho ◽  
...  

Emotional responses are associated with distinct bodily alterations and are crucial for fostering adaptive responses, well-being, and survival. Emotion identification may improve people's emotion regulation strategies and their interaction with multiple life contexts. Several studies have investigated emotion classification systems, but most are based on the analysis of only one, a few, or isolated physiological signals. Understanding how informative the individual signals are, and how their combination works, would allow the development of more cost-effective, informative, and objective systems for emotion detection, processing, and interpretation. In the present work, electrocardiogram, electromyogram, and electrodermal activity signals were processed in order to find a physiological model of emotions. Both unimodal and multimodal approaches were used to analyze which signal, or combination of signals, best describes an emotional response, using a sample of 55 healthy subjects. The method was divided into three steps: (1) signal preprocessing; (2) feature extraction; (3) classification using random forests and neural networks. Results suggest that the electrocardiogram (ECG) signal is the most effective for emotion classification. Yet the combination of all signals provides the best emotion identification performance, with each signal contributing crucial information to the system. This physiological model of emotions has important research and clinical implications: it provides valuable information about the value and weight of physiological signals for emotion classification, which can drive effective evaluation, monitoring, and intervention for emotional processing and regulation across multiple contexts.
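The three-step method described above (preprocessing, feature extraction, classification) can be sketched as follows. This is an illustrative outline only, using synthetic signals, simple statistical features, and placeholder binary labels; it is not the authors' actual pipeline, feature set, or data.

```python
# Illustrative sketch of a unimodal-vs-multimodal emotion-classification
# pipeline (synthetic data; NOT the authors' actual features or labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_samples = 55, 1000  # e.g. one trial per subject

# (1) "Preprocessing": here just synthetic ECG/EMG/EDA traces per trial.
signals = {name: rng.normal(size=(n_trials, n_samples))
           for name in ("ecg", "emg", "eda")}
labels = rng.integers(0, 2, size=n_trials)  # placeholder emotion labels

# (2) Feature extraction: simple per-trial statistics for each signal.
def extract_features(x):
    return np.column_stack([x.mean(axis=1), x.std(axis=1),
                            np.abs(np.diff(x, axis=1)).mean(axis=1)])

features = {name: extract_features(x) for name, x in signals.items()}

# (3) Classification: score each signal alone, then all combined.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
for name, X in features.items():
    print(f"{name} alone: {cross_val_score(clf, X, labels, cv=5).mean():.2f}")
X_all = np.hstack(list(features.values()))
print(f"all combined: {cross_val_score(clf, X_all, labels, cv=5).mean():.2f}")
```

With real signals, the per-signal scores would show which modality is most informative on its own, while the concatenated feature matrix tests whether the modalities carry complementary information.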


2007 ◽  
Vol 40 (16) ◽  
pp. 398-402
Author(s):  
In Ki Kim ◽  
Hyungsup Kim ◽  
Cheol Lee ◽  
Woojin Chang ◽  
Myung Hwan Yun

Work ◽  
2012 ◽  
Vol 41 ◽  
pp. 5057-5061 ◽  
Author(s):  
Dimitrios Gkouskos ◽  
Fang Chen

2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Cheul Young Park ◽  
Narae Cha ◽  
Soowon Kang ◽  
Auk Kim ◽  
Ahsan Habib Khandoker ◽  
...  

Abstract: Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains in the lack of naturalistic affective interaction data. Most existing emotion datasets were collected in constrained environments and therefore do not support studying idiosyncratic emotions arising in the wild. Studying emotions in the context of social interactions thus requires a novel dataset, and K-EmoCon is such a multimodal dataset, with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute-long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset accommodating multiperspective assessment of emotions during social interactions.
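The multiperspective, 5-second-interval annotation scheme described above can be illustrated with a small sketch. The record layout and field names below are hypothetical, chosen for illustration, and do not reflect K-EmoCon's actual file format or rating scales.

```python
# Hypothetical sketch of multiperspective 5-second-interval annotations
# (record layout and scales are illustrative, not K-EmoCon's schema).
from statistics import mean

# Each record: (window start in seconds, perspective, arousal, valence).
annotations = [
    (0, "self", 3, 4), (0, "partner", 2, 4), (0, "observer", 3, 3),
    (5, "self", 4, 2), (5, "partner", 4, 3), (5, "observer", 5, 2),
]

# Aggregate the self, partner, and observer ratings per 5-second window.
def aggregate(records):
    windows = {}
    for t, perspective, arousal, valence in records:
        windows.setdefault(t, []).append((arousal, valence))
    return {t: (mean(a for a, _ in vals), mean(v for _, v in vals))
            for t, vals in windows.items()}

print(aggregate(annotations))
```

Averaging is only one way to reconcile the three perspectives; keeping them separate, as K-EmoCon does, is precisely what enables studying how self-reports diverge from partner and observer ratings.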


2018 ◽  
Vol 30 (4) ◽  
pp. 196-206 ◽  
Author(s):  
Byungho Park ◽  
Rachel L. Bailey

Abstract: In an effort to quantify message complexity in a way that would allow moment-to-moment predictions about viewers' cognitive and emotional processing, Lang and her colleagues devised the coding system information introduced (or ii). This coding system quantifies the number of structural features known to consume cognitive resources and considers it in combination with the number of camera changes (cc) in the video, which supply additional cognitive resources owing to their elicitation of an orienting response. This study further validates ii using psychophysiological responses that index cognitive resource allocation and recognition memory. We also pose two novel hypotheses regarding the confluence of controlled and automatic processing and the effect of cognitive overload on the enjoyment of messages. Thirty television advertisements, all 20 s in length, were selected from a pool of 172 based on their ii/cc ratio and ratings of their arousing content. Heart rate showed significant deceleration over time (indicative of increased cognitive resource allocation) for messages with greater ii/cc ratios. Further, recognition memory worsened as ii/cc increased. Message complexity increased both automatic and controlled allocations to processing, and the most complex messages may have created a state of cognitive overload, which participants in this television context nevertheless experienced as enjoyable.
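The ii/cc ratio used to select messages reduces to a simple quotient of two per-message counts. The sketch below assumes hypothetical counts of information-introduced codings and camera changes; the ad names and numbers are illustrative, not the study's data.

```python
# Minimal sketch: information introduced (ii) per camera change (cc).
# Ad names and counts are illustrative, not the study's actual data.
def ii_cc_ratio(ii_count, cc_count):
    """Structural features introduced per camera change."""
    if cc_count == 0:
        raise ValueError("a message needs at least one camera change")
    return ii_count / cc_count

ads = {"ad_a": (24, 12), "ad_b": (30, 6), "ad_c": (18, 18)}
ratios = {name: ii_cc_ratio(ii, cc) for name, (ii, cc) in ads.items()}

# Higher ratios mean more resource-consuming structural features relative
# to the extra resources supplied by orienting responses to camera changes.
print(sorted(ratios, key=ratios.get, reverse=True))  # most complex first
```

Under the study's logic, messages at the high end of this ranking are the candidates for cognitive overload, since demand from ii outpaces the resources recruited by cc.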


PsycCRITIQUES ◽  
2006 ◽  
Vol 51 (22) ◽  
Author(s):  
Jonathan S. Abramowitz ◽  
Elizabeth L. Moore
