Fuzzy sets and systems in building closed-loop affective computing systems for human-computer interaction: Advances and new research directions

Author(s):  
Dongrui Wu


2021 ◽  
Vol 2 (02) ◽  
pp. 52-58
Author(s):  
Sharmeen M. Saleem Abdullah ◽  
Siddeeq Y. Ameen ◽  
Mohammed Mohammed Sadeeq ◽  
Subhi Zeebaree

New research into human-computer interaction seeks to take the user's emotional state into account in order to provide a seamless human-computer interface. This would allow such systems to be deployed in widespread fields, including education and medicine. Human emotions can be recognized through multiple modalities, including facial expressions and images, physiological signals, and neuroimaging techniques. This paper presents a review of emotion recognition from multimodal signals using deep learning and compares their applications based on current studies. Multimodal affective computing systems are studied alongside unimodal solutions, as they offer higher classification accuracy. Accuracy varies according to the number of emotions observed, the features extracted, the classification scheme, and database consistency. The paper also addresses numerous theories on the methodology of emotion detection and recent findings in emotion science. This should encourage further studies toward a better understanding of physiological signals, the current state of the science, and its open problems in emotional awareness.


Author(s):  
Lesley Axelrod ◽  
Kate Hone

In a culture which places increasing emphasis on happiness and wellbeing, multimedia technologies incorporate emotional design to gain a commercial edge. This chapter explores affective computing and illustrates how innovative technologies are capable of emotion recognition and display. Research in this domain has emphasised solving the technical difficulties involved through the design of ever more complex recognition algorithms. But fundamental questions about the use of such technology remain neglected. Can it really improve human-computer interaction? For which types of application is it suitable? How is it best implemented? What ethical considerations are there? We review this field and discuss the need for user-centred design. We describe and give evidence from a study that explores some of the user issues in affective computing.


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2308 ◽  
Author(s):  
Dilana Hazer-Rau ◽  
Sascha Meudt ◽  
Andreas Daucher ◽  
Jennifer Spohrs ◽  
Holger Hoffmann ◽  
...  

In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based, HCI-relevant emotional and cognitive load states. It consists of six experimental sequences, inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to level off the physiological reactions, and a summary of results. Further, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the University of Ulm Multimodal Affective Corpus (uulmMAC), consisting of two homogeneous samples of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities, including 4 × video, 3 × audio, and 7 × biophysiological, depth, and pose streams. Additional labels and annotations were also collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in the final uulmMAC dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our uulmMAC database is a valuable contribution to the field of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it comprises a large number of subjects and allows transtemporal investigations. Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.


Author(s):  
Xiaojuan Ma

Engagement, the key construct that describes the synergy between humans (users) and technology (computing systems), is gaining increasing attention in academia and industry. Human-Engaged AI (HEAI) is an emerging research paradigm that aims to jointly advance the capability and capacity of humans and AI technology. In this paper, we first review the key concepts in HEAI and its driving force arising from the integration of Artificial Intelligence (AI) and Human-Computer Interaction (HCI). Then we present an HEAI framework developed from our own work.


2021 ◽  
Vol 12 (1) ◽  
pp. 1-20
Author(s):  
Franklin M. da C. Lima ◽  
Gabriel A. M. Vasiljevic ◽  
Leonardo Cunha De Miranda ◽  
M. Cecília C. Baranauskas

Analyzing how the conferences of a given research field are evolving helps the academic community, in that researchers can better situate their work towards the advancement of knowledge in their area of expertise. Thus, in this work we present the results of a correlation analysis performed within and between conferences in the field of Human-Computer Interaction, using data from the Human-Computer Interaction International (HCII) conference and from the Brazilian Symposium on Human Factors in Computing Systems (IHC). More than 209 thousand words from the titles of over 18 thousand publications from both conferences were analyzed in total, using different quantitative, qualitative, and visualization methods, including statistical tests. The analysis of words from the titles of publications from both conferences and the comparison of the ranking of these words indicate, amongst other results, that there is a significant difference in the main and most covered topics of each conference.
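As an illustrative sketch only (not the paper's actual pipeline), a word-ranking comparison between two conference title corpora could look like the following; the sample title lists and the rank-distance metric are hypothetical stand-ins for the HCII/IHC data and statistical tests:

```python
from collections import Counter
from itertools import chain

# Hypothetical title samples standing in for the HCII and IHC corpora.
hcii_titles = [
    "Adaptive User Interfaces for Mobile Interaction",
    "Gesture Recognition in Virtual Reality",
]
ihc_titles = [
    "Accessibility in User Interfaces",
    "Interaction Design for Mobile Learning",
]

def word_ranking(titles):
    """Rank the words of all titles by frequency (most frequent first)."""
    words = chain.from_iterable(t.lower().split() for t in titles)
    return [w for w, _ in Counter(words).most_common()]

def rank_distance(rank_a, rank_b):
    """Average absolute rank difference over shared words: a rough
    between-corpus divergence measure (an assumption, not the paper's metric)."""
    shared = set(rank_a) & set(rank_b)
    if not shared:
        return None
    return sum(abs(rank_a.index(w) - rank_b.index(w)) for w in shared) / len(shared)

hcii_rank = word_ranking(hcii_titles)
ihc_rank = word_ranking(ihc_titles)
print(rank_distance(hcii_rank, ihc_rank))
```

A larger distance suggests the two conferences emphasize different topics; on the real corpora one would normalize language, remove stopwords, and apply a proper rank-correlation test.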


RENOTE ◽  
2009 ◽  
Vol 7 (3) ◽  
pp. 390-400
Author(s):  
Maria Augusta Silveira Netto Nunes

This paper describes how human psychological aspects have been used in lifelike synthetic agents to provide believability during human-computer interaction. We present a brief survey of applications in which Affective Computing scientists have applied psychological aspects such as Emotion and Personality. Based on those aspects, we describe the efforts made by Affective Computing scientists to create a Markup Language to express and standardize Emotions. Because these efforts have not yet extended to Personality, we propose here a starting point for a Markup Language to express Personality.


2021 ◽  
Author(s):  
Michael J Lyons

Twenty-five years ago, my colleagues Miyuki Kamachi and Jiro Gyoba and I designed and photographed JAFFE, a set of facial expression images intended for use in a study of face perception. In 2019, without seeking permission or informing us, Kate Crawford and Trevor Paglen exhibited JAFFE in two widely publicized art shows. In addition, they published a nonfactual account of the images in the essay “Excavating AI: The Politics of Images in Machine Learning Training Sets.” The present article recounts the creation of the JAFFE dataset and unravels each of Crawford and Paglen’s fallacious statements. I also discuss JAFFE more broadly in connection with research on facial expression, affective computing, and human-computer interaction.

