Affective Computing
Recently Published Documents

Total documents: 701 (five years: 284)
H-index: 30 (five years: 8)

2022 ◽ Vol 59 (2) ◽ pp. 102822
Author(s): Anzhong Huang ◽ Yuling Zhang ◽ Jianping Peng ◽ Hong Chen

2022 ◽ Vol 15
Author(s): Chongwen Wang ◽ Zicheng Wang

Facial action unit (AU) detection is an important task in affective computing and has attracted extensive attention in the fields of computer vision and artificial intelligence. Previous studies of AU detection usually encode complex regional feature representations with manually defined facial landmarks and learn to model the relationships among AUs via graph neural networks. Although some progress has been achieved, it remains tedious for existing methods to capture the exclusive and concurrent relationships among different combinations of facial AUs. To circumvent this issue, we propose a new progressive multi-scale vision transformer (PMVT) to capture the complex relationships among different AUs across a wide range of expressions in a data-driven fashion. PMVT is based on a multi-scale self-attention mechanism that can flexibly attend to a sequence of image patches to encode the critical cues for AUs. Compared with previous AU detection methods, the benefits of PMVT are two-fold: (i) PMVT does not rely on manually defined facial landmarks to extract regional representations, and (ii) PMVT can encode facial regions with adaptive receptive fields, thus facilitating flexible representation of different AUs. Experimental results show that PMVT improves AU detection accuracy on the popular BP4D and DISFA datasets, obtaining consistent improvements over other state-of-the-art AU detection methods. Visualization results show that PMVT automatically perceives the discriminative facial regions needed for robust AU detection.
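
The abstract's core idea, attending over multi-scale patch tokens and emitting one logit per AU, can be sketched as below. This is a minimal illustration, not the paper's actual PMVT: the class name, patch sizes, and layer counts are assumptions, and the progressive fusion the paper describes is omitted.

```python
import torch
import torch.nn as nn

class MultiScaleAUDetector(nn.Module):
    """Sketch: patch embeddings at two scales feed a shared transformer
    encoder; a linear head emits one logit per facial action unit."""
    def __init__(self, num_aus=12, dim=128):
        super().__init__()
        # Coarse (32x32) and fine (16x16) patch embeddings (sizes assumed).
        self.embed_coarse = nn.Conv2d(3, dim, kernel_size=32, stride=32)
        self.embed_fine = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.head = nn.Linear(dim, num_aus)

    def forward(self, x):                      # x: (B, 3, H, W)
        coarse = self.embed_coarse(x).flatten(2).transpose(1, 2)
        fine = self.embed_fine(x).flatten(2).transpose(1, 2)
        tokens = torch.cat([coarse, fine], dim=1)  # multi-scale token sequence
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))   # (B, num_aus) logits

# AUs can co-occur, so detection is multi-label: per-AU sigmoid + BCE loss.
model = MultiScaleAUDetector()
logits = model(torch.randn(2, 3, 224, 224))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (2, 12)).float())
```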


2022
Author(s): Delphine Caruelle ◽ Poja Shams ◽ Anders Gustafsson ◽ Line Lervik-Olsen

After years of using AI to perform cognitive tasks, marketing practitioners can now use it to perform tasks that require emotional intelligence. This advancement is made possible by the rise of affective computing, which develops AI and machines capable of detecting and responding to human emotions. From market research, to customer service, to product innovation, the practice of marketing will likely be transformed by the rise of affective computing, as preliminary evidence from the field suggests. In this Idea Corner, we discuss this transformation and identify the research opportunities that it offers.


2022 ◽ Vol 12 (1) ◽ pp. 1-15
Author(s): Liu Hsin Lan ◽ Lin Hao-Chiang Koong ◽ Liang Yu-Chen ◽ Zeng Yu-cheng ◽ Zhan Kai-cheng ◽ ...

People's actions or behaviors often ensue from positive or negative emotions. Set off either subconsciously or intentionally, these fragmentary responses also reflect people's emotional vacillations at different times, albeit rarely noted or discovered. This system incorporates affective computing into an interactive installation: while a user performs an operation, the system instantaneously and randomly generates corresponding musical instrument sound effects and visual effects. The system is intended to enable users to interact with their emotions through the installation, yielding a personalized digital artwork while learning how emotions affect the causative factors of consciousness and personal behavior. At the end of the process, the project presents three questionnaires for users to fill in, both to assess the integrity and richness of the system through a survey and to further increase its stability and precision through progressive modifications aligned with user suggestions.
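
As a rough illustration of the affect-to-effect mapping such an installation performs, the sketch below pairs a detected valence/arousal reading with a randomly chosen instrument and a visual effect. The quadrant scheme, thresholds, and effect names are all hypothetical; the paper does not publish its mapping rules.

```python
import random

# Hypothetical affect-to-effect table keyed by valence/arousal quadrant.
EFFECTS = {
    ("positive", "high"): (["marimba", "trumpet"], "burst of warm particles"),
    ("positive", "low"):  (["harp", "music box"],  "slow golden ripples"),
    ("negative", "high"): (["distorted guitar"],   "jagged red flashes"),
    ("negative", "low"):  (["cello", "low piano"], "drifting blue fog"),
}

def render_response(valence: float, arousal: float):
    """Pick a random instrument and a visual effect for the current affect."""
    key = ("positive" if valence >= 0 else "negative",
           "high" if arousal >= 0.5 else "low")
    instruments, visual = EFFECTS[key]
    return random.choice(instruments), visual

print(render_response(valence=0.7, arousal=0.9))  # e.g. ('trumpet', ...)
```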


2022 ◽ pp. 86-103
Author(s): Luca Bondin ◽ Alexiei Dingli

For a long time, the primary approach to controlling pain in patients has involved specially designed drugs. While these drugs have proved sufficient to reduce the perception of pain in patients of all ages, they do not come without potential side effects, and their prolonged use can adversely affect a patient's health. This research proposes a shift away from such practices and the use of technology as an adjunct tool to help patients cope with pain. Through the adoption of affective computing, this research presents Morpheus. Results obtained at the time of writing confirm that the approach is indeed useful and can achieve results in line with the aims set out as part of this study. The authors firmly believe that the approach presented in this research can lay the foundations for the research and development of similar applications in pain-reduction scenarios.


Author(s): Luma Tabbaa ◽ Ryan Searle ◽ Saber Mirzaee Bafti ◽ Md Moinul Hossain ◽ Jittrapol Intarasisrisawat ◽ ...

The paper introduces a multimodal affective dataset named VREED (VR Eyes: Emotions Dataset), in which emotions were triggered using immersive 360° video-based virtual environments (360-VEs) delivered via a virtual reality (VR) headset. Behavioural (eye tracking) and physiological signals (electrocardiogram (ECG) and galvanic skin response (GSR)) were captured, together with self-reported responses, from healthy participants (n=34) experiencing 360-VEs (n=12, 1--3 min each) selected through focus groups and a pilot trial. Statistical analysis confirmed the validity of the selected 360-VEs in eliciting the desired emotions. Preliminary machine learning analysis was carried out, demonstrating performance on par with the state of the art reported in the affective computing literature for non-immersive modalities. VREED is among the first multimodal VR datasets for emotion recognition using behavioural and physiological signals, and it is made publicly available on Kaggle. We hope that this contribution encourages other researchers to utilise VREED further to understand emotional responses in VR and ultimately to enhance the design of VR experiences in applications where emotional elicitation plays a key role, e.g. healthcare, gaming, and education.
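
A typical baseline over such a dataset fuses per-trial features from the three modalities and trains a classic classifier. The sketch below uses placeholder random arrays, since the real feature names and shapes are defined by the Kaggle release, not reproduced here; the quadrant labels and feature counts are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 34 * 12                      # participants x videos
ecg = rng.normal(size=(n_trials, 8))    # e.g. HRV statistics (placeholder)
gsr = rng.normal(size=(n_trials, 6))    # e.g. SCR counts/amplitudes (placeholder)
eye = rng.normal(size=(n_trials, 10))   # e.g. fixation/saccade statistics (placeholder)
X = np.hstack([ecg, gsr, eye])          # early (feature-level) fusion
y = rng.integers(0, 4, size=n_trials)   # assumed valence/arousal quadrant labels

# Standardize, then classify with an RBF-kernel SVM; report CV accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```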


2021 ◽ Vol 12
Author(s): Erin Smith ◽ Eric A. Storch ◽ Ipsit Vahia ◽ Stephen T. C. Wong ◽ Helen Lavretsky ◽ ...

Affective computing (also referred to as artificial emotion intelligence or emotion AI) is the study and development of systems and devices that can recognize, interpret, process, and simulate emotion or other affective phenomena. With the rapid growth in the aging population around the world, affective computing has immense potential to benefit the treatment and care of late-life mood and cognitive disorders. For late-life depression, affective computing ranging from vocal biomarkers to facial expressions to social media behavioral analysis can be used to address inadequacies of current screening and diagnostic approaches, mitigate loneliness and isolation, provide more personalized treatment approaches, and detect risk of suicide. Similarly, for Alzheimer's disease, eye movement analysis, vocal biomarkers, and driving behavior can provide objective biomarkers for early identification and monitoring, allow a more comprehensive understanding of daily life and disease fluctuations, and facilitate an understanding of behavioral and psychological symptoms such as agitation. To optimize the utility of affective computing while mitigating potential risks and ensuring responsible deployment, the ethical development of affective computing applications for late-life mood and cognitive disorders is needed.


PLoS ONE ◽ 2021 ◽ Vol 16 (12) ◽ pp. e0261167
Author(s): Hippokratis Apostolidis ◽ Thrasyvoulos Tsiatsos

There is a developing interdisciplinary research field that seeks to integrate results and expertise from various scientific areas, such as affective computing, pedagogical methodology, and psychological appraisal theories, into learning environments. Within it, anxiety recognition and regulation have attracted the interest of researchers as an important factor in the implementation of advanced learning environments. The present article explores the test anxiety and stress awareness of university students attending a science course during examinations. Real-time anxiety awareness, as provided by biofeedback during science exams in an academic environment, is shown to have a positive effect on the anxiety students experience and on their self-efficacy regarding examinations. Furthermore, the research identifies a significant relationship between the students' anxiety level and their performance. Finally, the current study indicates that the students' anxiety awareness as provided by biofeedback is related to their performance, a relationship mediated by the students' anxiety level.
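
The abstract does not specify how the biofeedback signal is computed, but a minimal sketch of the general idea, turning physiological readings into a real-time anxiety cue for the student, might look like the following. The index formula, baselines, and threshold are entirely hypothetical.

```python
def anxiety_index(heart_rate_bpm, scl_microsiemens,
                  hr_baseline=70.0, scl_baseline=4.0):
    """Toy arousal score: relative elevation above a resting baseline,
    averaged over heart rate and skin conductance level (SCL).
    The published system's actual index is not given in the abstract."""
    hr_delta = max(0.0, (heart_rate_bpm - hr_baseline) / hr_baseline)
    scl_delta = max(0.0, (scl_microsiemens - scl_baseline) / scl_baseline)
    return (hr_delta + scl_delta) / 2.0

def feedback(index, threshold=0.25):
    """Map the score to a simple on-screen prompt (threshold assumed)."""
    return "Take a slow breath" if index > threshold else "You're doing fine"

print(feedback(anxiety_index(heart_rate_bpm=95, scl_microsiemens=6.5)))
```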


2021 ◽ Vol 2
Author(s): Kenneth M. Prkachin ◽ Zakia Hammal

Pain is often characterized as a fundamentally subjective phenomenon; however, all pain assessment reduces the experience to observables, with strengths and limitations. Most evidence about pain derives from observations of pain-related behavior. There has been considerable progress in articulating the properties of behavioral indices of pain, especially, but not exclusively, those based on facial expression. An abundant literature shows that a limited subset of facial actions, with homologs in several non-human species, encode pain intensity across the lifespan. Unfortunately, acquiring such measures remains prohibitively impractical in many settings because it requires trained human observers and is laborious. The advent of the field of affective computing, which applies computer vision and machine learning (CVML) techniques to the recognition of behavior, raised the prospect that advanced technology might overcome some of the constraints limiting behavioral pain assessment in clinical and research settings. Studies have shown that it is indeed possible, through CVML, to develop systems that track facial expressions of pain. There has since been an explosion of research testing models for automated pain assessment. More recently, researchers have explored the feasibility of multimodal measurement of pain-related behaviors. Commercial products that purport to enable automatic, real-time measurement of pain expression have also appeared. Though progress has been made, this field remains in its infancy, and there is risk of overpromising on what can be delivered. Insufficient adherence to conventional principles for developing valid measures and drawing appropriate generalizations to identifiable populations could lead to scientifically dubious and clinically risky claims. There is a particular need for the development of databases containing samples from various settings in which pain may or may not occur, meticulously annotated according to standards that would permit sharing, subject to international privacy standards. Researchers and users need to be sensitive to the limitations of the technology (e.g., the potential reification of biases that are irrelevant to the assessment of pain) and its potentially problematic social implications.
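
The "limited subset of facial actions" mentioned above is commonly formalized as the Prkachin and Solomon Pain Intensity (PSPI) score, which combines FACS action-unit intensities into a single frame-level pain measure that an automated AU detector could feed. A minimal implementation:

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin-Solomon Pain Intensity from FACS action-unit intensities.
    AU4 brow lowerer, AU6/AU7 orbit tightening, AU9/AU10 levator actions
    (each on a 0-5 intensity scale); AU43 eye closure is binary (0 or 1)."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

# One frame's score from (hypothetical) detected AU intensities:
print(pspi(au4=3, au6=2, au7=4, au9=1, au10=0, au43=1))  # -> 9
```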


2021 ◽ Vol 11 (24) ◽ pp. 11738
Author(s): Thomas Teixeira ◽ Éric Granger ◽ Alessandro Lameiras Koerich

Facial expressions are one of the most powerful ways to depict specific patterns in human behavior and describe the human emotional state. However, despite the impressive advances of affective computing over the last decade, automatic video-based systems for facial expression recognition still cannot correctly handle variations in facial expression among individuals, nor cross-cultural and demographic aspects; indeed, recognizing facial expressions is a difficult task even for humans. This paper investigates the suitability of state-of-the-art deep learning architectures based on convolutional neural networks (CNNs) for continuous emotion recognition from long video sequences captured in the wild. To this end, several 2D CNN models designed to model spatial information are extended to allow spatiotemporal representation learning from videos, considering a complex and multi-dimensional emotion space in which continuous values of valence and arousal must be predicted. We developed and evaluated convolutional recurrent neural networks, which combine 2D CNNs with long short-term memory (LSTM) units, as well as inflated 3D CNN models, built by inflating the weights of a pre-trained 2D CNN model during fine-tuning on application-specific videos. Experimental results on the challenging SEWA-DB dataset show that these architectures can effectively be fine-tuned to encode spatiotemporal information from successive raw pixel images and achieve state-of-the-art results on that dataset.
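
The convolutional recurrent family described above can be sketched as follows: a 2D CNN encodes each frame, an LSTM models the frame sequence, and a linear head regresses continuous valence and arousal. The backbone choice, layer sizes, and output activation below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnLstmRegressor(nn.Module):
    """2D CNN per frame -> LSTM over time -> (valence, arousal) per clip."""
    def __init__(self, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)    # pre-trained weights assumed in practice
        backbone.fc = nn.Identity()          # expose 512-d frame embeddings
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # (valence, arousal)

    def forward(self, clips):                # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # per-frame features
        seq, _ = self.lstm(feats)            # temporal modeling
        return torch.tanh(self.head(seq[:, -1]))  # values in [-1, 1]

out = CnnLstmRegressor()(torch.randn(2, 8, 3, 112, 112))
print(out.shape)  # torch.Size([2, 2])
```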

