facial expressiveness
Recently Published Documents


TOTAL DOCUMENTS: 27 (FIVE YEARS: 4)
H-INDEX: 9 (FIVE YEARS: 0)

2021 ◽  
Author(s):  
Jayson Jeganathan ◽  
Michael Breakspear

Predictive coding has played a transformative role in the study of psychosis, casting delusions and hallucinations as statistical inference in an abnormally imprecise system. However, the negative symptoms of schizophrenia, such as affective blunting, avolition and asociality, remain poorly understood. We propose a computational framework for emotional expression that is based on active inference – namely that affective behaviours such as smiling are driven by predictions about the social consequences of smiling. Just as delusions and hallucinations can be explained by predictive uncertainty in sensory circuits, negative symptoms naturally arise from uncertainty in social prediction circuits. This perspective draws on computational principles to explain blunted facial expressiveness and apathy-anhedonia in schizophrenia. Its phenomenological consequences also shed light on the content of paranoid delusions and indistinctness of self-other boundaries. Close links are highlighted between social prediction, facial affect mirroring, and the fledgling study of interoception. Advances in automated analysis of facial expressions and acoustic speech patterns will allow empirical testing of these computational models of the negative symptoms of schizophrenia.
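The abstract's central claim, that expressive behaviour is driven by precision-weighted predictions about its social consequences, can be illustrated with a toy calculation. This is an assumption-laden sketch, not the authors' model: the function names and the simple multiplicative form are illustrative only.

```python
# Toy illustration (an assumption, not the authors' formal model): in an
# active-inference framing, the drive to emit an expression scales with the
# precision (inverse uncertainty) assigned to its predicted social outcome.
# Low precision in the social prediction yields a blunted drive, mirroring
# the abstract's account of negative symptoms such as affective blunting.

def expression_drive(predicted_social_value: float, precision: float) -> float:
    """Precision-weighted drive to produce an affective behaviour (e.g. a smile)."""
    return precision * predicted_social_value

# Same predicted social value of smiling, different confidence in the prediction:
print(expression_drive(1.0, precision=0.9))  # confident prediction -> strong drive
print(expression_drive(1.0, precision=0.1))  # uncertain prediction -> blunted drive
```

The point of the sketch is only the qualitative relationship: holding the predicted social payoff fixed, reducing precision alone is enough to suppress expressive output.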


2020 ◽  
Vol 79 (47-48) ◽  
pp. 35829-35844
Author(s):  
Giuseppe Palestra ◽  
Olimpia Pino

Attention towards robot-assisted therapies (RAT) has grown steadily in recent years, particularly for patients with dementia. However, rehabilitation practice using humanoid robots for individuals with Mild Cognitive Impairment (MCI) is still a novel method for which the adherence mechanisms, indications, and outcomes remain unclear. Affective computing offers a wide range of technological opportunities for employing emotions to improve human-computer interaction. The present study therefore addresses the effectiveness of a system that automatically decodes facial expressions from video-recorded sessions of a two-month robot-assisted memory training involving twenty-one participants. We explored the robot’s potential to engage participants in the intervention and its effects on their emotional state. Our analysis revealed that the system can recognize facial expressions from robot-assisted group therapy sessions while handling partially occluded faces. Results indicated reliable facial expressiveness recognition for the proposed software, adding new evidence on factors involved in Human-Robot Interaction (HRI). The use of a humanoid robot as a mediating tool appeared to promote the engagement of participants in the training program. Our findings showed positive emotional responses for females, and the tasks differentially affected participants’ affective involvement. Further studies should investigate the training components and robot responsiveness.


2020 ◽  
Vol 51 (5) ◽  
pp. 685-711
Author(s):  
Alexandra Sierra Rativa ◽  
Marie Postma ◽  
Menno Van Zaanen

Background. Empathic interactions with animated game characters can help improve user experience, increase immersion, and achieve better affective outcomes related to the use of the game. Method. We used a 2x2 between-participant design and a control condition to analyze the impact of the visual appearance of a virtual game character on empathy and immersion. The four experimental conditions of game character appearance were: natural (virtual animal) with expressiveness (emotional facial expressions), natural (virtual animal) without expressiveness (no emotional facial expressions), artificial (virtual robotic animal) with expressiveness (emotional facial expressions), and artificial (virtual robotic animal) without expressiveness (no emotional facial expressions). The control condition contained a baseline amorphous game character. One hundred participants aged 18 to 29 years (M = 22.47) were randomly assigned to one of the five experimental groups. Participants originated from several countries: Aruba (1), China (1), Colombia (3), Finland (1), France (1), Germany (1), Greece (2), Iceland (1), India (1), Iran (1), Ireland (1), Italy (3), Jamaica (1), Latvia (1), Morocco (3), Netherlands (70), Poland (1), Romania (2), Spain (1), Thailand (1), Turkey (1), United States (1), and Vietnam (1). Results. We found that congruence between appearance and facial expressions of virtual animals (artificial + non-expressive and natural + expressive) led to higher levels of self-reported situational empathy and immersion of players in a simulated environment compared to incongruent appearance and facial expressions. Conclusions. The results of this investigation showed an interaction effect between artificial/natural body appearance and facial expressiveness of a virtual character’s appearance. The evidence from this study suggests that the appearance of the virtual animal has an important influence on user experience.
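The design described above, four factorial cells plus an amorphous-character control, with 100 participants randomly assigned across five groups, can be sketched as follows. This is an illustrative sketch of balanced random assignment under stated assumptions (equal group sizes, a seeded shuffle), not the authors' actual procedure.

```python
# Illustrative sketch (not the study's actual procedure): balanced random
# assignment of 100 participants to the five conditions described in the
# abstract -- the 2x2 cells (appearance x expressiveness) plus a control.
import random

CONDITIONS = [
    ("natural", "expressive"),
    ("natural", "non-expressive"),
    ("artificial", "expressive"),
    ("artificial", "non-expressive"),
    ("control", "amorphous"),        # baseline amorphous game character
]

def assign_groups(participants, seed=0):
    """Shuffle participants and split them evenly across the five conditions."""
    rng = random.Random(seed)        # seeded for reproducibility of the sketch
    shuffled = list(participants)
    rng.shuffle(shuffled)
    group_size = len(shuffled) // len(CONDITIONS)
    return {
        cond: shuffled[i * group_size:(i + 1) * group_size]
        for i, cond in enumerate(CONDITIONS)
    }

groups = assign_groups(range(100))
print({cond: len(members) for cond, members in groups.items()})
```

With 100 participants this yields 20 per group; the seed only makes the example reproducible and carries no meaning from the study.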


2018 ◽  
Vol 55 (5) ◽  
pp. 711-720 ◽  
Author(s):  
Zakia Hammal ◽  
Jeffrey F. Cohn ◽  
Erin R. Wallace ◽  
Carrie L. Heike ◽  
Craig B. Birgfeld ◽  
...  

Objective: To compare facial expressiveness (FE) of infants with and without craniofacial microsomia (cases and controls, respectively) and to compare phenotypic variation among cases in relation to FE. Design: Positive and negative affect was elicited in response to standardized emotion inductions, video recorded, and manually coded from video using the Facial Action Coding System for Infants and Young Children. Setting: Five craniofacial centers: Children’s Hospital of Los Angeles, Children’s Hospital of Philadelphia, Seattle Children’s Hospital, University of Illinois–Chicago, and University of North Carolina–Chapel Hill. Participants: Eighty ethnically diverse 12- to 14-month-old infants. Main Outcome Measures: FE was measured on a frame-by-frame basis as the sum of 9 observed facial action units (AUs) representative of positive and negative affect. Results: FE differed between conditions intended to elicit positive and negative affect (95% confidence interval = 0.09-0.66, P = .01). FE failed to differ between cases and controls (ES = –0.16 to –0.02, P = .47 to .92). Among cases, those with and without mandibular hypoplasia showed similar levels of FE (ES = –0.38 to 0.54, P = .10 to .66). Conclusions: FE varied between positive and negative affect, and cases and controls responded similarly. Null findings for case/control differences may be attributable to a lower than anticipated prevalence of nerve palsy among cases, the selection of AUs, or the use of manual coding. In future research, we will reexamine group differences using an automated, computer vision approach that can cover a broader range of facial movements and their dynamics.
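The outcome measure above, FE computed frame by frame as the sum of 9 observed action units, can be sketched in a few lines. The AU codes and the per-frame dictionary format are illustrative assumptions; the study used manual FACS coding, not this data structure.

```python
# Hedged sketch of the abstract's outcome measure: facial expressiveness (FE)
# per frame as the sum of observed facial action units (AUs). The AU codes and
# the per-frame dict format are illustrative assumptions, not the study's
# actual coding output (which used manual FACS coding).
from typing import Dict, List

def facial_expressiveness(frames: List[Dict[str, int]]) -> List[int]:
    """Return the FE score for each frame: the count of AUs coded present (1)."""
    return [sum(active_aus.values()) for active_aus in frames]

# Hypothetical coded frames (1 = AU present, 0 = absent):
frames = [
    {"AU6": 1, "AU12": 1, "AU4": 0},   # smile-related AUs active
    {"AU6": 0, "AU12": 0, "AU4": 1},   # brow lowerer active
    {"AU6": 0, "AU12": 0, "AU4": 0},   # neutral frame
]
print(facial_expressiveness(frames))  # [2, 1, 0]
```

Per-condition comparisons like those reported (positive vs. negative affect inductions, cases vs. controls) would then operate on these per-frame sums.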


2016 ◽  
Vol 8 (4) ◽  
pp. 513-521 ◽  
Author(s):  
Albert De Beir ◽  
Hoang-Long Cao ◽  
Pablo Gómez Esteban ◽  
Greet Van de Perre ◽  
Dirk Lefeber ◽  
...  

2015 ◽  
Vol 358 (1-2) ◽  
pp. 125-130 ◽  
Author(s):  
Lucia Ricciardi ◽  
Matteo Bologna ◽  
Francesca Morgante ◽  
Diego Ricciardi ◽  
Bruno Morabito ◽  
...  
