The effect of perceptual adaptation to dynamic facial expressions

2017 ◽  
Vol 10 (1) ◽  
pp. 67-88
Author(s):  
O.A. Korolkova

We present three experiments investigating perceptual adaptation to dynamic facial emotional expressions. Dynamic expressions of six basic emotions were obtained by video recording of a poser’s face. In Experiment 1, participants (n=20) evaluated the intensity of the six emotions, the neutral state, and the genuineness and naturalness of the dynamic expressions. The validated stimuli were then used as adaptors in Experiments 2 and 3, which explored the structure of the perceptual space of facial expressions through adaptation effects. In Experiment 2, participants (n=16) categorized neutral/emotion morphs after adaptation to dynamic expressions. In Experiment 3 (n=26), the first-stage task was to categorize static frames derived from the video records of the poser. Next, individual psychometric functions were fitted for each participant and each emotion to find the frame at which the emotion was recognized correctly in 50% of trials. These images were then presented at the second stage, an adaptation experiment with the dynamic video records as adaptors. Across the three experiments, we found that facial expressions of happiness and sadness are perceived as opponent emotions and mutually facilitate each other’s recognition, whereas disgust and anger, and fear and surprise, are perceptually similar and reduce each other’s recognition accuracy. We describe the categorical fields of dynamic facial expressions and of static images of the initial phases of expression development. The obtained results suggest that dimensional and categorical approaches to the perception of emotions are not mutually exclusive and probably describe different stages of face information processing. The study was supported by the Russian Foundation for Basic Research, project № 15-36-01281 “Structure of dynamic facial expressions perception”.
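The threshold-finding step described above can be sketched in code: fit a logistic psychometric function to per-frame recognition accuracy and solve for the frame recognized correctly on 50% of trials. This is a minimal illustration only; the frame numbers and accuracy values are invented, and the fit is done by simple linear regression on the log-odds rather than whatever fitting procedure the study actually used.

```python
import numpy as np

# Illustrative sketch: locate the 50%-recognition frame by fitting a
# logistic psychometric function. Data below are hypothetical.
frames = np.array([1, 3, 5, 7, 9, 11, 13, 15], dtype=float)
accuracy = np.array([0.05, 0.10, 0.20, 0.45, 0.60, 0.80, 0.90, 0.95])

# Logit transform turns the logistic into a straight line, so an
# ordinary least-squares fit recovers slope and intercept.
logits = np.log(accuracy / (1.0 - accuracy))
slope, intercept = np.polyfit(frames, logits, 1)

# 50% accuracy corresponds to logit = 0, so the threshold frame solves
# slope * x + intercept = 0.
threshold = -intercept / slope
print(round(threshold))
```

A per-participant, per-emotion loop over such fits would yield the individual threshold frames used as test stimuli in the second stage.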

2021 ◽  
Vol 5 (3) ◽  
pp. 13
Author(s):  
Heting Wang ◽  
Vidya Gaddy ◽  
James Ross Beveridge ◽  
Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on seven basic emotions, we introduced Diana, an avatar that expresses a higher level of emotional intelligence. To adapt to the user’s various affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. Questionnaire results did not differ significantly across modes. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana’s face as natural; four mentioned uncomfortable feelings caused by the uncanny valley effect.


2015 ◽  
Vol 21 (9) ◽  
pp. 709-721 ◽  
Author(s):  
Lucy J. Robinson ◽  
John M. Gray ◽  
Mike Burt ◽  
I. Nicol Ferrier ◽  
Peter Gallagher

Previous studies of facial emotion processing in bipolar disorder (BD) have reported conflicting findings. In independently conducted studies, we investigate facial emotion labeling in euthymic and depressed BD patients using tasks with static and dynamically morphed images of different emotions displayed at different intensities. Study 1 included 38 euthymic BD patients and 28 controls. Participants completed two tasks: labeling of static images of basic facial emotions (anger, disgust, fear, happy, sad) shown at different expression intensities; and the Eyes Test (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), which involves recognition of complex emotions using only the eye region of the face. Study 2 included 53 depressed BD patients and 47 controls. Participants completed two tasks: labeling of “dynamic” facial expressions of the same five basic emotions; and the Emotional Hexagon test (Young, Perrett, Calder, Sprengelmeyer, & Ekman, 2002). There were no significant group differences on any measures of emotion perception/labeling compared to controls. A significant group by intensity interaction was observed in both emotion labeling tasks (euthymia and depression), although this effect did not survive the addition of measures of executive function/psychomotor speed as covariates. Only 2.6–15.8% of euthymic patients and 7.8–13.7% of depressed patients scored below the 10th percentile of the controls for total emotion recognition accuracy. There was no evidence of specific deficits in facial emotion labeling in euthymic or depressed BD patients. Methodological variations—including mood state, sample size, and the cognitive demands of the tasks—may contribute significantly to the variability in findings between studies. (JINS, 2015, 21, 709–721)


2000 ◽  
Vol 29 (544) ◽  
Author(s):  
Dolores Canamero ◽  
Jakob Fredslund

We report work on a LEGO robot capable of displaying several emotional expressions in response to physical contact. Our motivation has been to explore believable emotional exchanges to achieve plausible interaction with a simple robot. We have worked toward this goal in two ways. First, acknowledging the importance of physical manipulation in children’s interactions, interaction with the robot is through tactile stimulation; the various kinds of stimulation that can elicit the robot’s emotions are grounded in a model of emotion activation based on different stimulation patterns. Second, emotional states need to be clearly conveyed. We have drawn inspiration from theories of human basic emotions with associated universal facial expressions, which we have implemented in a caricaturized face. We have conducted experiments with both children and adults to assess the recognizability of these expressions.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 28-28
Author(s):  
A J Calder ◽  
A W Young ◽  
D Rowland ◽  
D R Gibbenson ◽  
B M Hayes ◽  
...  

G Rhodes, S E Brennan, and S Carey (1987, Cognitive Psychology 19, 473–497) and P J Benson and D I Perrett (1991, European Journal of Cognitive Psychology 3, 105–135) have shown that computer-enhanced (caricatured) representations of familiar faces are named faster and rated as better likenesses than veridical (undistorted) representations. Here we have applied Benson and Perrett’s graphic technique to examine subjects’ perception of enhanced representations of photographic-quality facial expressions of basic emotions. To enhance a facial expression the target face is compared to a norm or prototype face, and, by exaggerating the differences between the two, a caricatured image is produced; reducing the differences results in an anticaricatured image. In experiment 1 we examined the effect of degree of caricature and type of norm on subjects’ ratings for ‘intensity of expression’. Three facial expressions (fear, anger, and sadness) were caricatured at seven levels (−50%, −30%, −15%, 0%, +15%, +30%, and +50%) relative to three different norms: (1) an average norm prepared by blending pictures of six different emotional expressions; (2) a neutral expression norm; and (3) a different expression norm (e.g. anger caricatured relative to a happy expression). Irrespective of norm, the caricatured expressions were rated as significantly more intense than the veridical images. Furthermore, for the average and neutral norm sets, the anticaricatures were rated as significantly less intense. We also examined subjects’ reaction times to recognise caricatured (−50%, 0%, and +50%) representations of six emotional facial expressions. The results showed that the caricatured images were identified fastest, followed by the veridical, and then the anticaricatured images. Hence the perception of facial expression and identity is facilitated by caricaturing; this has important implications for the mental representation of facial expressions.
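The caricaturing principle described above—displacing each feature of the target face away from (or toward) the corresponding feature of a norm face—can be sketched with landmark coordinates. This is an illustration of the geometric idea only, not the Benson and Perrett implementation; the landmark coordinates are invented.

```python
import numpy as np

def caricature(target, norm, level):
    """Scale the target-norm landmark differences by (1 + level).
    level = +0.5 gives a +50% caricature; level = -0.5 gives a -50%
    anticaricature; level = 0.0 returns the veridical face."""
    return norm + (1.0 + level) * (target - norm)

# Hypothetical 2-D landmark sets (three points each) for illustration.
norm = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])     # norm face
target = np.array([[0.0, 1.0], [10.0, 1.0], [5.0, 10.0]])  # expressive face

plus50 = caricature(target, norm, +0.5)    # exaggerated differences
minus50 = caricature(target, norm, -0.5)   # reduced differences
veridical = caricature(target, norm, 0.0)  # unchanged

print(plus50[2], minus50[2], veridical[2])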


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2020 ◽  
Vol 36 (2) ◽  
pp. 3-11
Author(s):  
O.A. Zhuravliova ◽  
Т.А. Voeikova ◽  
A.Yu. Gulevich ◽  
V.G. Debabov

The plasmidless and markerless Escherichia coli succinate-producing strain SGM2.0Pyc-int has been engineered and characterized. The main mixed-acid fermentation pathways of the strain are inactivated by deletions of the ldhA, poxB, ackA, pta, and adhE genes; the strain constitutively expresses the genes of the aceEF-lpdA operon encoding components of the pyruvate dehydrogenase complex, and carries a chromosomally integrated Bacillus subtilis pycA gene coding for pyruvate carboxylase. The capacity of the strain to synthesize succinic acid in the course of dual-phase aerobic-anaerobic fermentation with lignocellulosic sugars as substrates was studied. During the anaerobic production stage, the SGM2.0Pyc-int strain synthesized succinic acid from glucose, xylose, and arabinose with molar yields of 1.41, 1.18, and 1.18 mol/mol, respectively. The constructed strain has great potential for developing efficient processes for succinic acid production from plant biomass-derived sugars.
Keywords: Escherichia coli, fermentation, arabinose, glucose, xylose, succinic acid.
The work was supported by a grant from the Russian Foundation for Basic Research (Project no. 18-29-14005).
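The molar-yield metric reported above (moles of succinate formed per mole of sugar consumed) is a simple conversion from measured concentrations via molar masses. A small sketch, with hypothetical gram amounts chosen only for illustration:

```python
# Illustrative molar-yield calculation. Molar masses are standard
# values (g/mol); the g/L amounts below are hypothetical, not the
# study's measurements.
M_GLUCOSE = 180.16    # g/mol
M_SUCCINATE = 118.09  # g/mol

def molar_yield(product_g, substrate_g, m_product, m_substrate):
    """Moles of product formed per mole of substrate consumed."""
    return (product_g / m_product) / (substrate_g / m_substrate)

# e.g. 16.6 g/L succinate produced from 18.0 g/L glucose consumed:
y = molar_yield(16.6, 18.0, M_SUCCINATE, M_GLUCOSE)
print(round(y, 2))
```

Because succinate is lighter than glucose, a molar yield above 1 mol/mol is possible even when the mass yield is below 1 g/g.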


2020 ◽  
Vol 36 (1) ◽  
pp. 36-43
Author(s):  
I.O. Konovalova ◽  
T.N. Kudelina ◽  
S.O. Smolyanina ◽  
A.I. Lilienberg ◽  
T.N. Bibikova

A new technique for Arabidopsis thaliana cultivation has been proposed that combines a phytogel-based nutrient medium with a hydrophilic membrane of hydrate cellulose film separating the plant’s root system from the bulk of the medium. Growth rates of both main and lateral roots were faster in plants cultivated on the surface of the hydrate cellulose film than in plants grown within the phytogel volume. The location of the root system on the surface of the transparent film simplifies its observation and analysis and facilitates plant transplantation with preservation of the root-system configuration. The proposed technique allowed us to assess, for the first time, the effect of exogenous auxin on the growth of lateral roots at developmental stages 5–6.
Keywords: methods to study plant root systems, hydrate cellulose film, A. thaliana, lateral roots, differential root growth rate, auxin.
The work was financially supported by the Russian Foundation for Basic Research (Project Bel_mol_a 19-54-04015) and the basic topic of the Russian Academy of Sciences - IBMP RAS «Regularities of the Influence of Extreme Environmental Factors on the Processes of Cultivation of Higher Plants and the Development of Japanese Quail Tissues at Different Stages of its Ontogenesis under the Conditions of Regenerative Life Support Systems».


2021 ◽  
Vol 151 ◽  
pp. 107734
Author(s):  
Katia M. Harlé ◽  
Alan N. Simmons ◽  
Jessica Bomyea ◽  
Andrea D. Spadoni ◽  
Charles T. Taylor
