Relationship Between Facial Areas With the Greatest Increase in Non-local Contrast and Gaze Fixations in Recognizing Emotional Expressions

Author(s):  
Vitaliy Babenko ◽  
Denis Yavna ◽  
Elena Vorobeva ◽  
Ekaterina Denisova ◽  
Pavel Ermakov ◽  
...  

The aim of our study was to analyze gaze fixations in the recognition of facial emotional expressions in comparison with the spatial distribution of the areas with the greatest increase in total (non-local) luminance contrast. We hypothesized that the most informative areas of the image, which attract more of the observer's attention, are those with the greatest increase in non-local contrast. The study involved 100 university students aged 19–21 with normal vision. 490 full-face photographs were used as stimuli. The images displayed faces expressing the 6 basic emotions (Ekman's Big Six) as well as neutral (emotionless) expressions. Observers' eye movements were recorded while they recognized the expressions of the faces shown. Then, using custom-developed software, the areas with the highest (max), lowest (min), and intermediate (med) increases in total contrast relative to their surroundings were identified in the stimulus images at different spatial frequencies. A comparative analysis of the gaze maps against the maps of areas with min, med, and max increases in total contrast showed that gaze fixations in facial emotion classification tasks coincide significantly with the areas characterized by the greatest increase in non-local contrast. The results indicate that facial image areas with the greatest increase in total contrast, which are preattentively detected by second-order visual mechanisms, can be the prime targets of attention.
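
A minimal sketch of the kind of computation involved, assuming a standard filter-rectify-filter model of second-order vision (the authors' own software is not described here, and the band frequency, pooling widths, and percentile cut-off below are illustrative assumptions):

```python
# Sketch: localize regions whose pooled contrast energy rises above the
# surround, at one spatial frequency band (filter-rectify-filter scheme).
import numpy as np
from scipy import ndimage

def nonlocal_contrast_increase(image, cycles_per_image=16, pooling_sigma=12.0):
    """Map of pooled contrast energy relative to a wider surround."""
    img = image.astype(float)
    img -= img.mean()
    # First stage: approximate band-pass filtering with a difference of
    # Gaussians whose peak frequency is roughly `cycles_per_image`.
    sigma = image.shape[0] / (2 * np.pi * cycles_per_image)
    band = ndimage.gaussian_filter(img, sigma) - ndimage.gaussian_filter(img, 1.6 * sigma)
    # Rectify, then pool locally (second stage) and over a wider surround.
    energy = band ** 2
    local = ndimage.gaussian_filter(energy, pooling_sigma)
    surround = ndimage.gaussian_filter(energy, 4 * pooling_sigma)
    return local - surround  # positive where contrast exceeds the surround

# The max / med / min areas could then be taken as percentile bands, e.g.:
# cmap = nonlocal_contrast_increase(face_image)
# mask_max = cmap >= np.percentile(cmap, 95)
```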

Author(s):  
Eleonora Cannoni ◽  
Giuliana Pinto ◽  
Anna Silvia Bombi

Abstract: This study aimed to verify whether children introduce emotional expressions in their drawings of human faces, whether a preferred expression exists, and whether children's pictorial choices change with increasing age. To this end we examined the human figure drawings made by 160 boys and 160 girls, equally divided into 4 age groups: 6–7, 8–9, 10–11, and 12–13 years; mean ages in months (SD in parentheses) were 83.30 (6.54), 106.14 (7.16), 130.49 (8.26), and 155.40 (6.66). Drawings were collected with the Draw-a-Man test instructions, i.e. without mentioning an emotional characterization. In the light of data from previous studies of emotion drawing on request, and of the literature on preferred emotional expressions, we expected that an emotion would be portrayed even by the younger participants, and that the preferred emotion would be happiness. We also expected that, with the improving ability to take into account the appearance of both mouth and eyes, other expressions would be found besides the smiling face. Data were submitted to non-parametric tests to compare the frequencies of expressions (overall and by age) and the frequencies of visual cues (overall and by age and expression). The results confirmed that only a small number of faces were expressionless and that the most frequent emotion was happiness. However, with increasing age this representation gave way to a variety of basic emotions (sadness, fear, anger, surprise), whose representation may depend on the ability to modify the shapes of both eyes and mouth and on the changing communicative aims of the child.


Author(s):  
Jorge Bacca-Acosta ◽  
Julian Tejada ◽  
Carlos Ospino-Ibañez

Learning how to give and follow directions in English is one of the key topics in regular English as a Foreign Language (EFL) courses. However, this topic is commonly taught in the classroom with pencil-and-paper exercises. In this chapter, a scaffolded virtual reality (VR) environment for learning to follow directions in English is introduced. An eye-tracking study was conducted to determine how students perceive the scaffolds while completing the learning task, and an evaluation of acceptance and usability was conducted to capture the students' perceptions of the environment. The results show that scaffolds in the form of text and images are both effective for increasing students' learning performance. Gaze frequency was higher for the textual scaffolds, whereas the duration of gaze fixations was lower for the scaffolds in the form of images. The acceptance and usability of the VR environment were found to be positive.


2009 ◽  
Vol 26 (4) ◽  
pp. 411-420 ◽  
Author(s):  
MICHAEL L. RISNER ◽  
TIMOTHY J. GAWNE

Abstract: Neurons in visual cortical area V1 typically respond well to lines or edges of specific orientations. There have been many studies investigating how the responses of these neurons to an oriented edge are affected by changes in luminance contrast. However, in natural images, edges vary not only in contrast but also in the degree of blur, both because of changes in focus and also because shadows are not sharp. The effect of blur on the response dynamics of visual cortical neurons has not been explored. We presented luminance-defined single edges in the receptive fields of parafoveal (1–6 deg eccentric) V1 neurons of two macaque monkeys trained to fixate a spot of light. We varied the width of the blurred region of the edge stimuli up to 0.36 deg of visual angle. Even though the neurons responded robustly to stimuli that only contained high spatial frequencies and 0.36 deg is much larger than the limits of acuity at this eccentricity, changing the degree of blur had minimal effect on the responses of these neurons to the edge. Primates need to measure blur at the fovea to evaluate image quality and control accommodation, but this might only involve a specialist subpopulation of neurons. If visual cortical neurons in general responded differently to sharp and blurred stimuli, then this could provide a cue for form perception, for example, by helping to disambiguate the luminance edges created by real objects from those created by shadows. On the other hand, it might be important to avoid the distraction of changing blur as objects move in and out of the plane of fixation. Our results support the latter hypothesis: the responses of parafoveal V1 neurons are largely unaffected by changes in blur over a wide range.
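
For illustration, a hedged sketch of a single luminance edge whose blur width can be varied, as in the stimuli described above (the linear-ramp parameterization and pixel scale are assumptions, not the authors' exact stimulus code):

```python
import numpy as np

def blurred_edge(size=256, contrast=0.5, blur_px=0, mean_lum=0.5):
    """Vertical luminance edge; `blur_px` is the width of the ramp in pixels."""
    x = np.arange(size) - size / 2
    if blur_px == 0:
        profile = np.sign(x)                          # sharp step edge
    else:
        profile = np.clip(x / (blur_px / 2), -1, 1)   # linear ramp over blur_px
    return mean_lum * (1 + contrast * profile)[None, :].repeat(size, axis=0)

# At an assumed 30 px/deg, the study's maximum blur of 0.36 deg would span
# roughly 11 pixels: blurred_edge(blur_px=11).
```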


2019 ◽  
Author(s):  
Eva Krumhuber ◽  
Dennis Küster ◽  
Shushi Namba ◽  
Datin Shah ◽  
Manuel Calvo

The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Herein, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between humans and machine in a cross-corpora investigation. For this, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of different databases, and then presented to human observers and a machine classifier. Recognition performance by the machine was found to be superior for posed expressions containing prototypical facial patterns, and comparable to humans when classifying emotions from spontaneous displays. In both humans and machine, accuracy rates were generally higher for posed compared to spontaneous stimuli. The findings suggest that automated systems rely on expression prototypicality for emotion classification, and may perform just as well as humans when tested in a cross-corpora context.
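
A hedged sketch of the kind of cross-corpora comparison described above: recognition accuracy is aggregated per judge (human vs. machine) and per elicitation condition (posed vs. spontaneous) across source databases. The trial records below are hypothetical placeholders:

```python
import pandas as pd

# Hypothetical trial-level results: one row per stimulus judgment.
trials = pd.DataFrame({
    "database":  ["A", "A", "B", "B", "A", "B"],
    "condition": ["posed", "posed", "spontaneous",
                  "spontaneous", "posed", "spontaneous"],
    "judge":     ["human", "machine", "human", "machine", "human", "machine"],
    "correct":   [1, 1, 1, 0, 0, 1],
})

# Mean recognition accuracy per judge and condition, pooled over databases.
accuracy = (trials.groupby(["judge", "condition"])["correct"]
                  .mean()
                  .unstack("condition"))
print(accuracy)
```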


2021 ◽  
Vol 12 ◽  
Author(s):  
Shu Zhang ◽  
Xinge Liu ◽  
Xuan Yang ◽  
Yezhi Shu ◽  
Niqi Liu ◽  
...  

Cartoon faces are widely used in social media, animation production, and social robots because of their appeal and their ability to convey varied emotional information. Despite their widespread use, the mechanisms of recognizing emotional expressions in cartoon faces are still unclear. Therefore, three experiments were conducted in this study to systematically explore the recognition process for emotional cartoon expressions (happy, sad, and neutral) and to examine the influence of key facial features (mouth, eyes, and eyebrows) on emotion recognition. Across the experiments, three presentation conditions were employed: (1) the full face; (2) one individual feature only (with the two other features concealed); and (3) one feature concealed with the two other features presented. The cartoon face images used in this study were converted from a set of real faces acted by Chinese posers, and the observers were Chinese. The results show that happy cartoon expressions were recognized more accurately than neutral and sad expressions, consistent with the happiness recognition advantage revealed in studies of real faces. Compared with real facial expressions, sad cartoon expressions were perceived as sadder, and happy cartoon expressions as less happy, regardless of whether the full face or single facial features were viewed. For cartoon faces, the mouth was demonstrated to be sufficient and necessary for the recognition of happiness, and the eyebrows sufficient and necessary for the recognition of sadness. This study helps to clarify the perceptual mechanism underlying emotion recognition in cartoon faces and sheds some light on directions for future research on intelligent human-computer interaction.
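
As a rough sketch of the three presentation conditions, one could mask rectangular feature regions of a face image; the bounding boxes below are hypothetical, and real stimuli would use landmark-based regions:

```python
import numpy as np

# Hypothetical feature regions as (row_start, row_end, col_start, col_end)
# for a 256x256 face image.
FEATURES = {"eyebrows": (60, 90, 40, 216),
            "eyes":     (90, 130, 40, 216),
            "mouth":    (180, 230, 80, 176)}

def feature_only(face, keep, fill=128):
    """Condition 2: show `keep`, conceal the other two features."""
    out = face.copy()
    for name, (r0, r1, c0, c1) in FEATURES.items():
        if name != keep:
            out[r0:r1, c0:c1] = fill
    return out

def feature_concealed(face, hide, fill=128):
    """Condition 3: conceal `hide`, leave the rest of the face visible."""
    out = face.copy()
    r0, r1, c0, c1 = FEATURES[hide]
    out[r0:r1, c0:c1] = fill
    return out

face = np.full((256, 256), 200, dtype=np.uint8)  # placeholder image
mouth_only = feature_only(face, "mouth")
no_eyebrows = feature_concealed(face, "eyebrows")
```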


2020 ◽  
Author(s):  
Abigail Webb ◽  
Paul Hibbard

Fearful facial expressions tend to be more salient than other expressions. This threat bias is to some extent driven by simple low-level image properties, rather than the high-level emotion interpretation of stimuli. It might be expected therefore that different expressions will, on average, have different physical contrasts. However, studies tend to normalise stimuli for contrast, potentially removing a naturally-occurring difference in salience. We assessed whether images of faces differ in both physical and apparent contrast across expressions. We measured the physical contrast and the Fourier amplitude spectra of 5 emotional expressions prior to contrast normalisation. We also measured expression-related differences in perceived contrast. Fear expressions have a steeper Fourier amplitude slope compared to neutral and angry expressions, and consistently significantly lower contrast compared to other faces. This effect is more pronounced at higher spatial frequencies. With the exception of stimuli containing only low spatial frequencies, fear expressions appeared higher in contrast than a physically matched reference. These findings suggest that contrast normalisation artificially boosts the perceived salience of fear expressions; an effect that may account for perceptual biases observed for spatially filtered fear expressions.
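
A minimal sketch of the two image statistics at issue, RMS contrast and the slope of the rotationally averaged Fourier amplitude spectrum fitted in log-log coordinates (the binning scheme below is an assumption, not the authors' exact analysis):

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast: standard deviation of luminance over its mean."""
    img = image.astype(float)
    return img.std() / img.mean()

def amplitude_slope(image, n_bins=30):
    """Slope of the rotational average of the amplitude spectrum (log-log)."""
    img = image.astype(float)
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    fy, fx = np.indices((h, w))
    radius = np.hypot(fy - h // 2, fx - w // 2)
    # Average amplitude within log-spaced frequency annuli.
    edges = np.logspace(0, np.log10(radius.max()), n_bins)
    idx = np.digitize(radius.ravel(), edges)
    mean_amp = np.array([amp.ravel()[idx == i].mean() if np.any(idx == i)
                         else np.nan for i in range(1, n_bins)])
    freqs = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centres
    ok = np.isfinite(mean_amp) & (mean_amp > 0)
    slope, _ = np.polyfit(np.log(freqs[ok]), np.log(mean_amp[ok]), 1)
    return slope  # more negative = steeper, as reported for fear faces
```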


2021 ◽  
Vol 12 ◽  
Author(s):  
Alberto Domínguez-Vicent ◽  
Emma Helghe ◽  
Marika Wahlberg Ramsay ◽  
Abinaya Priya Venkataraman

Purpose: The aim of this study was to evaluate the effect of four different filters on contrast sensitivity under photopic and mesopic conditions, with and without glare.
Methods: A forced-choice algorithm in a Bayesian psychophysical procedure was used to evaluate spatial luminance contrast sensitivity. Five spatial frequencies were evaluated: 1.5, 3, 6, 12, and 18 cycles per degree (cpd). The measurements were performed under 4 settings: photopic and mesopic luminance, each with and without glare. Two long-pass filters (an LED light reduction filter and a 511 nm filter), two selective absorption filters (an ML41 filter and an emerald filter), and a no-filter condition were evaluated. The measurements were performed in 9 young subjects with healthy eyes.
Results: For the no-filter condition, there was no difference between the glare and no-glare settings in the photopic contrast sensitivity measurements, whereas in the mesopic setting, glare reduced contrast sensitivity significantly at all spatial frequencies. There was no statistically significant difference between contrast sensitivity measurements obtained with the different filters under either photopic condition or under the mesopic glare condition. In the mesopic no-glare condition, contrast sensitivity at 6 cpd with the 511, ML41, and emerald filters was significantly reduced compared to the no-filter condition (p = 0.045, 0.045, and 0.071, respectively). Similarly, with these filters the area under the log contrast sensitivity function (AULCSF) in the mesopic no-glare condition was also reduced. A significant positive correlation was seen between filter light transmission and the average AULCSF in the mesopic no-glare condition.
Conclusion: Contrast sensitivity measured with the filters was not significantly different from the no-filter condition in the photopic glare and no-glare settings, as well as in the mesopic glare setting. In the mesopic setting with no glare, the filters reduced contrast sensitivity.
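
For reference, the AULCSF reported above is conventionally the trapezoidal area under log10 sensitivity plotted against log10 spatial frequency; a short sketch with made-up sensitivity values:

```python
import numpy as np

freqs = np.array([1.5, 3, 6, 12, 18])        # cpd, as in the study
sensitivity = np.array([50, 80, 60, 20, 8])  # hypothetical 1/threshold values

x, y = np.log10(freqs), np.log10(sensitivity)
aulcsf = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))  # trapezoid rule
print(f"AULCSF = {aulcsf:.2f}")
```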


2018 ◽  
Vol 145 ◽  
pp. 293-299
Author(s):  
Bogdan L. Kozyrskiy ◽  
Anastasia O. Ovchinnikova ◽  
Alena D. Moskalenko ◽  
Boris M. Velichkovsky ◽  
Sergei L. Shishkin

2013 ◽  
Vol 31 (1) ◽  
pp. 99-103 ◽  
Author(s):  
MÁRTA JANÁKY ◽  
JUDIT BORBÉLY ◽  
GYÖRGY BENEDEK ◽  
BALÁZS PÉTER KOCSIS ◽  
GÁBOR BRAUNITZER

Abstract: It is a matter of debate whether X-linked dichromacy is accompanied by enhanced achromatic processing. In the present study, we used sinusoidally modulated achromatic gratings under photopic conditions to compare the contrast sensitivity (CS) of protanopes, deuteranopes, and normal trichromats. 36 male volunteers were examined. CS was tested in static and dynamic conditions at nine different spatial frequencies. The results support the assumption that X-linked color-defective observers are at an advantage in terms of achromatic processing. Both protanopes and deuteranopes had significantly better CS than controls in both the static and the dynamic conditions. In the static condition, the advantage was observed especially at higher spatial frequencies, whereas in the dynamic condition it was also seen at lower frequencies. The results are interpreted in terms of decreased chromatic modulation of the luminance channel and the early plasticity of the parvocellular system.
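
For illustration, a minimal sketch of a sinusoidally modulated achromatic grating of the kind used in such contrast sensitivity tests (the pixel scale and drift scheme are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def grating(size=512, px_per_deg=40, sf_cpd=3.0, contrast=0.2,
            mean_lum=0.5, phase=0.0):
    """Sinusoidal luminance grating (vertical bars), Michelson contrast `contrast`."""
    x = np.arange(size) / px_per_deg                  # degrees of visual angle
    wave = np.sin(2 * np.pi * sf_cpd * x + phase)
    return mean_lum * (1 + contrast * wave)[None, :].repeat(size, axis=0)

# A dynamic condition could advance `phase` each video frame, e.g. by
# 2*np.pi*temporal_freq/refresh_rate, to drift or counterphase the grating.
```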


2017 ◽  
Vol 10 (4) ◽  
pp. 116-132 ◽  
Author(s):  
Y.G. Khoze ◽  
O.A. Korolkova ◽  
N.Yu. Zhizhnikova ◽  
M.V. Zubareva

The article presents the results of a cross-cultural study of the perception of basic emotional expressions by representatives of Asian and European cultural groups and compares them with results obtained earlier on a Russian sample. Emotional expressions from the VEPEL database (Kurakova, 2012) were used as stimuli. We found invariant perception within each cultural group, together with cross-cultural differences: in the perception of the basic emotions of fear, disgust, and anger between the Asian and Russian groups; in the perception of surprise, fear, disgust, and anger between the European and Russian groups; and in the perception of fear, sadness, disgust, and anger between the European and Asian groups.

