Looking for Infrequent Faces: Visual Patterns and Gender Effect in Detecting Crying and Smiling Expressions

SAGE Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 215824402092335
Author(s):  
Rong Shi

Previous research has focused on documenting the perceptual mechanisms of facial expressions of so-called basic emotions; however, little is known about eye movements during the recognition of crying expressions. The present study aimed to clarify the visual pattern and the role of face gender in recognizing smiling and crying expressions. Behavioral responses and fixation durations were recorded, and the proportions of fixation counts and viewing time directed at facial features (eyes, nose, and mouth areas) were calculated. Results indicated that crying expressions were processed and recognized faster than smiling expressions. Across both expressions, the eyes and nose areas received more attention than the mouth area, but for smiling facial expressions, participants fixated longer on the mouth area. It seems that proportional gaze allocation at facial features was quantitatively modulated by the different expressions, but overall gaze distribution was qualitatively similar across crying and smiling facial expressions. Moreover, eye movements showed that visual attention was modulated by face gender: participants looked longer at female faces with smiling expressions relative to male faces. Findings are discussed in terms of the perceptual mechanisms underlying facial expression recognition and the interaction between gender and expression processing.
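As a concrete illustration of the proportional measures described above, the following minimal sketch computes the proportion of fixation counts and of viewing time directed at each facial region from a fixation log. The data, region labels, and variable names are invented for the example and are not taken from the study.

from collections import defaultdict

# Hypothetical fixation log for one trial: (region, duration in ms).
fixations = [("eyes", 310), ("nose", 250), ("eyes", 180), ("mouth", 120)]

counts = defaultdict(int)
viewing_time = defaultdict(float)
for region, duration in fixations:
    counts[region] += 1          # fixation count per region
    viewing_time[region] += duration  # total dwell time per region

total_count = sum(counts.values())
total_time = sum(viewing_time.values())
for region in ("eyes", "nose", "mouth"):
    print(f"{region}: {counts[region] / total_count:.2f} of fixation counts, "
          f"{viewing_time[region] / total_time:.2f} of viewing time")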

2021 ◽  
Vol 5 (3) ◽  
pp. 13
Author(s):  
Heting Wang ◽  
Vidya Gaddy ◽  
James Ross Beveridge ◽  
Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this study introduced an avatar named Diana, who expresses a higher level of emotional intelligence. To adapt to users' various affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were tested: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert scale questionnaire and the NASA TLX. Questionnaire results did not differ statistically across modes. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, although four mentioned discomfort caused by the Uncanny Valley effect.


2021 ◽  
Author(s):  
Nicole X Han ◽  
Puneeth N. Chakravarthula ◽  
Miguel P. Eckstein

Face processing is a fast and efficient process due to its evolutionary and social importance. A majority of people direct their first eye movement to a featureless point just below the eyes that maximizes accuracy in recognizing a person's identity and gender. Yet, the exact properties or features of the face that guide the first eye movements and reduce fixational variability are unknown. Here, we manipulated the presence of facial features and the spatial configuration of features to investigate their effect on the location and variability of first and second fixations to peripherally presented faces. Results showed that observers can utilize the face outline, individual facial features, and feature spatial configuration to guide their first eye movements to their preferred point of fixation. The eyes have a preferential role in guiding the first eye movements and reducing fixation variability. Eliminating the eyes or altering their position had the greatest influence on the location and variability of fixations and resulted in the largest detriment to face identification performance. The other internal features (nose and mouth) also contribute to reducing fixation variability. A subsequent experiment measuring detection of single features showed that the eyes have the highest detectability (relative to other features) in the visual periphery, providing a strong sensory signal to guide the oculomotor system. Together, the results suggest a flexible, multiple-cue approach that may be a robust solution for coping with real-world variations in eccentricity, which influence the ability to resolve individual feature properties, and they underscore the preferential role of the eyes.


2017 ◽  
Vol 7 (2) ◽  
pp. 177-202
Author(s):  
James A. Clinton ◽  
Stephen W. Briner ◽  
Andrew M. Sherrill ◽  
Thomas Ackerman ◽  
Joseph P. Magliano

Abstract Filmmakers must rely on cinematic devices of perspective (close-ups and point-of-view shot sequencing) to emphasize facial expressions associated with affective states. This study explored the extent to which differences in the use of these devices across two films with the same content lead to differences in the understanding of characters' affective states. Participants viewed one of two versions of the films and made affective judgments about how characters felt about one another with respect to sadness and anger. The extent to which the auditory and visual contexts were present when making the judgments was varied across four experiments. The results showed that judgments about sadness differed across the two films, but only when the entire context (sound and visual input) was present. The results are discussed in terms of the role of facial expressions and context in inferring basic emotions.


2020 ◽  
pp. 030573562095846
Author(s):  
Nieves Fuentes-Sánchez ◽  
Raúl Pastor ◽  
Tuomas Eerola ◽  
M Carmen Pastor

A review of the literature reveals several conceptual and methodological challenges in the field of music and emotion, such as the lack of agreement on standardized datasets and the need to replicate prior findings. Our study aimed to validate, for a Spanish population, a set of film music stimuli previously standardized with Finnish samples. In addition, we explored the role of gender and culture in the perception of emotions through music using 102 excerpts selected from Eerola and Vuoskoski's dataset. A total of 129 undergraduate students (71.32% female) from different degree programs participated voluntarily in this study; they were instructed to rate both discrete emotions (Happiness, Sadness, Tenderness, Fear, Anger) and affective dimensions (Valence, Energy Arousal, Tension Arousal) on a 9-point scale after the presentation of each excerpt. Strong similarities between Finnish and Spanish ratings were found, with only minor discrepancies across samples in the evaluation of basic emotions. Taken together, our findings suggest that this database is suitable for future research on music and emotions. Additional theoretical and practical implications of this validation are discussed.


Emotion recognition is of growing significance in the modern era. Among the many ways to perform it, facial expression detection stands out because expressions arise spontaneously from mental states rather than from conscious effort. Emotions often rule us through our choices, actions, and perceptions, which are in turn shaped by the emotions that overpower us. Happiness, sadness, fear, disgust, anger, neutrality, and surprise are the seven basic emotions humans express most frequently. In this era of automation and human–computer interaction, making machines detect emotions is a difficult and tedious task. Facial expressions are the medium through which emotions are shown; to detect a person's facial expression, colour, orientation, lighting, and posture are of significant importance. Hence, the movements associated with the eyes, nose, lips, etc. play a major role in differentiating facial features. These facial features are then classified and compared against the trained data. In this paper, we constructed a Convolutional Neural Network (CNN) model and then recognised different emotions on a particular dataset. We evaluated the accuracy of the model, with the main aim of minimising the loss. We used the Adam optimizer, sparse categorical cross-entropy as the loss function, and softmax as the output activation. The results we obtained are quite accurate and can be used for further research in this field.
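For concreteness, the following is a minimal sketch of a model matching the configuration described above (Adam optimizer, sparse categorical cross-entropy loss, softmax output over seven emotion classes). The 48x48 grayscale input shape and the layer sizes are illustrative assumptions, since the abstract does not specify the architecture; a Keras-style implementation is shown.

from tensorflow.keras import layers, models

NUM_CLASSES = 7  # happiness, sadness, fear, disgust, anger, neutral, surprise

# Illustrative architecture; only the optimizer, loss, and softmax output
# come from the abstract -- the input shape and layer sizes are assumed.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),  # assumed 48x48 grayscale face images
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
])

# Sparse categorical cross-entropy expects labels as integer class
# indices (0-6) rather than one-hot vectors.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training on a labeled dataset would then look like:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30)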


2012 ◽  
Vol 110 (1) ◽  
pp. 338-350 ◽  
Author(s):  
Mariano Chóliz ◽  
Enrique G. Fernández-Abascal

Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for the basic emotions: happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effect of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before each picture of a facial expression. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.


2013 ◽  
Vol 44 (2) ◽  
pp. 232-238
Author(s):  
Władysław Łosiak ◽  
Joanna Siedlecka

Abstract Deficits in the recognition of facial expressions of emotion are considered an important factor in explaining the impairments in social functioning and affective reactions of schizophrenic patients. Many studies have confirmed such deficits, while controversy remains concerning emotion valence and modality. The aim of this study was to explore the process of recognizing facial expressions of emotion in a group of schizophrenic patients by analyzing the roles of emotion valence, modality, and the gender of the model. Results from a group of 35 patients and 35 matched controls indicate that, while schizophrenic patients show a general impairment in recognizing facial expressions of both positive and most negative emotions, the deficits differ for particular emotions. Expressions also appeared more ambiguous to the patients, whereas variables connected with gender were less significant.


Crisis ◽  
2020 ◽  
pp. 1-8
Author(s):  
Chao S. Hu ◽  
Jiajia Ji ◽  
Jinhao Huang ◽  
Zhe Feng ◽  
Dong Xie ◽  
...  

Abstract. Background: High school and university teachers need to advise students against attempting suicide, the second leading cause of death among 15–29-year-olds. Aims: To investigate the role of reasoning and emotion in advising against suicide. Method: We conducted a study with 130 students at a university that specializes in teacher education. Participants sat in front of a camera that videotaped them as they gave advice against suicide. Three raters scored the transcribed advice on "wise reasoning" (i.e., expert forms of reasoning: considering a variety of conditions, awareness of the limitations of one's knowledge, taking others' perspectives). Four registered psychologists experienced in suicide prevention techniques rated the transcripts on their potential for suicide prevention. Finally, using the software Facereader 7.1, we analyzed participants' micro-facial expressions during advice-giving. Results: Wiser reasoning and less disgust predicted higher potential for suicide prevention. Moreover, higher potential for suicide prevention was associated with more surprise. Limitations: The actual efficacy of the advice for suicide prevention was not assessed. Conclusion: Wise reasoning and counter-stereotypic ideas that trigger surprise probably contribute to the potential for suicide prevention. This advising paradigm may help train teachers to advise students against suicide, measure wise reasoning, and monitor a harmful emotional reaction, namely disgust.


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.


2000 ◽  
Author(s):  
Erika Felix ◽  
Anjali T. Naik-Polan ◽  
Christine Sloss ◽  
Lashaunda Poindexter ◽  
Karen S. Budd
