Men’s Baldness Stigma and Body Dissatisfaction

2021 ◽  
Vol 4 (1) ◽  
pp. e68-e82
Author(s):  
Glen Jankowski ◽  
Michael Sherwin ◽  
Nova Deighton-Smith

Introduction: Head hair forms a central component of the sociocultural male appearance ideal (e.g., mesomorphic, tall, young and not bald) and carries masculine connotations and stigma. Immense pressures to conform to this male appearance ideal give rise to body dissatisfaction. Previous assessments of body dissatisfaction are too narrow, ignoring dissatisfaction beyond mesomorphy, such as baldness dissatisfaction. Our study involved two research questions: (i) Do the facial expressions assigned to images of bald and non-bald men differ? and (ii) What forms of body dissatisfaction, including baldness dissatisfaction, do men have, and are these related to men’s wellbeing and muscularity behaviours? Method: Eighty-six male participants aged 18–58 years (mean = 23.62; standard deviation = 7.80) were randomly exposed to 10 images of smiling men (half balding and half not) and were asked to rate the facial expression displayed. Participants also rated their body dissatisfaction and wellbeing. Ethics statement: Institutional ethics approval was granted. Results: We found that participants interpreted the facial expressions of bald men slightly more negatively than those of non-bald men. Most participants reported some form of body dissatisfaction, which correlated, albeit weakly, with wellbeing and muscularity-enhancing behaviours. Participants also disclosed a range of body dissatisfaction aspects (including muscularity, body fat, teeth alignment, skin tone and facial hair amount), though they were generally neither heavily impacted nor highly dissatisfied. Conclusion: These findings underscore the complex challenge of producing a complete assessment of men’s body dissatisfaction, as well as the general resilience men show in the face of extant appearance pressures around their bodies and head hair.

Author(s):  
José-Miguel Fernández-Dols ◽  
James A. Russell

One of the purposes of the present book is to provide an updated review of the current psychology of facial expression and to acknowledge the growing contribution of neuroscientists, biologists, anthropologists, linguists, and other scientists to this field. Our aim was to allow readers—from lay readers to practitioners to research scientists—to discover the most recent scientific developments in the field and its associated questions and controversies. As will become obvious, the most fundamental questions, such as whether “facial expressions of emotion” in fact express emotions, remain subjects of great controversy. Just as important, readers will find that new research questions and proposals are animating this field.


2000 ◽  
Vol 8 (1) ◽  
pp. 185-235 ◽  
Author(s):  
Christine L. Lisetti ◽  
Diane J. Schiano

We discuss here one of our projects, aimed at developing an automatic facial expression interpreter, mainly in terms of signaled emotions. We present some of the relevant findings on facial expressions from cognitive science and psychology that can be understood by and be useful to researchers in Human-Computer Interaction and Artificial Intelligence. We then give an overview of HCI applications involving automated facial expression recognition, survey some of the latest progress made in this area by various computer vision approaches, and describe the design of our facial expression recognizer. We also give some background on our motivation for understanding facial expressions and propose an architecture for a multimodal intelligent interface capable of recognizing and adapting to computer users’ affective states. Finally, we discuss current interdisciplinary issues and research questions that will need to be addressed for further progress to be made in the promising area of computational facial expression recognition.


2015 ◽  
Vol 17 (4) ◽  
pp. 443-455 ◽  

Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (e.g., the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (e.g., the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results for the study of psychopathologies, and consider several open research questions.


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ faces (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruently) or responding oppositely (incongruently) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must gradually adapt to both social and non-social reinforcements.
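The trial-and-error learning described above can be illustrated with a minimal simulation. This is a hypothetical sketch of the reinforcement-learning component only: a simple delta-rule (Rescorla-Wagner style) value update in which a simulated participant learns whether to reciprocate or oppose each displayed expression to avoid aversive stimulation. All names, the expression-to-correct-action mapping, and parameter values are illustrative assumptions, not the authors' actual model.

```python
import random

def simulate_learner(n_trials=200, alpha=0.2, epsilon=0.1, seed=0):
    """Simulate trial-and-error learning of facial responses."""
    rng = random.Random(seed)
    expressions = ["happy", "angry"]
    actions = ("reciprocate", "oppose")
    # Assumed ground truth: reciprocating happy and opposing angry avoids shock.
    correct = {"happy": "reciprocate", "angry": "oppose"}
    # Action values, one per (expression, action) pair.
    q = {(e, a): 0.0 for e in expressions for a in actions}
    hits = []
    for _ in range(n_trials):
        expr = rng.choice(expressions)
        # Epsilon-greedy choice between the two facial responses.
        if rng.random() < epsilon:
            action = rng.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(expr, a)])
        reward = 1.0 if action == correct[expr] else 0.0  # 0 = aversive outcome
        # Delta-rule update toward the obtained reward.
        q[(expr, action)] += alpha * (reward - q[(expr, action)])
        hits.append(action == correct[expr])
    return sum(hits[-50:]) / 50  # accuracy over the final 50 trials

late_accuracy = simulate_learner()
```

With deterministic feedback the simulated learner's late-trial accuracy approaches the ceiling set by its exploration rate, mirroring the paper's observation that participants learned to optimize their facial behavior over trials.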


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (of the kind used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2019 ◽  
pp. 201-208
Author(s):  
Emeka Promise U. ◽  
Ohagwu Gold Chiamaka

This study was carried out to determine measures for promoting democracy in a depressed economy through business education for national security in Enugu State. Two research questions and two null hypotheses were used for the study. The study adopted a survey research design. The population for the study was 41 business educators from four government-owned tertiary institutions in Enugu State. There was no sampling since the population was manageable. The instrument for data collection was a structured questionnaire developed by the researchers and validated by experts. The reliability of the instrument was determined using Cronbach's alpha, which yielded an overall index of 0.72. Mean and standard deviation were used in answering the research questions, while the hypotheses were tested using the t-test. It was found that the governmental measures promoted democracy through business education for national security. The study also revealed that lecturers' measures likewise promoted democracy through business education for national security. It was recommended that government should make adequate budgetary provision for business education, and that democrats should be involved in business teachers' conferences and seminars.
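The reliability index reported above (0.72) is Cronbach's alpha, which compares the sum of per-item score variances with the variance of respondents' total scores. A minimal sketch of the standard formula follows; the example scores are made up for demonstration and are not the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: one list of scores per item, each of length n_respondents.
    """
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def variance(xs):
        # Sample variance (n - 1 denominator), as is conventional here.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Illustrative data: 3 questionnaire items, 4 respondents.
scores = [[3, 4, 2, 5], [4, 4, 3, 5], [3, 5, 2, 4]]
alpha = cronbach_alpha(scores)
```

Higher alpha indicates that items vary together (respondents who score high on one item score high on the others), which is why it is read as internal-consistency reliability.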


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify whether participants’ facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, or neutral) were neutral or emotional. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, these results suggest that emotional future thinking, at least for future scenarios cued by “happy” and “sad,” triggers the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. The first stage is the processing of video data. The ensemble of regression trees (ERT) method is used to obtain the overall contour of the face. Then, an attention model is used to pick out the parts of the face that are most susceptible to expressions. Under the combined effect of these two methods, an image that can be called a local feature map is obtained. After that, the video data are sent to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the sequence of images, the selection of key parts allows the network to better learn the changes in facial expressions brought about by subtle facial movements. By combining local features and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
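The core idea above, one branch encoding the whole face and one encoding attention-selected key parts, with their features fused before classification, can be sketched in a few lines. This is a conceptual stand-in, not the actual MC-DCN: the "branches" here are simple patch averages rather than convolutional sub-networks, and the key regions are supplied by hand rather than by a learned attention model.

```python
def global_branch(frame):
    """Stand-in for the sub-network seeing the whole face: mean intensity."""
    flat = [px for row in frame for px in row]
    return [sum(flat) / len(flat)]

def local_branch(frame, key_regions):
    """Stand-in for the attention branch: one feature per selected key part."""
    feats = []
    for (r0, r1, c0, c1) in key_regions:
        patch = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        feats.append(sum(patch) / len(patch))
    return feats

def fuse(frame, key_regions):
    """Concatenate global and local features, as in the paper's fusion step."""
    return global_branch(frame) + local_branch(frame, key_regions)

# Toy 2x2 "frame" with one key region (its top row).
frame = [[0.1, 0.2], [0.3, 0.4]]
features = fuse(frame, [(0, 1, 0, 2)])
```

A classifier sitting on `features` would then see both the global trend and the key-part detail, which is the information-combining argument the abstract makes.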


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. Then, the face images in each frame are aligned based on the positions of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions. The spatial features are input to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of the facial expressions. The temporal features are input to a fully connected layer to classify and recognize the facial expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences) and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves competitive performance comparable to state-of-the-art methods but also improves on the AFEW dataset by more than 2%, underscoring its effectiveness for facial expression recognition in natural environments.
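The temporal stage of the cascade above folds per-frame spatial features into a single sequence-level representation with a gated recurrent unit. A minimal, illustrative scalar GRU cell is sketched below; the weights are arbitrary scalars chosen for demonstration, whereas the paper's model would use learned vector-valued parameters over high-dimensional features.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, w):
    """One step of the standard GRU equations with scalar state and input."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])        # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])        # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1 - z) * h + z * h_tilde                        # gated blend

def encode_sequence(xs, w):
    """Fold per-frame spatial features into one temporal feature."""
    h = 0.0
    for x in xs:
        h = gru_step(h, x, w)
    return h

# Arbitrary demonstration weights and a short "feature sequence".
weights = {"wz": 1.0, "uz": 0.5, "bz": 0.0,
           "wr": 1.0, "ur": 0.5, "br": 0.0,
           "wh": 1.0, "uh": 0.5, "bh": 0.0}
h_final = encode_sequence([0.2, 0.5, 0.9], weights)
```

In the full cascade, `h_final` would be a vector handed to the fully connected classification layer; the gating is what lets the unit weigh recent frames against the accumulated expression history.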


2017 ◽  
Vol 9 (4) ◽  
pp. 375-382 ◽  
Author(s):  
David Matsumoto ◽  
Hyisung C. Hwang

We discuss four methodological issues regarding cross-cultural judgment studies of facial expressions of emotion involving design, sampling, stimuli, and dependent variables. We use examples of relatively recent studies in this area to highlight and discuss these issues. We contend that careful consideration of these, and other, cross-cultural methodological issues can help researchers minimize methodological errors, and can guide the field to address new and different research questions that can continue to facilitate an evolution in the field’s thinking about the nature of culture, emotion, and facial expressions.

