The effect of personality and social context on willingness to share emotion information on social media

2018 ◽  
Author(s):  
Damien Dupré ◽  
Nicole Andelic ◽  
Anna Zajac ◽  
Gawain Morrison ◽  
Gary John McKeown

Sharing personal information is an important way of communicating on social media. Among the kinds of information that can be shared, new sensors and tools now allow people to share emotion information derived from facial emotion recognition. This paper asks whether people are prepared to share personal information such as their own emotions on social media. In the current study we examined how factors such as felt emotion, motivation for sharing on social media, and personality affected participants’ willingness to share self-reported emotion or facial expressions online. A generalized linear mixed model (GLMM) analysis showed that participants’ willingness to share self-reported emotions and facial expressions was influenced by their personality traits and by the motivation they were given for sharing their emotion information. From our results we conclude that the estimated level of privacy for certain emotional information, such as facial expressions, is influenced by the motivation for sharing that information online.
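Below is a minimal, purely illustrative sketch of how such a mixed-effects model could be specified in Python, assuming a binary willingness-to-share response with a random intercept per participant; the column names, file name, and the use of statsmodels' variational-Bayes binomial GLMM are assumptions made for the example, not details taken from the study.

```python
# Illustrative sketch only: a binomial GLMM relating willingness to share to
# personality and sharing motivation, with a random intercept per participant.
# Column names and the data file are hypothetical, not from the original study.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("sharing_study.csv")  # hypothetical data file

model = BinomialBayesMixedGLM.from_formula(
    "willing_to_share ~ felt_emotion + motivation + extraversion + neuroticism",
    {"participant": "0 + C(participant_id)"},  # random intercept per participant
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```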

Emotion recognition is of significance in the modern context. Among the many ways to perform it, one is facial expression detection, since facial expressions are a spontaneous display of mental state rather than a conscious effort. Emotions often shape our choices, actions, and perceptions. Happiness, sadness, fear, disgust, anger, neutrality, and surprise are the seven basic emotional states a human expresses most frequently. In this era of automation and human-computer interaction, making machines detect emotions is a difficult and tedious job. Facial expressions are the medium through which emotions are shown, and colour, orientation, lighting, and posture are all significant for detecting them; the movements of the eyes, nose, lips, and other facial regions play a major role in differentiating facial features. These features are then classified and compared against the training data. In this paper, we construct a Convolutional Neural Network (CNN) model and recognise different emotions on a particular dataset. We report the model’s accuracy, with the main aim of minimising the loss. We use the Adam optimizer, sparse categorical cross-entropy as the loss function, and softmax as the output activation. The results obtained are quite accurate and can be used for further research in this field.
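As a hedged illustration of the setup described above (Adam optimizer, sparse categorical cross-entropy loss, softmax output), the following Keras sketch shows one plausible minimal CNN for seven-class facial expression recognition; the layer sizes and the 48x48 grayscale input are assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a CNN for 7-class facial expression recognition, compiled
# with the Adam optimizer, sparse categorical cross-entropy, and softmax output
# as named in the abstract. Layer sizes and 48x48 grayscale input are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),  # 7 basic emotion classes
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer emotion labels
    metrics=["accuracy"],
)
# model.fit(x_train, y_train, validation_split=0.1, epochs=20)
```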


2021 ◽  
pp. 1-10
Author(s):  
Daniel T. Burley ◽  
Christopher W. Hobson ◽  
Dolapo Adegboye ◽  
Katherine H. Shelton ◽  
Stephanie H.M. van Goozen

Impaired facial emotion recognition is a transdiagnostic risk factor for a range of psychiatric disorders. Childhood behavioral difficulties and parental emotional environment have been independently associated with impaired emotion recognition; however, no study has examined the contribution of these factors in conjunction. We measured recognition of negative (sad, fear, anger), neutral, and happy facial expressions in 135 children aged 5–7 years referred by their teachers for behavioral problems. Parental emotional environment was assessed for parental expressed emotion (EE) – characterized by negative comments, reduced positive comments, low warmth, and negativity towards their child – using the 5-minute speech sample. Child behavioral problems were measured using the teacher-informant Strengths and Difficulties Questionnaire (SDQ). Child behavioral problems and parental EE were independently associated with impaired recognition of negative facial expressions specifically. An interactive effect revealed that the combination of both factors was associated with the greatest risk for impaired recognition of negative faces, and in particular sad facial expressions. No relationships emerged for the identification of happy facial expressions. This study furthers our understanding of multidimensional processes associated with the development of facial emotion recognition and supports the importance of early interventions that target this domain.


2017 ◽  
Vol 29 (5) ◽  
pp. 1749-1761 ◽  
Author(s):  
Johanna Bick ◽  
Rhiannon Luyster ◽  
Nathan A. Fox ◽  
Charles H. Zeanah ◽  
Charles A. Nelson

We examined facial emotion recognition in 12-year-olds in a longitudinally followed sample of children with and without exposure to early life psychosocial deprivation (institutional care). Half of the institutionally reared children were randomized into foster care homes during the first years of life. Facial emotion recognition was examined in a behavioral task using morphed images. This same task had been administered when children were 8 years old. Neutral facial expressions were morphed with happy, sad, angry, and fearful emotional facial expressions, and children were asked to identify the emotion of each face, which varied in intensity. Consistent with our previous report, we show that some areas of emotion processing, involving the recognition of happy and fearful faces, are affected by early deprivation, whereas other areas, involving the recognition of sad and angry faces, appear to be unaffected. We also show that early intervention can have a lasting positive impact, normalizing developmental trajectories of processing negative emotions (fear) into the late childhood/preadolescent period.


2022 ◽  
Vol 29 (2) ◽  
pp. 1-59
Author(s):  
Joni Salminen ◽  
Sercan Şengün ◽  
João M. Santos ◽  
Soon-Gyo Jung ◽  
Bernard Jansen

There has been little research into whether a persona's picture should portray a happy or unhappy individual. We report a user experiment with 235 participants, testing the effects of happy and unhappy image styles on user perceptions, engagement, and personality traits attributed to personas, using a mixed-methods analysis. Results indicate that participants’ perceptions of a persona's realism and pain point severity increase with the use of unhappy pictures. In contrast, personas with happy pictures are perceived as more extroverted, agreeable, open, conscientious, and emotionally stable. The design ideas participants proposed for the personas showed higher lexical empathy scores for happy personas. There were also significant differences along gender and ethnic lines in both empathy and perceptions of pain points. The implication is that the facial expression in a persona profile can affect the perceptions of those employing the persona. Therefore, persona designers should align facial expressions with the task for which the personas will be employed. Generally, unhappy images emphasize realism and pain point severity, while happy images invoke positive perceptions.


2021 ◽  
Vol 12 ◽  
Author(s):  
Paula J. Webster ◽  
Shuo Wang ◽  
Xin Li

Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER – namely facial emotion expression (FEE), or the production of facial expressions of emotion. Although FEE has been studied far less, social interaction involves both the ability to recognize emotions and the ability to produce appropriate facial expressions. How others perceive facial expressions of emotion in those with ASD has remained an under-researched area. Finally, we propose a method for teaching FER [FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues or (2) teaching in a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate for autism interventionists to use FER stimuli developed primarily for research purposes to facilitate the incorporation of well-controlled stimuli to teach FER and bridge the gap between intervention and research in this area.


2021 ◽  
Vol 12 ◽  
Author(s):  
Xiaoxiao Li

In the natural environment, facial and bodily expressions influence each other. Previous research has shown that bodily expressions significantly influence the perception of facial expressions. However, little is known about the cognitive processing of facial and bodily emotional expressions and its temporal characteristics. Therefore, this study presented facial and bodily expressions, both separately and together, to examine the electrophysiological mechanism of emotion recognition using event-related potentials (ERPs). Participants assessed the emotions of facial and bodily expressions that varied by valence (positive/negative) and consistency (matching/non-matching emotions). The results showed that bodily expressions induced a more positive P1 component with a shortened latency, whereas facial expressions triggered a more negative N170 with a prolonged latency. Of the later components, N2 was more sensitive to inconsistent emotional information, whereas P3 was more sensitive to consistent emotional information. The cognitive processing of facial and bodily expressions had distinctive integrating features, with the interaction occurring at an early stage (N170). The results of the study highlight the importance of facial and bodily expressions in the cognitive processing of emotion recognition.


2020 ◽  
Vol 28 (1) ◽  
pp. 97-111
Author(s):  
Nadir Kamel Benamara ◽  
Mikel Val-Calvo ◽  
Jose Ramón Álvarez-Sánchez ◽  
Alejandro Díaz-Morcillo ◽  
Jose Manuel Ferrández-Vicente ◽  
...  

Facial emotion recognition (FER) has been extensively researched over the past two decades due to its direct impact on the computer vision and affective robotics fields. However, the datasets available to train these models often include mislabelled data, due to labeller bias, which drives models to learn incorrect features. In this paper, a facial emotion recognition system is proposed that addresses automatic face detection and facial expression recognition separately; the latter is performed by an ensemble of only four deep convolutional neural networks, while a label smoothing technique is applied to deal with the mislabelled training data. The proposed system takes only 13.48 ms using a dedicated graphics processing unit (GPU) and 141.97 ms using a CPU to recognize facial emotions, and it reaches state-of-the-art performance on the challenging FER2013, SFEW 2.0, and ExpW databases, with recognition accuracies of 72.72%, 51.97%, and 71.82%, respectively.
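As a generic illustration of the label smoothing technique mentioned above, the snippet below shows how smoothed targets soften categorical cross-entropy in Keras; the smoothing factor of 0.1 is an arbitrary example value, not the one used in the paper.

```python
# Generic illustration of label smoothing with categorical cross-entropy, the
# technique used to reduce the impact of mislabelled training data.
# The smoothing factor 0.1 is an arbitrary example, not the paper's value.
import tensorflow as tf

loss_fn = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)

# With smoothing 0.1 and 7 classes, a one-hot target [0, ..., 1, ..., 0] becomes
# [0.1/7, ..., 1 - 0.1 + 0.1/7, ..., 0.1/7], so the model is never pushed to
# produce fully saturated (over-confident) probabilities for noisy labels.
y_true = tf.one_hot([3], depth=7)
y_pred = tf.constant([[0.02, 0.02, 0.02, 0.88, 0.02, 0.02, 0.02]])
print(float(loss_fn(y_true, y_pred)))
```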


Informatics ◽  
2020 ◽  
Vol 7 (1) ◽  
pp. 6 ◽  
Author(s):  
Abdulrahman Alreshidi ◽  
Mohib Ullah

Facial emotion recognition is a crucial task for human-computer interaction, autonomous vehicles, and a multitude of multimedia applications. In this paper, we propose a modular framework for recognizing human facial emotions. The framework consists of two machine learning algorithms (for detection and classification) that can be trained offline for real-time applications. Initially, we detect faces in the images using AdaBoost cascade classifiers. We then extract neighborhood difference features (NDF), which represent the features of a face based on localized appearance information. The NDF models different patterns based on the relationships between neighboring regions themselves instead of considering only intensity information. The study focuses on the seven most important facial expressions, which are used extensively in day-to-day life. However, due to the modular design of the framework, it can be extended to classify an arbitrary number of facial expressions. For facial expression classification, we train a random forest classifier with a latent emotional state that handles mis- and false detections. Additionally, the proposed method is independent of gender and facial skin color for emotion recognition. Moreover, due to the intrinsic design of NDF, the proposed method is illumination and orientation invariant. We evaluate our method on different benchmark datasets and compare it with five reference methods. In terms of accuracy, the proposed method gives 13% and 24% better results than the reference methods on the static facial expressions in the wild (SFEW) and real-world affective faces (RAF) datasets, respectively.
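The following simplified Python sketch illustrates the two-stage pipeline described above, using OpenCV's AdaBoost-trained Haar cascade for face detection and a scikit-learn random forest for classification; the neighborhood-difference feature shown (differences between mean intensities of adjacent grid cells) is only an illustrative stand-in for the paper's NDF, not its exact definition.

```python
# Simplified sketch of the two-stage pipeline: AdaBoost cascade face detection
# followed by a random forest emotion classifier. The "neighborhood difference"
# feature below is an illustrative stand-in for the paper's NDF, not its
# exact formulation.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def neighborhood_difference_features(face, grid=8):
    """Mean-intensity differences between horizontally/vertically adjacent cells."""
    face = cv2.resize(face, (grid * 8, grid * 8))
    cells = face.reshape(grid, 8, grid, 8).mean(axis=(1, 3))  # grid x grid cell means
    horiz = np.diff(cells, axis=1).ravel()   # differences between column neighbors
    vert = np.diff(cells, axis=0).ravel()    # differences between row neighbors
    return np.concatenate([horiz, vert])

def extract_faces(gray_image):
    """Detect faces with the AdaBoost cascade and return cropped face regions."""
    boxes = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    return [gray_image[y:y + h, x:x + w] for (x, y, w, h) in boxes]

# Hypothetical training data: rows of X are NDF-style vectors, y are emotion labels 0-6.
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# preds = clf.predict([neighborhood_difference_features(f) for f in extract_faces(img)])
```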


2019 ◽  
Vol 29 (10) ◽  
pp. 1441-1451 ◽  
Author(s):  
Melina Nicole Kyranides ◽  
Kostas A. Fanti ◽  
Maria Petridou ◽  
Eva R. Kimonis

Individuals with callous-unemotional (CU) traits show deficits in facial emotion recognition. According to preliminary research, this impairment may be due to attentional neglect of people's eyes when evaluating emotionally expressive faces. However, it is unknown whether this atypical processing pattern is unique to established variants of CU traits or modifiable with intervention. This study examined facial affect recognition and gaze patterns among individuals (N = 80; M age = 19.95, SD = 1.01 years; 50% female) with primary vs secondary CU variants. These groups were identified based on repeated measurements of conduct problems, CU traits, and anxiety assessed in adolescence and adulthood. Accuracy and number of fixations on areas of interest (forehead, eyes, and mouth) while viewing six dynamic emotions were assessed. A visual probe was used to direct attention to various parts of the face. Individuals with primary and secondary CU traits were less accurate than controls in recognizing facial expressions across all emotions. Those identified in the low-anxious primary-CU group showed reduced overall fixations to fearful and painful facial expressions compared to those in the high-anxious secondary-CU group. This difference was not specific to a region of the face (i.e. eyes or mouth). Findings point to the importance of investigating both accuracy and eye gaze fixations, since individuals in the primary and secondary groups were only differentiated in the way they attended to specific facial expressions. These findings have implications for differentiated interventions focused on improving facial emotion recognition with regard to attending to and correctly identifying emotions.


Author(s):  
Izabela Krejtz ◽  
Krzysztof Krejtz ◽  
Katarzyna Wisiecka ◽  
Marta Abramczyk ◽  
Michał Olszanowski ◽  
...  

The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.

