Emotion Intensity
Recently Published Documents

TOTAL DOCUMENTS: 96 (FIVE YEARS: 29)
H-INDEX: 14 (FIVE YEARS: 1)

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262344
Author(s):  
Maria Tsantani ◽  
Vita Podgajecka ◽  
Katie L. H. Gray ◽  
Richard Cook

The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants’ ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant’s future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers’ interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion that the actor intended to convey) was reduced by the presence of a mask for all expressions except anger. Additionally, when viewing all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions that the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs. 3000 ms) or by attitudes towards mask-wearing.
These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.


Author(s):  
Ekenechukwu A. Anikpe ◽  
Ndubuisi Nnanna ◽  
Adebowale O. Adeogun ◽  
Emeka Aniago ◽  
...  

Artistic symbols in many ways act as complementary narrative tools that elevate and define the artist’s message, which can help to generate efficacious consciousness and mood aggregation in beholders. The purpose of this study is to deepen appreciation of the embedded significance of keys as symbolic objects in selected symbolist art by Alex Idoko, which variously represents mystical attributions and significations as understood within different worldviews. Through an interpretive discourse approach relating relevant concepts of symbolism, the study elucidates the symbolic, mythological, mystical and metaphorical denotations and attributions of chains, padlocks and keys in line with Victor Turner’s concept of operational, exegetical and positional meanings. In the end, we observe that the selected works by Idoko subsume a deep and dense creative vision, projecting a deliberate effort to use art as a means of sharing cultural ideas, mystifying aesthetics, propelling curiosity, and heightening mood/emotion intensity.


Children ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 1108
Author(s):  
Koviljka Barisnikov ◽  
Marine Thomasson ◽  
Jennyfer Stutzmann ◽  
Fleur Lejeune

This study assessed two components of face emotion processing, emotion recognition and sensitivity to the intensity of emotion expressions, and their relation in children aged 4 to 12 (N = 216). Results indicated a slower development in the accurate decoding of low-intensity expressions compared to high-intensity ones. Between ages 4 and 12, children discriminated high-intensity expressions better than low-intensity ones. The intensity of expression had a strong impact on overall face expression recognition. High-intensity happiness was better recognized than low-intensity happiness up to age 11, while children aged 4 to 12 had difficulties discriminating between high- and low-intensity sadness. Our results suggest that sensitivity to low-intensity expressions acts as a complementary mediator between age and emotion expression recognition, while this was not the case for the recognition of high-intensity expressions. These results could help in the development of specific interventions for populations presenting socio-cognitive and emotion difficulties.


2021 ◽  
Author(s):  
Himadri Mukherjee ◽  
Hanan Salam ◽  
Alice Othmani ◽  
K.C. Santosh
Keyword(s):  

Author(s):  
Aditya Dwi Pratama ◽  
Muljono ◽  
Farrikh Al Zami ◽  
Catur Supriyanto ◽  
M.A. Soeleman ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 800
Author(s):  
Jongchan Park ◽  
Min-Hyun Kim ◽  
Dong-Geol Choi

Deep learning-based methods have achieved good performance in various recognition benchmarks, mostly by utilizing single modalities. As different modalities contain information complementary to each other, multi-modal methods have been proposed to utilize them implicitly. In this paper, we propose a simple technique, called correspondence learning (CL), which explicitly learns the relationship among multiple modalities. The modalities of the data samples are randomly mixed among different samples. If the modalities come from the same sample (i.e., are not mixed), they have positive correspondence, and vice versa. CL is an auxiliary task in which the model predicts the correspondence among modalities. The model is expected to extract information from each modality to check correspondence, and thereby to achieve better representations for multi-modal recognition tasks. We first validate the proposed method on various multi-modal benchmarks, including the CMU Multimodal Opinion-Level Sentiment Intensity (CMU-MOSI) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) sentiment analysis datasets. In addition, we propose a fraud detection method using the learned correspondence among modalities. To validate this additional usage, we collect a multi-modal dataset for fraud detection using real-world samples from reverse vending machines.
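The mixing step behind the auxiliary correspondence task can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `make_correspondence_batch`, the `mix_prob` parameter, and the two-modality tuples are all assumptions introduced here for clarity.

```python
import random

def make_correspondence_batch(samples, mix_prob=0.5, rng=None):
    """Build training triples for the auxiliary correspondence task.

    Each sample is a (modality_a, modality_b) pair. With probability
    mix_prob, modality_b is swapped in from a different sample, giving
    a negative pair (label 0); otherwise the pair is kept intact
    (label 1). A model would be trained to predict this label
    alongside its main recognition objective.
    """
    rng = rng or random.Random(0)
    n = len(samples)
    batch = []
    for i, (mod_a, mod_b) in enumerate(samples):
        if n > 1 and rng.random() < mix_prob:
            # Draw a random index other than i, so the pair is truly mixed.
            j = rng.randrange(n - 1)
            if j >= i:
                j += 1
            batch.append((mod_a, samples[j][1], 0))
        else:
            batch.append((mod_a, mod_b, 1))
    return batch
```

In practice the correspondence label would feed a small binary classification head on top of the fused representation, with its loss added to the main task loss.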


Healthcare ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 113
Author(s):  
Hao Xiong ◽  
Shangbin Lv

Social media is gradually building an online information environment around health. This environment is filled with many types of users’ emotions regarding food safety, especially negative emotions that can easily cause panic or anger among the population. However, the mechanisms by which this environment affects users’ emotions have not been fully studied. Therefore, from the perspective of communication and social psychology, this study uses the content analysis method to analyze factors affecting social media users’ emotions regarding food safety issues. In total, 371 tweet samples concerning genetically modified food safety on Sina Weibo (similar to Twitter) were coded, measured, and analyzed. The major findings are as follows: (1) Tweet account type, tweet topic, and emotion object were all significantly related to emotion type. Tweet depth and objectivity were both positively affected by emotion type, and objectivity had the greater impact. (2) Account type, tweet topic, and emotion object were all significantly related to emotion intensity. At equal depths, emotion intensity became stronger as objectivity decreased. (3) Account type, tweet topic, emotion object, and emotion type were all significantly related to a user’s emotion communication capacity. Tweet depth, objectivity, and the user’s emotion intensity were positively correlated with emotion communication capacity. Positive emotions had stronger communication capacities than negative ones, which is not consistent with previous studies. These findings help us to understand, both theoretically and practically, the changes and dissemination of users’ emotions in a food safety and health information environment.

