“There Is No (Where a) Face Like Home”: Recognition and Appraisal Responses to Masked Facial Dialects of Emotion in Four Different National Cultures

Perception ◽  
2021 ◽  
pp. 030100662110559
Author(s):  
Myron Tsikandilakis ◽  
Zhaoliang Yu ◽  
Leonie Kausel ◽  
Gonzalo Boncompte ◽  
Renzo C. Lanfranco ◽  
...  

The theory of universal emotions suggests that certain emotions, such as fear, anger, disgust, sadness, surprise and happiness, can be encountered cross-culturally. These emotions are expressed using specific facial movements that enable human communication. More recently, theoretical and empirical models have proposed that universal emotions could be expressed via discretely different facial movements in different cultures, due to the non-convergent social evolution that takes place in different geographical areas. This has prompted the consideration that own-culture emotional faces have distinct, evolutionarily important sociobiological value and can be processed automatically and without conscious awareness. In this paper, we tested this hypothesis using backward masking. In two experiments per country of origin, we presented backward-masked own- and other-culture emotional faces to participants in Britain, Chile, New Zealand and Singapore. We assessed detection and recognition performance, and self-reports of emotionality and familiarity. Using Bayesian assessment of non-parametric receiver operating characteristics and hit-versus-miss detection and recognition response analyses, we provide thorough cross-cultural experimental evidence that masked faces showing own-culture dialects of emotion were rated higher for emotionality and familiarity than other-culture emotional faces, and that this effect involved conscious awareness.
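As an informal illustration of the non-parametric ROC analysis the abstract refers to (a sketch only: the confidence ratings, the 1–6 scale, and the `auc_nonparametric` helper below are hypothetical, and the authors' Bayesian assessment is not reproduced here), the area under an ROC curve can be estimated directly from the ranks of detection responses:

```python
# Minimal sketch of a non-parametric ROC analysis of detection responses.
# Hypothetical data; this is not the authors' analysis pipeline.

def auc_nonparametric(signal_ratings, noise_ratings):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen signal trial receives a
    higher confidence rating than a randomly chosen noise trial
    (ties count as 0.5). No distributional assumptions are made."""
    wins = 0.0
    for s in signal_ratings:
        for n in noise_ratings:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_ratings) * len(noise_ratings))

# Hypothetical confidence ratings (1-6) for face-present vs. mask-only trials.
present = [5, 6, 4, 5, 3, 6]
absent = [2, 3, 1, 4, 2, 3]
print(auc_nonparametric(present, absent))
```

An AUC of 0.5 indicates chance-level detection; values near 1.0 indicate that masked faces were reliably detected, which is how conscious awareness of the stimuli can be assessed.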

2012 ◽  
Vol 43 (3) ◽  
pp. 167-172
Author(s):  
Remigiusz Szczepanowski ◽  
Agata Sobków

The present report examined the hypothesis that two distinct visual routes contribute to processing low and high spatial frequencies of fearful facial expressions. Presenting participants with a backward-masking task, we analyzed conscious processing of the spatial-frequency content of emotional faces according to both objective and subjective task-relevant criteria. Fear perception in the presence of low-frequency faces was supported by stronger automaticity, leading to fewer false positives. In contrast, the detection of high-frequency fearful faces was more likely supported by conscious awareness, leading to more true positives.


2020 ◽  
Vol 46 (Supplement_1) ◽  
pp. S93-S93
Author(s):  
Irina Falkenberg ◽  
Huai-Hsuan Tseng ◽  
Gemma Modinos ◽  
Barbara Wild ◽  
Philip McGuire ◽  
...  

Abstract Background Studies indicate that people with schizophrenia and first-episode psychosis experience deficits in their ability to accurately detect and display emotions through facial expressions, and that functioning and symptoms are associated with these deficits. This study aims to examine how emotion recognition and facial emotion expression are related to functioning and symptoms in a sample of individuals at ultra-high risk, individuals with first-episode psychosis and healthy controls. Methods During fMRI, we combined the presentation of emotional faces with the instruction to react with predetermined, assigned facial movements. 18 patients with first-episode psychosis (FEP), 18 individuals at ultra-high risk of psychosis (UHR) and 22 healthy controls (HCs) were examined while viewing happy, sad or neutral faces and were instructed to simultaneously move the corners of their mouths (a) upwards or (b) downwards, or (c) to refrain from movement. The subjects’ facial movements were recorded with an MR-compatible video camera. Results Neurofunctional and behavioral responses to emotional faces were measured. Analyses have only recently commenced and are ongoing. Full results of the clinical and functional impact of the behavioral and neuroimaging findings will be presented at the meeting. Discussion Increased knowledge about abnormalities in emotion recognition and behaviour, as well as their neural correlates and their impact on clinical measures and functional outcome, can inform the development of novel treatment approaches to improve social skills early in the course of schizophrenia and psychotic disorders.


1992 ◽  
Vol 35 (4) ◽  
pp. 942-949 ◽  
Author(s):  
Christopher W. Turner ◽  
David A. Fabry ◽  
Stephanie Barrett ◽  
Amy R. Horwitz

This study examined the possibility that hearing-impaired listeners, in addition to displaying poorer-than-normal recognition of speech presented in background noise, require a larger signal-to-noise ratio for the detection of the speech sounds. Psychometric functions for the detection and recognition of stop consonants were obtained from both normal-hearing and hearing-impaired listeners. When the speech levels were expressed in terms of their short-term spectra, detection of consonants occurred at the same signal-to-noise ratio for both subject groups. In contrast, the hearing-impaired listeners displayed poorer recognition performance than the normal-hearing listeners. These results imply that the higher signal-to-noise ratios required for a given level of recognition by some subjects with hearing loss are not due in any part to a deficit in detection of the signals in the masking noise, but rather are due exclusively to a deficit in recognition.
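The psychometric functions described above can be illustrated with a toy logistic model (a sketch under assumed values: the midpoints, slopes, guess rates and 75%-correct criterion below are hypothetical, chosen only to mirror the reported pattern of equal detection thresholds but a recognition deficit for the hearing-impaired group):

```python
import math

# Illustrative logistic psychometric function: performance as a function of
# signal-to-noise ratio (SNR, in dB). All parameter values are hypothetical.

def psychometric(snr_db, snr50, slope, guess=0.0):
    """Proportion correct as a logistic function of SNR, rising from the
    guess rate at low SNR toward 1.0 at high SNR."""
    return guess + (1.0 - guess) / (1.0 + math.exp(-slope * (snr_db - snr50)))

def threshold(snr50, slope, guess=0.0, criterion=0.75):
    """Invert the logistic to find the SNR giving `criterion` performance."""
    p = (criterion - guess) / (1.0 - guess)
    return snr50 + math.log(p / (1.0 - p)) / slope

# Detection: same midpoint for both groups (the abstract's finding).
print(threshold(snr50=-8.0, slope=1.2))              # normal-hearing detection
print(threshold(snr50=-8.0, slope=1.2))              # hearing-impaired detection
# Recognition: the hearing-impaired curve is shifted to a higher SNR.
print(threshold(snr50=-2.0, slope=0.8, guess=0.25))  # normal-hearing recognition
print(threshold(snr50=2.0, slope=0.8, guess=0.25))   # hearing-impaired recognition
```

Comparing thresholds at a fixed criterion is what allows the detection and recognition deficits to be separated: the detection thresholds coincide, while the recognition threshold for the hearing-impaired group sits at a higher SNR.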


2020 ◽  
Author(s):  
Liuba Papeo ◽  
Etienne Abassi

Detection and recognition of social interactions unfolding in the surroundings is as vital as detection and recognition of faces, bodies, and animate entities in general. We have demonstrated that the visual system is particularly sensitive to a configuration of two bodies facing each other as if interacting. In four experiments using backward masking on healthy adults, we investigated the properties of this dyadic visual representation. We measured the inversion effect (IE), the cost to recognition of seeing bodies upside-down as opposed to upright, as an index of visual sensitivity: the greater the visual sensitivity, the greater the IE. The IE was increased for facing (vs. nonfacing) dyads, whether the head/face direction was visible or not, which implies that visual sensitivity concerns two bodies, not just two faces/heads. Moreover, the difference in IE for facing vs. nonfacing dyads disappeared when one body was replaced by another object. This implies selective sensitivity to a body facing another body, as opposed to a body facing anything. Finally, the IE was reduced when reciprocity was eliminated (one body faced another, but the latter faced away). Thus, the visual system is selectively sensitive to dyadic configurations that approximate a prototypical social exchange, with two bodies spatially close and mutually accessible to one another. These findings reveal visual configural representations encompassing multiple objects, which could provide fast and automatic parsing of complex relationships beyond individual faces or bodies.


2020 ◽  
pp. 52-53
Author(s):  
Shao-Min Hung ◽  
Suzy J. Styles ◽  
Po-Jang Hsieh

The bouba–kiki effect depicts a non-arbitrary mapping between specific shapes and non-words: an angular shape is more often named with a sharp sound like ‘kiki’, while a curved shape is more often matched to a blunter sound like ‘bouba’. This effect reflects a natural tendency toward sound–shape pairing and has been shown to occur among adults with different mother tongues (Ramachandran & Hubbard, 2001), pre-schoolers (Maurer, Pathman, & Mondloch, 2006), and even four-month-olds (Ozturk, Krehm, & Vouloumanos, 2013). These studies establish that similar sound-to-shape mappings occur across cultures and early in development, suggesting the mappings may be innate and possibly universal. However, it remains unclear what level of mental processing gives rise to these percepts: the mappings could rely on introspective processes about ‘goodness-of-fit’, or on automatic sensory processes that are active prior to conscious awareness. Here we designed several experiments to directly examine the automaticity of the bouba–kiki effect; specifically, we examined whether the congruency of a sound–shape pair can be processed before access to awareness.


2020 ◽  
Vol 9 (7) ◽  
pp. 2306
Author(s):  
Vilfredo De Pascalis ◽  
Giuliana Cirillo ◽  
Arianna Vecchio ◽  
Joseph Ciorciari

This study explored the electrocortical correlates of conscious and nonconscious perception of emotionally laden faces in neurotypical adult women with varying levels of autistic-like traits (Autism Spectrum Quotient, AQ). Event-related potentials (ERPs) were recorded during the viewing of backward-masked images of happy, neutral, and sad faces presented either below (16 ms, subliminal) or above (167 ms, supraliminal) the level of visual conscious awareness. Sad compared to happy faces elicited larger frontal-central N1, N2, and occipital P3 waves. We observed larger N1 amplitudes to sad faces than to happy and neutral faces in High-AQ (but not Low-AQ) scorers. Additionally, High-AQ scorers had a relatively larger P3 at the occipital region to sad faces. Regardless of AQ score, subliminally perceived emotional faces elicited shorter N1, N2, and P3 latencies than supraliminal faces. Happy and sad faces had shorter N170 latencies in the supraliminal than in the subliminal condition. High-AQ participants had a longer N1 latency over the occipital region than Low-AQ participants. In Low-AQ individuals (but not in High-AQ ones), emotion recognition with female faces produced a longer N170 latency than with male faces. N4 latency was shorter to female faces than to male faces. These findings are discussed in view of their clinical implications and their extension to autism.


2015 ◽  
Vol 22 (1) ◽  
pp. 1-19 ◽  
Author(s):  
Rosario Caballero ◽  
Carita Paradis

This article has two aims: (i) to give an overview of research on sensory perceptions carried out in different disciplines with different aims, and, on that basis, (ii) to encourage new research based on a balanced socio-sensory-cognitive approach. It emphasizes the need to study sensory meanings in human communication: both in Language with a capital L, focusing on universal phenomena, and across different languages; and within Culture with a capital C, such as parts of the world and political regions, and across different cultures, such as markets, production areas and aesthetic activities. The goal is to stimulate work resulting in more sophisticated, theoretically informed analyses of language use in general, and of the meaning-making of sensory perceptions in particular.


1987 ◽  
Vol 18 (3) ◽  
pp. 249 ◽  
Author(s):  
D.A. Kobus ◽  
J. Russotti ◽  
C. Schlichting ◽  
G. Haskell ◽  
S. Carpenter ◽  
...  

2021 ◽  
Vol 3 (11) ◽  
Author(s):  
Abhra Chaudhuri ◽  
Palaiahnakote Shivakumara ◽  
Pinaki Nath Chowdhury ◽  
Umapada Pal ◽  
Tong Lu ◽  
...  

Abstract For video images with complex actions, achieving accurate text detection and recognition results is very challenging. This paper presents a hybrid model for the classification of action-oriented video images which reduces the complexity of the problem to improve text detection and recognition performance. We consider five categories of genres, namely concert, cooking, craft, teleshopping and yoga. For classifying action-oriented video images, a ResNet50 learns general pixel-distribution-level information, one VGG16 network learns features of Maximally Stable Extremal Regions (MSERs), and another VGG16 learns facial components obtained by a multitask cascaded convolutional network. The approach integrates the outputs of the three above-mentioned models using a fully connected neural network for classification of the five action-oriented image classes. We demonstrated the efficacy of the proposed method by testing it on our own dataset and on two standard datasets: the Scene Text Dataset, which contains 10 classes of scene images with text information, and the Stanford 40 Actions dataset, which contains 40 action classes without text information. Our method outperforms related existing work and significantly enhances the class-specific performance of text detection and recognition. Article highlights The method uses pixel, stable-region and face-component information in a novel way to solve a complex classification problem. The proposed work fuses different deep learning models for successful classification of action-oriented images. Experiments on our own dataset as well as standard datasets show that the proposed model outperforms related state-of-the-art (SOTA) methods.
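The late-fusion idea described in the abstract can be caricatured in a few lines (a toy sketch only: the `backbone` stand-ins, feature sizes, and randomly seeded weights below are hypothetical placeholders for the trained ResNet50 and VGG16 streams, not the authors' implementation):

```python
import random

# Toy sketch of three-stream late fusion: three feature vectors (standing in
# for pixel-distribution, MSER-region, and facial-component features) are
# concatenated and scored by a single toy fully connected layer.

def backbone(image, dim):
    """Stand-in for a CNN backbone: returns a deterministic `dim`-element
    feature vector derived from the input identifier."""
    rng = random.Random(sum(ord(c) for c in image))
    return [rng.random() for _ in range(dim)]

def fuse_and_classify(image, num_classes=5):
    # Three streams, as in the paper's design (shapes are hypothetical).
    features = backbone(image, 8) + backbone(image, 8) + backbone(image, 8)
    # One toy fully connected layer with fixed, randomly seeded weights.
    rng = random.Random(0)
    weights = [[rng.uniform(-1, 1) for _ in features] for _ in range(num_classes)]
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return scores.index(max(scores))  # index of the predicted genre

genres = ["concert", "cooking", "craft", "teleshopping", "yoga"]
print(genres[fuse_and_classify("frame_0001.png")])
```

The design point this mirrors is that each stream can specialize (pixels, stable text regions, faces) while a small fully connected head learns how to weight the three kinds of evidence for the final genre decision.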


2021 ◽  
Vol 21 ◽  
pp. 221-245
Author(s):  
Joanna Puppel ◽  
Alicja Rozpendowska

The communication process allows people to receive and send messages through verbal and nonverbal resources, which play an important role in healthy interpersonal acts. While verbal communication has been the subject of many studies, the present study focuses mainly on a nonverbal aspect: greeting gestures. In this article we analyze which greeting gesture, widely used across different cultures, may evoke a feeling of empathy and thus build the peaceful interactions so needed in human communication nowadays.

