Psychological functioning in survivors of COVID-19: Evidence from recognition of fearful facial expressions

PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254438
Author(s):  
Federica Scarpina ◽  
Marco Godi ◽  
Stefano Corna ◽  
Ionathan Seitanidis ◽  
Paolo Capodaglio ◽  
...  

Evidence about psychological functioning in individuals who survived COVID-19 infection is still rare in the literature. In this paper, we investigated the recognition of fearful facial expressions as a behavioural means to assess psychological functioning. From May 15th, 2020 to January 30th, 2021, we enrolled sixty Italian individuals admitted to multiple Italian COVID-19 post-intensive care units. The detection and recognition of fearful facial expressions were assessed through an experimental task grounded in an attentional mechanism (i.e., the redundant target effect). According to the results, our participants showed altered behaviour in detecting and recognizing fearful expressions; specifically, their performance was at odds with the expected behavioural effect. Our study suggests altered processing of fearful expressions in individuals who survived COVID-19 infection. Such a difficulty might represent a crucial sign of psychological distress, and it should be addressed in tailored psychological interventions in rehabilitative settings and after discharge.

2021 ◽  
Author(s):  
Hongxiang Gao ◽  
Shan An ◽  
Jianqing Li ◽  
Chengyu Liu

2020 ◽  
Vol 9 (3) ◽  
pp. 1208-1219
Author(s):  
Hendra Kusuma ◽  
Muhammad Attamimi ◽  
Hasby Fahrudin

In general, good interaction, including communication, is achieved when verbal and non-verbal information such as body movements, gestures, and facial expressions can be processed in both directions between speaker and listener. The facial expression in particular is one indicator of the inner state of the speaker and/or the listener during communication. Recognizing facial expressions is therefore a necessary and important ability in communication, and one that is challenging for visually impaired persons. This fact motivated us to develop a facial expression recognition system. Our system is based on a deep learning algorithm, and we implemented it on a wearable device that enables visually impaired persons to recognize facial expressions during communication. We conducted several experiments involving visually impaired persons to validate the proposed system, and promising results were achieved.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Andry Chowanda

Social interactions are important for us humans as social creatures, and emotions play an important part in them: they usually convey meaning alongside the spoken utterances to the interlocutors. Automatic facial expression recognition is one technique to automatically capture, recognise, and understand emotions from the interlocutor. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues; architectures such as convolutional neural networks demonstrate promising results. However, most current convolutional neural network models require enormous computational power to train and to perform emotion recognition. This research aims to build compact networks with depthwise separable layers while maintaining performance. Three datasets and three similar architectures were used for comparison with the proposed architecture. The results show that the proposed architecture performed best among the alternatives: it achieved up to 13% higher accuracy while being 6–71% smaller and more compact than the other architectures. The best testing accuracy achieved by the architecture was 99.4%.
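The parameter savings that depthwise separable layers offer over standard convolutions can be illustrated with a quick count. This sketch is not the paper's architecture; the kernel size and channel counts below are illustrative, and bias terms are omitted:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: every output channel mixes all input
    # channels with its own k x k kernel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise stage: one k x k kernel per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernels, 128 input channels, 256 output channels.
standard = conv_params(3, 128, 256)                   # 294,912 parameters
separable = depthwise_separable_params(3, 128, 256)   # 33,920 parameters
print(standard, separable, round(standard / separable, 1))  # ~8.7x fewer
```

Stacking many such layers is what lets a network stay compact while keeping the representational pattern of convolution followed by channel mixing.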


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 259-259 ◽  
Author(s):  
C A Marzi ◽  
G Nitro ◽  
M Prior

We measured the duration of central visual persistence by testing normal subjects for the redundant target effect (RTE), i.e., the speeding up of reaction time to redundant visual stimuli in comparison to similar single stimuli. Brief LED-generated flashes were presented to normal subjects either singly or in a pair at peripheral visual field locations (5 or 30 deg along the horizontal meridian). Stimulus pairs could appear either in the same hemifield at different locations or in opposite hemifields, with a stimulus onset asynchrony (SOA) ranging between 0 and 100 ms. The subject's task was to press a key as soon as possible following the appearance of either a single stimulus or the first stimulus in a pair. We found a robust and consistent overall RTE, with double stimuli yielding faster RTs than single stimuli for both intrafield and interfield presentations. The effect decreased significantly from 0 ms to 40 ms SOA, and at longer SOAs the speed of response to stimulus pairs was indistinguishable from that to a single stimulus. We believe that the longest SOA compatible with a reliable RTE (40 ms) reflects the duration of central persistence. Evoked-potential evidence gathered in our laboratory suggests that the locus of such persistence may be the extrastriate visual cortex.
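One standard account of the RTE is statistical facilitation: each stimulus drives its own processing channel, and the response is triggered by whichever channel finishes first, so the minimum of two channel times is on average faster than either alone, and the gain shrinks as the SOA delays the second channel. A minimal simulation of that race account (the RT distribution and its parameters are illustrative, not taken from the study):

```python
import random

random.seed(0)

def single_rt():
    # Illustrative reaction-time distribution for one channel (ms).
    return random.gauss(300, 40)

def redundant_rt(soa=0):
    # Race model: with two stimuli, the faster channel wins.
    # The second stimulus starts soa ms after the first.
    return min(single_rt(), soa + single_rt())

n = 20000
mean_single = sum(single_rt() for _ in range(n)) / n
mean_sync = sum(redundant_rt(0) for _ in range(n)) / n
mean_late = sum(redundant_rt(100) for _ in range(n)) / n

# The redundancy gain is largest at SOA = 0 and essentially vanishes
# at long SOAs, matching the qualitative pattern reported above.
print(round(mean_single), round(mean_sync), round(mean_late))
```

Note that the race account predicts the gain falls off with SOA purely through statistics; the study's point is that the SOA beyond which the gain disappears (about 40 ms) can be read as an estimate of central persistence.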

