Defensive functions provoke similar psychophysiological reactions in reaching and comfort spaces

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
G. Ruggiero ◽  
M. Rapuano ◽  
A. Cartaud ◽  
Y. Coello ◽  
T. Iachini

Abstract The space around the body crucially serves a variety of functions, first and foremost preserving one's own safety and avoiding injury. Recent research has shown that emotional information, in particular threatening facial expressions, affects the regulation of peripersonal-reaching space (PPS, for action with objects) and interpersonal-comfort space (IPS, for social interaction). Here we explored whether emotional facial expressions affect these two spaces similarly or differently in terms of psychophysiological reactions (cardiac inter-beat intervals: IBIs, i.e. the inverse of heart rate; skin conductance response amplitude: SCR amplitude) and spatial distance. Using immersive virtual reality technology, participants determined reaching-distance (PPS) and comfort-distance (IPS) from virtual confederates exhibiting happy, angry, or neutral facial expressions while being approached by them. During these interactions, spatial distance and psychophysiological reactions were recorded. Results revealed that when participants interacted with angry virtual confederates, distance increased similarly in both comfort-social and reaching-action spaces. Moreover, interacting with virtual confederates exhibiting angry rather than happy or neutral expressions provoked similar psychophysiological activations (SCR amplitude, IBIs) in both spaces. Regression analyses showed that psychophysiological activations, particularly SCR amplitude in response to virtual confederates approaching with angry expressions, predicted the increase in PPS and IPS. These findings suggest that self-protection functions could be the expression of a common defensive mechanism shared by social and action spaces.

Author(s):  
Izabela Krejtz ◽  
Krzysztof Krejtz ◽  
Katarzyna Wisiecka ◽  
Marta Abramczyk ◽  
Michał Olszanowski ◽  
...  

Abstract The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.


2021 ◽  
Vol 11 (9) ◽  
pp. 1203 ◽  
Author(s):  
Sara Borgomaneri ◽  
Francesca Vitale ◽  
Simone Battaglia ◽  
Alessio Avenanti

The ability to rapidly process others' emotional signals is crucial for adaptive social interactions. However, to date it remains unclear how observing emotional facial expressions affects the reactivity of the human motor cortex. To provide insights on this issue, we employed single-pulse transcranial magnetic stimulation (TMS) to investigate corticospinal motor excitability. Healthy participants observed happy, fearful and neutral pictures of facial expressions while receiving TMS over the left or right motor cortex at 150 and 300 ms after picture onset. In the early phase (150 ms), we observed an enhancement of corticospinal excitability for the observation of happy and fearful emotional faces compared to neutral expressions, specifically in the right hemisphere. Interindividual differences in the disposition to experience aversive feelings (personal distress) in interpersonal emotional contexts predicted the early increase in corticospinal excitability for emotional faces. No differences in corticospinal excitability were observed at the later time point (300 ms) or in the left M1. These findings support the notion that emotion perception primes the body for action and highlight the role of the right hemisphere in implementing a rapid and transient facilitatory response to emotionally arousing stimuli, such as emotional facial expressions.


2019 ◽  
Author(s):  
Jacob Israelashvili

Previous research has found that individuals vary greatly in emotion differentiation, that is, the extent to which they distinguish between different emotions when reporting on their own feelings. Building on previous work that has shown that emotion differentiation is associated with individual differences in intrapersonal functions, the current study asks whether emotion differentiation is also related to interpersonal skills. Specifically, we examined whether individuals who are high in emotion differentiation would be more accurate in recognizing others' emotional expressions. We report two studies in which we used an established paradigm tapping negative emotion differentiation and several emotion recognition tasks. In Study 1 (N = 363), we found that individuals high in emotion differentiation were more accurate in recognizing others' emotional facial expressions. Study 2 (N = 217) replicated this finding using emotion recognition tasks with varying amounts of emotional information. These findings suggest that the knowledge we use to understand our own emotional experience also helps us understand the emotions of others.


2021 ◽  
Author(s):  
Christian Mancini ◽  
Luca Falciati ◽  
Claudio Maioli ◽  
Giovanni Mirabella

The ability to generate appropriate responses, especially in social contexts, requires integrating emotional information with ongoing cognitive processes. In particular, inhibitory control plays a crucial role in social interactions, preventing the execution of impulsive and inappropriate actions. In this study, we focused on the impact of facial emotional expressions on inhibition. Research in this field has provided highly mixed results. In our view, a crucial factor explaining such inconsistencies is the task-relevance of the emotional content of the stimuli. To clarify this issue, we gave two versions of a Go/No-go task to healthy participants. In the emotional version, participants had to withhold a reaching movement at the presentation of emotional facial expressions (fearful or happy) and move when neutral faces were shown. The same pictures were displayed in the other version, but participants had to act according to the actor's gender, ignoring the emotional valence of the faces. We found that happy expressions impaired inhibitory control relative to fearful expressions, but only when they were relevant to the participants' goal. We interpret these results as suggesting that facial emotions do not influence behavioral responses automatically; rather, they do so only when they are intrinsically germane to ongoing goals.


2021 ◽  
Vol 43 (2) ◽  
pp. 179-198
Author(s):  
Klaus Schwarzfischer

Summary Methodological problems often arise when a special case is confused with the general principle. Thus you will find affordances only for 'artifacts' if you restrict the analysis to 'artifacts'. The general principle, however, is an 'invitation character' that triggers an action. Consequently, an action-theoretical approach, known in cognitive science as the 'pragmatic turn', is recommended. According to this approach, the human being is not a passive-receptive being but actively produces those action effects that open up the world to us (through 'active inferences'). This 'ideomotor approach' focuses on so-called 'epistemic actions', which guide our perception as conscious and unconscious cognitions. Owing to 'embodied cognition', the body itself is assigned an indispensable role. The action-theoretical approach of 'enactive cognition' allows every form to be consistently processualized. Thus, each 'Gestalt' is understood as the process result of interlocking cognitions of 'forward modelling' (which produces anticipations and enables prognoses) and 'inverse modelling' (which forms hypotheses about genesis and causality). As can be shown, these cognitions are fed by previous experiences of real interaction, which later give way to a mental trial action that is highly automated and can therefore take place unconsciously. It is central that every object may have affordances that call for instrumental or epistemic action. In the simplest case, it is the body and facial expressions of our counterpart that can be understood as a question and provoke an answer/reaction. Thus, emotion is not only to be understood as expression/output according to the scheme 'input-processing-output', but itself acts as a provocative act/input. Consequently, artifacts are neither necessary nor sufficient conditions for affordances. Rather, affordances exist in all areas of cognition, from enactive cognition to social cognition.


2020 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Jack Dylan Moore ◽  
Sarah Hendry ◽  
Felicity Wolohan

The emotional basis of cognitive control has been investigated in the flanker task with various procedures and materials across different studies. The present study examined the issue with the same flanker task but with different types of emotional stimuli and designs. In seven experiments, the flanker effect and its sequential modulation according to the preceding trial type were assessed. Experiments 1 and 2 used affective pictures and emotional facial expressions as emotional stimuli, and positive and negative stimuli were intermixed. There was little evidence that emotional stimuli influenced cognitive control. Experiments 3 and 4 used the same affective pictures and facial expressions, but positive and negative stimuli were separated between different participant groups. Emotional stimuli reduced the flanker effect as well as its sequential modulation, regardless of valence. Experiments 5 and 6 used affective pictures but manipulated the arousal and valence of stimuli orthogonally. The results did not replicate the reduced flanker effect or sequential modulation by valence, nor did they show consistent effects of arousal. Experiment 7 used a mood induction technique and showed that sequential modulation was positively correlated with valence rating (the higher, the more positive) but negatively correlated with arousal rating. These results are inconsistent with several previous findings and are difficult to reconcile within a single theoretical framework, confirming the elusive nature of the emotional basis of cognitive control in the flanker task.
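The two indices assessed in each of these experiments, the flanker effect and its sequential modulation by the preceding trial type, are plain reaction-time differences. A minimal sketch with made-up mean RTs (cell names and values are illustrative only, not data from the study):

```python
from statistics import mean

# Illustrative (fabricated) mean reaction times in ms, keyed by
# (previous-trial congruency, current-trial congruency).
rt = {
    ("congruent", "congruent"): 430.0,
    ("congruent", "incongruent"): 505.0,    # large flanker effect after congruent
    ("incongruent", "congruent"): 445.0,
    ("incongruent", "incongruent"): 490.0,  # smaller effect after incongruent
}

# Flanker effect: incongruent minus congruent RT, split by preceding trial.
effect_after_congruent = rt[("congruent", "incongruent")] - rt[("congruent", "congruent")]
effect_after_incongruent = rt[("incongruent", "incongruent")] - rt[("incongruent", "congruent")]

# Overall flanker effect, and its sequential (conflict-adaptation) modulation.
flanker_effect = mean([effect_after_congruent, effect_after_incongruent])
sequential_modulation = effect_after_congruent - effect_after_incongruent

print(flanker_effect, sequential_modulation)  # prints 60.0 30.0
```

A positive `sequential_modulation` (a smaller flanker effect after incongruent trials) is the signature of conflict adaptation; a "reduced sequential modulation" means this difference shrinks toward zero.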


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ami Cohen ◽  
Kfir Asraf ◽  
Ivgeny Saveliev ◽  
Orrie Dan ◽  
Iris Haimov

Abstract The ability to recognize emotions from facial expressions is essential to the development of complex social cognition behaviors, and impairments in this ability are associated with poor social competence. This study aimed to examine the effects of sleep deprivation on the processing of emotional facial expressions and nonfacial stimuli in young adults with and without attention-deficit/hyperactivity disorder (ADHD). Thirty-five men (mean age 25.4) with (n = 19) and without (n = 16) ADHD participated in the study. During the five days preceding the experimental session, the participants were required to sleep at least seven hours per night (23:00/24:00–7:00/9:00) and their sleep was monitored via actigraphy. On the morning of the experimental session, the participants completed a 4-stimulus visual oddball task combining facial and nonfacial stimuli, and repeated it after 25 h of sustained wakefulness. At baseline, both study groups performed more poorly in response to facial than to non-facial target stimuli on all indices of the oddball task, with no differences between the groups. Following sleep deprivation, rates of omission errors, commission errors and reaction time variability increased significantly in the ADHD group but not in the control group. Time and target type (face/non-face) did not have an interactive effect on any index of the oddball task. Young adults with ADHD are more sensitive to the negative effects of sleep deprivation on attentional processes, including those related to the processing of emotional facial expressions. As poor sleep and excessive daytime sleepiness are common in individuals with ADHD, it is plausible that poor sleep quality and quantity play an important role in the cognitive functioning deficits, including deficits in the processing of emotional facial expressions, that are associated with ADHD.
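The oddball indices mentioned here (omission errors, commission errors, reaction-time variability) can each be computed from a per-trial log. A minimal sketch over fabricated trials (the log format is a hypothetical simplification, not the study's actual data structure):

```python
from statistics import stdev

# Illustrative (fabricated) trial log: (is_target, responded, rt_ms or None).
trials = [
    (True, True, 420.0),
    (True, False, None),   # omission: failed to respond to a target
    (False, True, 390.0),  # commission: responded to a non-target
    (False, False, None),
    (True, True, 510.0),
    (True, True, 465.0),
]

omissions = sum(1 for target, responded, _ in trials if target and not responded)
commissions = sum(1 for target, responded, _ in trials if not target and responded)

# Reaction-time variability: standard deviation of correct-target RTs.
target_rts = [rt for target, responded, rt in trials if target and responded]
rt_variability = stdev(target_rts)

print(omissions, commissions, round(rt_variability, 1))  # prints 1 1 45.0
```

Sleep deprivation increasing all three indices in the ADHD group corresponds to more `omissions`, more `commissions`, and a larger `rt_variability` in the post-deprivation log.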


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify whether participants' facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, neutral facial expressions were frequent compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by "happy" and "sad," seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.

