Cross-modal perception (face and voice) in emotions. ERPs and behavioural measures

Author(s):  
Michela Balconi ◽  
Alba Carrera

Emotion decoding constitutes a case of multimodal processing of cues from multiple channels. Previous behavioural and neuropsychological studies indicated that, when emotions must be decoded on the basis of multiple perceptual cues, cross-modal integration takes place. The present study investigated the simultaneous processing of emotional tone of voice and emotional facial expression by means of event-related potentials (ERPs), across a broad range of emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual stimuli (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N = 30) were required to process the stimuli and to indicate their comprehension via a Stim pad. ERP variations and behavioural data (response times, RTs) were submitted to repeated-measures analysis of variance (ANOVA). Two time intervals (150-250 and 250-350 ms post-stimulus) were considered in order to explore the ERP variations. The ANOVA revealed two distinct ERP effects with different cognitive functions: a negative deflection (N2) with a more anterior distribution (Fz), and a positive deflection (P2) with a more posterior distribution. The N2 may be considered a marker of emotional content (sensitive to the type of emotion), whereas the P2 may represent a marker of cross-modal integration, as it varied as a function of the congruous/incongruous condition, with a higher peak for congruous than for incongruous stimuli. Finally, an RT reduction was found in the congruous condition for some emotion types (i.e., sadness), with an inverted effect for others (i.e., fear, anger, and surprise).
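The abstract does not include analysis code; the following is a minimal sketch, under stated assumptions, of how mean ERP amplitudes in the two reported windows (150-250 and 250-350 ms) could be extracted and submitted to a repeated-measures ANOVA on the congruence factor. The data layout, variable names, and sampling rate are assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): mean ERP amplitude in two post-stimulus
# windows, followed by a repeated-measures ANOVA on the congruence factor.
# Assumes `epochs` is a dict mapping (subject, congruence) -> 1-D ERP array (µV)
# sampled at `sfreq` Hz with stimulus onset at sample 0.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

sfreq = 500  # assumed sampling rate (Hz)
windows = {"N2_150_250": (0.150, 0.250), "P2_250_350": (0.250, 0.350)}

def mean_amplitude(erp, t_start, t_end, sfreq):
    """Average amplitude of a single-channel ERP within a latency window."""
    i0, i1 = int(t_start * sfreq), int(t_end * sfreq)
    return float(np.mean(erp[i0:i1]))

rows = []
for (subject, congruence), erp in epochs.items():
    for label, (t0, t1) in windows.items():
        rows.append({"subject": subject, "congruence": congruence,
                     "window": label, "amp": mean_amplitude(erp, t0, t1, sfreq)})
df = pd.DataFrame(rows)

# One repeated-measures ANOVA per window, congruence as within-subject factor
for label in windows:
    sub = df[df["window"] == label]
    print(label, AnovaRM(sub, depvar="amp", subject="subject",
                         within=["congruence"]).fit().anova_table)
```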

2021 ◽  
Vol 12 ◽  
Author(s):  
Yutong Liu ◽  
Huini Peng ◽  
Jianhui Wu ◽  
Hongxia Duan

Background: Individuals exposed to childhood maltreatment present with deficits in emotional processing in later life. Most studies have focused mainly on childhood physical or sexual abuse; however, childhood emotional abuse, a core issue underlying different forms of childhood maltreatment, has received relatively little attention. The current study explored whether childhood emotional abuse is related to impaired processing of emotional facial expressions in healthy young men. Methods: Emotional facial processing was investigated with a classical gender-discrimination task while event-related potential (ERP) data were collected. Childhood emotional abuse was assessed with the Childhood Trauma Questionnaire (CTQ) in 60 healthy young men. The relationships between the emotional abuse score and the behavioral and ERP indices of emotional facial expression processing (angry, disgusted, and happy faces) were explored. Results: Participants with a higher score of childhood emotional abuse responded faster at the behavioral level and showed a smaller P2 amplitude at the neural level when processing disgusted faces compared to neutral faces. Discussion: Individuals with a higher level of childhood emotional abuse may quickly identify negative faces while consuming fewer cognitive resources, suggesting altered processing of emotional facial expressions in young men with a higher level of childhood emotional abuse.
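As a rough illustration of the reported brain-behavior relationship, the sketch below correlates CTQ emotional-abuse scores with disgust-minus-neutral differences in P2 amplitude and reaction time. All array names are hypothetical per-participant values; the authors' actual statistical model is not specified in the abstract.

```python
# Minimal sketch (not the authors' pipeline): correlating CTQ emotional-abuse
# scores with disgust-minus-neutral P2 amplitude and RT differences.
# `ctq_abuse`, `p2_disgust`, `p2_neutral`, `rt_disgust`, `rt_neutral` are
# assumed 1-D arrays with one value per participant (N = 60).
import numpy as np
from scipy.stats import pearsonr

p2_diff = np.asarray(p2_disgust) - np.asarray(p2_neutral)   # µV
rt_diff = np.asarray(rt_disgust) - np.asarray(rt_neutral)   # ms

r_p2, p_p2 = pearsonr(ctq_abuse, p2_diff)
r_rt, p_rt = pearsonr(ctq_abuse, rt_diff)
print(f"CTQ vs P2 (disgust - neutral): r = {r_p2:.2f}, p = {p_p2:.3f}")
print(f"CTQ vs RT (disgust - neutral): r = {r_rt:.2f}, p = {p_rt:.3f}")
```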


2015 ◽  
Vol 26 (04) ◽  
pp. 384-392 ◽  
Author(s):  
Yael Henkin ◽  
Yifat Yaar-Soffer ◽  
Lihi Givon ◽  
Minka Hildesheimer

Background: Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher, cortically driven mechanisms of binaural hearing is limited. Purpose: To explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. Research Design: The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple electrode sites while participants performed an oddball discrimination task consisting of natural speech syllables (/ka/ vs. /ta/) that differed by place of articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Study Sample: Fifteen young adults (21–32 yr; 6 females) with normal hearing sensitivity. Data Collection and Analysis: By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived, and the latencies and amplitudes of its components were measured. The maximal interaction was calculated by dividing the BIC amplitude by the summed right- and left-response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to analysis of variance with repeated measures testing the effects of listening condition and laterality. Results: Three consecutive BICs were identified at mean latencies of 129, 406, and 554 msec and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. Maximal interaction increased significantly with the progression of auditory processing from perceptual to postperceptual stages and amounted to 51%, 55%, and 75% of the sum of monaural responses for N1-BIC, P3-BIC, and LNC-BIC, respectively. Binaural interaction manifested in a decrease of the binaural response compared to the sum of the monaural responses. Furthermore, listening condition affected P3 latency only, whereas laterality effects manifested in enhanced N1 amplitudes at the left (T3) vs. right (T4) scalp electrode and in a greater left–right amplitude difference in the right compared to the left listening condition. Conclusions: The current AERP data provide evidence for the occurrence of cortical BICs during perceptual and postperceptual stages, presumably reflecting ongoing integration of information presented to the two ears at the final stages of auditory processing. Increasing binaural interaction with the progression of the auditory processing sequence (N1 to LNC) may support the notion that cortical BICs reflect interactions inherited from preceding stages of upstream processing together with discrete cortical neural activity involved in binaural processing. Clinically, an objective measure of cortical binaural processing has the potential to become an appealing neural correlate of binaural behavioral performance.
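The BIC derivation described above is a simple waveform subtraction, sketched below under stated assumptions: the BIC equals the sum of the monaural responses minus the binaural response, and the maximal interaction is the BIC amplitude expressed as a percentage of the summed-monaural amplitude. Waveform names, the sampling rate, and the example latency window are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code) of the BIC derivation:
# BIC = (monaural right + monaural left) - binaural; maximal interaction is the
# BIC peak relative to the summed-monaural peak within a latency window.
# `erp_right`, `erp_left`, `erp_binaural` are assumed 1-D grand-average
# waveforms (µV) for the target /ta/, time-locked to stimulus onset.
import numpy as np

def bic_waveform(erp_right, erp_left, erp_binaural):
    """Binaural interaction component: sum of monaural responses minus binaural response."""
    summed = np.asarray(erp_right) + np.asarray(erp_left)
    return summed - np.asarray(erp_binaural), summed

def maximal_interaction(bic, summed, sfreq, t_start, t_end):
    """BIC amplitude divided by the summed monaural amplitude in a window, in percent."""
    i0, i1 = int(t_start * sfreq), int(t_end * sfreq)
    bic_peak = np.max(np.abs(bic[i0:i1]))
    sum_peak = np.max(np.abs(summed[i0:i1]))
    return 100.0 * bic_peak / sum_peak

bic, summed = bic_waveform(erp_right, erp_left, erp_binaural)
# Assumed window around the reported N1-BIC latency (~129 msec), at 500 Hz sampling
print(f"N1-BIC maximal interaction: {maximal_interaction(bic, summed, 500, 0.08, 0.18):.0f}%")
```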


Author(s):  
Michela Balconi ◽  
Serafino Tutino

The aim of the study is to explore the iconic representation of frozen metaphors. Starting from the dichotomy between pragmatic models, for which metaphor is a semantic anomaly, and direct-access models, in which metaphor is treated as similar to literal language, the cognitive and linguistic processes involved in metaphor comprehension are analyzed using behavioural data (RTs) and neuropsychological indices (ERPs). Thirty-six subjects listened to 160 sentences, equally divided across the variables of content (metaphorical vs. literal) and congruousness (semantically anomalous vs. non-anomalous). The ERP analysis showed two negative deflections (the N3-N4 complex), indicating different cognitive processes involved in sentence comprehension. A repeated-measures ANOVA applied to peak amplitude and latency variables suggested the N4 as an index of semantic anomaly (incongruous stimuli), with a more posterior (Pz) localization, while the N3 was sensitive to the content variable: metaphorical sentences elicited a larger and more posteriorly distributed (Oz) deflection than literal ones. Combining these results with the behavioral data (no differences for metaphorical vs. literal sentences), it appears that metaphorical and literal decoding differ not in the cognitive complexity of decoding (direct vs. indirect access) but in representational format, which is more iconic for metaphor (as the N3 suggests).
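Peak-based ERP scoring of the kind reported here (peak amplitude and latency of negative deflections such as the N3 and N4) can be sketched as below. The waveform name, sampling rate, and latency windows are assumptions; the abstract does not specify the windows actually used.

```python
# Minimal sketch (not the authors' code): peak amplitude and latency extraction
# for negative deflections such as the N3-N4 complex described above.
# `erp` is an assumed 1-D single-channel waveform (µV) sampled at `sfreq` Hz,
# with stimulus onset at sample 0.
import numpy as np

def negative_peak(erp, sfreq, t_start, t_end):
    """Return (amplitude in µV, latency in ms) of the most negative point in a window."""
    i0, i1 = int(t_start * sfreq), int(t_end * sfreq)
    idx = i0 + int(np.argmin(erp[i0:i1]))
    return float(erp[idx]), 1000.0 * idx / sfreq

# Illustrative windows only; the actual analysis windows are not given in the abstract.
n3_amp, n3_lat = negative_peak(erp, 500, 0.250, 0.350)
n4_amp, n4_lat = negative_peak(erp, 500, 0.350, 0.500)
print(f"N3: {n3_amp:.1f} µV at {n3_lat:.0f} ms; N4: {n4_amp:.1f} µV at {n4_lat:.0f} ms")
```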


2019 ◽  
Author(s):  
Sebastian Schindler ◽  
Maximilian Bruchmann ◽  
Bettina Gathmann ◽  
Robert Moeck ◽ 
Thomas Straube

Emotional facial expressions lead to modulations of early event-related potentials (ERPs). However, it has so far remained unclear to what extent these modulations represent face-specific effects rather than differences in low-level visual features, and to what extent they depend on available processing resources. To examine these questions, we conducted two preregistered independent experiments (N = 40 in each experiment) using different variants of a novel task that manipulates peripheral perceptual load across levels while keeping overall visual stimulation constant. Centrally, task-irrelevant angry, neutral, and happy faces and their Fourier phase-scrambled versions, which preserved low-level visual features, were presented. The results of both studies showed load-independent P1 and N170 emotion effects. Importantly, Bayesian analyses confirmed that these emotion effects were face-independent for the P1 but not for the N170 component. We conclude that, first, ERP modulations during the P1 interval depend strongly on low-level visual information, while the emotional N170 modulation requires the processing of figural facial features; second, both P1 and N170 modulations appear to be immune to a large range of variations in perceptual load.
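One way to run a Bayesian check of whether an emotion effect depends on intact faces, sketched below under stated assumptions, is a Bayesian paired comparison of the emotion difference wave for intact versus phase-scrambled stimuli. This is not the preregistered analysis; the array names, the chosen contrast (angry minus neutral), and the use of pingouin's default Bayes factor are assumptions.

```python
# Minimal sketch (not the preregistered analysis): Bayesian paired comparison of
# an emotion effect (angry minus neutral mean amplitude) between intact and
# phase-scrambled faces for one component (P1 or N170).
# Arrays are assumed per-participant mean amplitudes (N = 40).
import numpy as np
import pingouin as pg

emotion_effect_intact = np.asarray(angry_intact) - np.asarray(neutral_intact)
emotion_effect_scrambled = np.asarray(angry_scrambled) - np.asarray(neutral_scrambled)

res = pg.ttest(emotion_effect_intact, emotion_effect_scrambled, paired=True)
# Informally: a large BF10 favors a face-dependent emotion effect,
# a BF10 well below 1 favors face-independence.
print(res[["T", "p-val", "BF10"]])
```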


2021 ◽  
pp. 030573562097869
Author(s):  
Alice Mado Proverbio ◽  
Francesca Russo

We investigated through electrophysiological recordings how music-induced emotions are recognized and combined with the emotional content of written sentences. Twenty-four sad, joyful, and frightening musical tracks were presented to 16 participants reading 270 short sentences conveying a sad, joyful, or frightening emotional meaning. Audiovisual stimuli could be emotionally congruent or incongruent with each other; participants were asked to pay attention and respond to filler sentences containing city names, while ignoring the rest. The amplitude values of event-related potentials (ERPs) were subjected to repeated-measures ANOVAs. Distinct electrophysiological markers were identified for the processing of stimuli inducing fear (N450, either linguistic or musical), for language-induced sadness (P300), and for joyful music (positive P2 and LP potentials). The music/language emotional discordance elicited a large N400 mismatch response (p = .032). Its strongest intracranial source was the right superior temporal gyrus (STG), an area devoted to the multisensory integration of emotions. The results suggest that music can communicate emotional meaning as distinctively as language.
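The N400 mismatch effect reported here is, at its core, a congruent-versus-incongruent amplitude comparison; a minimal sketch of that contrast is shown below. The window boundaries, sampling rate, and array names are assumptions, and the paired t-test stands in for the authors' repeated-measures ANOVA.

```python
# Minimal sketch (not the authors' analysis): the N400 congruence effect as the
# difference between incongruent and congruent mean amplitudes in an assumed
# 350-550 ms window, tested with a paired comparison across participants.
# `erp_congruent`, `erp_incongruent` are assumed arrays shaped (n_subjects, n_samples).
import numpy as np
from scipy.stats import ttest_rel

sfreq = 500                                        # assumed sampling rate (Hz)
i0, i1 = int(0.350 * sfreq), int(0.550 * sfreq)    # assumed N400 window

congr = np.asarray(erp_congruent)[:, i0:i1].mean(axis=1)
incongr = np.asarray(erp_incongruent)[:, i0:i1].mean(axis=1)

t, p = ttest_rel(incongr, congr)
print(f"N400 congruence effect: {np.mean(incongr - congr):.2f} µV, t = {t:.2f}, p = {p:.3f}")
```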


Author(s):  
Alice M Proverbio

A well-established neuroimaging literature predicts a right-sided asymmetry in the activation of face-devoted areas such as the fusiform gyrus (FG) and its resulting M/N170 response during face processing. However, the face-related response sometimes appears to be bihemispheric. A few studies have argued that bilaterality depends on the sex composition of the sample. To shed light on this matter, two meta-analyses were conducted starting from a large initial database of 250 peer-reviewed ERP (event-related potential)/MEG (magnetoencephalography) articles. Paper coverage was from 1985 to 2020. Thirty-four articles met the inclusion criteria of a sufficiently large and balanced sample size, strictly right-handed and healthy participants aged 18–35 years, and N170 measurements in response to neutral front-view faces at left and right occipito-temporal sites. The data of 817 healthy adults (414 males, 403 females) were subjected to repeated-measures analyses of variance. The results of the statistical analyses of data from 17 independent studies (from Asia, Europe, and America) seem to robustly indicate the presence of a sex difference in the way the two cerebral hemispheres process facial information in humans, with a marked right-sided asymmetry of bioelectrical activity in males and bilateral or left-sided activity in females.
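The central hemisphere-by-sex question maps onto a mixed ANOVA with hemisphere as a within-subject factor and sex as a between-subject factor; a minimal sketch is given below. The table layout and column names are assumptions, and this is an illustration of the statistical design rather than the published meta-analytic procedure.

```python
# Minimal sketch (not the published meta-analysis): mixed ANOVA on N170
# amplitudes with hemisphere (left/right occipito-temporal site) as a
# within-subject factor and sex as a between-subject factor.
# `df` is an assumed long-format table with columns: subject, sex, hemisphere, n170_amp.
import pingouin as pg

aov = pg.mixed_anova(data=df, dv="n170_amp", within="hemisphere",
                     subject="subject", between="sex")
# The hemisphere x sex interaction term is the test of a sex difference in lateralization
print(aov[["Source", "F", "p-unc", "np2"]])
```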

