Multimodal recognition of emotions in music and language

2021 ◽  
pp. 030573562097869
Author(s):  
Alice Mado Proverbio ◽  
Francesca Russo

We investigated through electrophysiological recordings how music-induced emotions are recognized and combined with the emotional content of written sentences. Twenty-four sad, joyful, and frightening musical tracks were presented to 16 participants reading 270 short sentences conveying a sad, joyful, or frightening emotional meaning. Audiovisual stimuli could be emotionally congruent or incongruent with each other; participants were asked to pay attention and respond to filler sentences containing cities’ names, while ignoring the rest. The amplitude values of event-related potentials (ERPs) were subjected to repeated measures ANOVAs. Distinct electrophysiological markers were identified for the processing of stimuli inducing fear (N450, either linguistic or musical), for language-induced sadness (P300) and for joyful music (positive P2 and LP potentials). The music/language emotional discordance elicited a large N400 mismatch response (p = .032). Its stronger intracranial source was the right superior temporal gyrus (STG) devoted to multisensory integration of emotions. The results suggest that music can communicate emotional meaning as distinctively as language.

2006 ◽  
Vol 18 (5) ◽  
pp. 689-700 ◽  
Author(s):  
M. Sabri ◽  
E. Liebenthal ◽  
E. J. Waldron ◽  
D. A. Medler ◽  
J. R. Binder

Little is known about the neural mechanisms that control attentional modulation of deviance detection in the auditory modality. In this study, we manipulated the difficulty of a primary task to test the relation between task difficulty and the detection of infrequent, task-irrelevant deviant (D) tones (1300 Hz) presented among repetitive standard (S) tones (1000 Hz). Simultaneous functional magnetic resonance imaging (fMRI)/event-related potentials (ERPs) were recorded from 21 subjects performing a two-alternative forced-choice duration discrimination task (short and long tones of equal probability). The duration of the short tone was always 50 msec. The duration of the long tone was 100 msec in the easy task and 60 msec in the difficult task. As expected, response accuracy decreased and response time (RT) increased in the difficult compared with the easy task. Performance was also poorer for D than for S tones, indicating distraction by task-irrelevant frequency information on trials involving D tones. In the difficult task, an amplitude increase was observed in the difference waves for N1 and P3a, ERP components associated with increased attention to deviant sounds. The mismatch negativity (MMN) response, associated with passive deviance detection, was larger in the easy task, demonstrating the susceptibility of this component to attentional manipulations. The fMRI contrast D > S in the difficult task revealed activation in the right superior temporal gyrus (STG) extending ventrally into the superior temporal sulcus, suggesting this region's involvement in involuntary attention shifting toward unattended, infrequent sounds. Conversely, passive deviance detection, as reflected by the MMN, was associated with more dorsal activation in the STG.
These results are consistent with the view that the dorsal STG region is responsive to mismatches between the memory trace of the standard and the incoming deviant sound, whereas the ventral STG region is activated by involuntary shifts of attention to task-irrelevant auditory features.
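The deviant-minus-standard difference wave and the MMN peak measurement described in this abstract can be sketched in a few lines. This is a minimal illustration on synthetic single-channel data; the function names, latency window, and data are assumptions for illustration, not the authors' actual analysis pipeline.

```python
import numpy as np

def difference_wave(deviant_erp, standard_erp):
    """Deviant-minus-standard difference wave, the basis for isolating
    deviance-related components such as the MMN."""
    return np.asarray(deviant_erp, dtype=float) - np.asarray(standard_erp, dtype=float)

def mmn_peak(diff_wave, times, window=(0.10, 0.25)):
    """Most negative deflection of the difference wave inside a latency
    window; the MMN typically peaks ~100-250 ms after deviance onset.
    Returns (amplitude, latency in seconds)."""
    times = np.asarray(times, dtype=float)
    mask = (times >= window[0]) & (times <= window[1])
    segment = diff_wave[mask]
    i = int(np.argmin(segment))  # index of the most negative sample
    return segment[i], times[mask][i]
```

In practice the two inputs would be trial-averaged ERPs per condition; comparing the peak across easy and difficult tasks would then mirror the attentional manipulation reported above.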


2021 ◽  
Vol 11 (1) ◽  
pp. 48
Author(s):  
John Stein

(1) Background—The magnocellular hypothesis proposes that impaired development of the visual timing systems in the brain that are mediated by magnocellular (M-) neurons is a major cause of dyslexia. Their function can now be assessed quite easily by analysing averaged visually evoked event-related potentials (VERPs) in the electroencephalogram (EEG). Such analysis might provide a useful, objective biomarker for diagnosing developmental dyslexia. (2) Methods—In adult dyslexics and normally reading controls, we recorded steady state VERPs, and their frequency content was computed using the fast Fourier transform. The visual stimulus was a black and white checkerboard whose checks reversed contrast every 100 ms. M- cells respond to this stimulus mainly at 10 Hz, whereas parvocellular (P-) cells do so at 5 Hz. Left and right visual hemifields were stimulated separately in some subjects to see if there were latency differences between the M- inputs to the right vs. left hemispheres, and these were compared with the subjects’ handedness. (3) Results—Controls demonstrated a larger 10 Hz than 5 Hz fundamental peak in the spectra, whereas the dyslexics showed the reverse pattern. The ratio of subjects’ 10/5 Hz amplitudes predicted their reading ability. The latency of the 10 Hz peak was shorter during left than during right hemifield stimulation, and shorter in controls than in dyslexics. The latter difference correlated weakly with handedness. (4) Conclusion—Steady state visual ERPs may conveniently be used to identify developmental dyslexia. However, due to the limited number of subjects in each sub-study, these results need confirmation.
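The spectral measure at the heart of this abstract, an FFT of the steady-state VERP and the ratio of the 10 Hz (magnocellular) to 5 Hz (parvocellular) amplitudes, can be sketched as below. The function name and parameters are illustrative assumptions, not the authors' code; a real pipeline would epoch, average, and window the EEG before transforming it.

```python
import numpy as np

def ssvep_amplitude_ratio(signal, fs, f_magno=10.0, f_parvo=5.0):
    """Single-sided amplitude spectrum of a steady-state VERP and the
    10 Hz / 5 Hz amplitude ratio used here as a putative M-/P- index."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n  # per-bin amplitude (A/2 for a pure sinusoid)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    a_magno = spectrum[np.argmin(np.abs(freqs - f_magno))]  # nearest-bin amplitude at 10 Hz
    a_parvo = spectrum[np.argmin(np.abs(freqs - f_parvo))]  # nearest-bin amplitude at 5 Hz
    return a_magno, a_parvo, a_magno / a_parvo
```

On the abstract's account, a ratio above 1 (dominant 10 Hz peak) would be the control-like pattern, and a ratio below 1 the dyslexic-like pattern.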


2002 ◽  
Vol 13 (01) ◽  
pp. 001-013 ◽  
Author(s):  
James Jerger ◽  
Rebecca Estes

We studied auditory evoked responses to the apparent movement of a burst of noise in the horizontal plane. Event-related potentials (ERPs) were measured in three groups of participants: children in the age range from 9 to 12 years, young adults in the age range from 18 to 34 years, and seniors in the age range from 65 to 80 years. The topographic distribution of grand-averaged ERP activity was substantially greater over the right hemisphere in children and seniors but slightly greater over the left hemisphere in young adults. This finding may be related to age-related differences in the extent to which judgments of sound movement are based on displacement versus velocity information.


2005 ◽  
Vol 19 (3) ◽  
pp. 204-215 ◽  
Author(s):  
Thierry Baccino ◽  
Yves Manunta

Abstract. This paper presents a new methodology for studying cognition, which combines eye movements (EM) and event-related potentials (ERP) to track the cognitive processes that occur during a single eye fixation. This technique, called eye-fixation-related potentials (EFRP), has the advantage of coupling accurate time measures from ERPs and the location of the eye on the stimulus, so it can be used to disentangle perceptual/attentional/cognitive factors affecting reading. We tested this new technique to describe the controversial parafoveal-on-foveal effects on reading, which concern the question of whether two consecutive words are processed in parallel or sequentially. The experiment directly addressed this question by looking at whether semantic relatedness on a target word in a reading-like situation might affect the processing of a prime word. Three word-pair conditions were tested: a semantically associated target word (horse-mare), a semantically nonassociated target word (horse-table) and a nonword (horse-twsui); EFRPs were compared for all conditions. The results revealed that early ERP components differentiated word and nonword processing within 119 ms postfixation (N1 component). Moreover, the amplitude of the right centrofrontal P140 varied as a function of word type, being larger in response to nonassociated words than to nonwords. This component might index a spatial attention shift to the target word and its visual categorization, being highly sensitive to orthographic regularity and “ill-formedness” of words. The subsequent P2 component (peaking at 215 ms) differentiated associated words and nonassociated words, which can account for the semantic parafoveal effect. The EFRP technique, therefore, appears to be fruitful for establishing a time-line of early cognitive processes during reading.


1999 ◽  
Vol 11 (6) ◽  
pp. 598-609 ◽  
Author(s):  
Charan Ranganath ◽  
Ken A. Paller

Previous neuropsychological and neuroimaging results have implicated the prefrontal cortex in memory retrieval, although its precise role is unclear. In the present study, we examined patterns of brain electrical activity during retrieval of episodic and semantic memories. In the episodic retrieval task, participants retrieved autobiographical memories in response to event cues. In the semantic retrieval task, participants generated exemplars in response to category cues. Novel sounds presented intermittently during memory retrieval elicited a series of brain potentials including one identifiable as the P3a potential. Based on prior research linking P3a with novelty detection and with the frontal lobes, we predicted that P3a would be reduced to the extent that novelty detection and memory retrieval interfere with each other. Results during episodic and semantic retrieval tasks were compared to results during a task in which subjects attended to the auditory stimuli. P3a amplitudes were reduced during episodic retrieval, particularly at right lateral frontal scalp locations. A similar but less lateralized pattern of frontal P3a reduction was observed during semantic retrieval. These findings support the notion that the right prefrontal cortex is engaged in the service of memory retrieval, particularly for episodic memories.


2019 ◽  
Author(s):  
Rémy Masson ◽  
Yohana Lévêque ◽  
Geneviève Demarquay ◽  
Hesham ElShafei ◽  
Lesly Fornoni ◽  
...  

Abstract. Objectives: To evaluate alterations of top-down and/or bottom-up attention in migraine and their cortical underpinnings. Methods: 19 migraineurs between attacks and 19 matched control participants performed a task evaluating jointly top-down and bottom-up attention, using visually-cued target sounds and unexpected task-irrelevant distracting sounds. Behavioral responses and MEG/EEG were recorded. Event-related potentials and fields (ERPs/ERFs) were processed, and source reconstruction was applied to ERFs. Results: At the behavioral level, neither top-down nor bottom-up attentional processes appeared to be altered in migraine. However, migraineurs presented heightened evoked responses following distracting sounds (orienting component of the N1 and Re-Orienting Negativity, RON) and following target sounds (orienting component of the N1), concomitant with an increased recruitment of the right temporo-parietal junction. They also displayed an increased effect of the cue informational value on target processing, resulting in the elicitation of a negative difference (Nd). Conclusions: Migraineurs appear to display an increased bottom-up orienting response to all incoming sounds and an enhanced recruitment of top-down attention. Significance: The interictal state in migraine is characterized by an exacerbation of the orienting response to attended and unattended sounds. These attentional alterations might contribute to the peculiar vulnerability of the migraine brain to all incoming stimuli. Highlights: Migraineurs performed as well as healthy participants in an attention task; however, EEG markers of both bottom-up and top-down attention are increased. Migraine is also associated with facilitated recruitment of the right temporo-parietal junction.


2020 ◽  
Author(s):  
Chenglong Cao ◽  
Jian Song ◽  
Binbin Liu ◽  
Jianren Yue ◽  
Yuzhao Lu ◽  
...  

Abstract. Background: Cognitive impairments have been reported in patients with pituitary adenoma; however, little is known about emotional stimulus processing in these patients. We therefore investigated whether emotional processing is dysfunctional in pituitary patients by recording and analyzing the late positive potential (LPP) elicited by affective stimuli. Methods: Emotional stimulus processing was evaluated via the LPP of event-related potentials (ERPs) recorded at central-parietal electrode sites (C3, Cz, C4, P3, Pz, P4) in patients and healthy controls (HCs). Results: For negative stimuli, LPP amplitude was 2.435 ± 0.419 μV for HCs and 0.656 ± 0.427 μV for the patient group (p = 0.005). For positive stimuli, the elicited potential was 1.450 ± 0.316 μV for HCs and 0.495 ± 0.322 μV for the patient group (p = 0.040). Moreover, the largest between-group difference in LPP amplitude was over the right parietal region: at the P4 site, the elicited potential was 1.993 ± 0.299 μV for HCs and 0.269 ± 0.305 μV for the patient group (p = 0.001). Conclusion: Emotional stimulus processing is dysfunctional in pituitary adenoma patients. Our results provide electrophysiological evidence of a cognitive dysfunction that warrants intervention in these patients.


2019 ◽  
Vol 9 (12) ◽  
pp. 362
Author(s):  
Antonia M. Karellas ◽  
Paul Yielder ◽  
James J. Burkitt ◽  
Heather S. McCracken ◽  
Bernadette A. Murphy

Multisensory integration (MSI) is necessary for the efficient execution of many everyday tasks. Alterations in sensorimotor integration (SMI) have been observed in individuals with subclinical neck pain (SCNP). Altered audiovisual MSI has previously been demonstrated in this population using performance measures, such as reaction time. However, neurophysiological techniques have not been combined with performance measures in the SCNP population to determine differences in neural processing that may contribute to these behavioral characteristics. Electroencephalography (EEG) event-related potentials (ERPs) have been successfully used in recent MSI studies to show differences in neural processing between different clinical populations. This study combined behavioral and ERP measures to characterize MSI differences between healthy and SCNP groups. EEG was recorded as 24 participants performed 8 blocks of a simple reaction time (RT) MSI task, with each block consisting of 34 auditory (A), visual (V), and audiovisual (AV) trials. Participants responded to the stimuli by pressing a response key. Both groups responded fastest to the AV condition. The healthy group demonstrated significantly faster RTs for the AV and V conditions. There were significant group differences in neural activity from 100–140 ms post-stimulus onset, with the control group demonstrating greater MSI. Differences in brain activity and RT between individuals with SCNP and a control group indicate neurophysiological alterations in how individuals with SCNP process audiovisual stimuli. This suggests that SCNP alters MSI. This study presents novel EEG findings that demonstrate MSI differences in a group of individuals with SCNP.
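The behavioral side of the result above, faster responses in the audiovisual condition than in either unisensory condition, can be sketched with a per-condition reaction-time summary. The helper names and the toy data are hypothetical; the study's actual statistics were more involved than this.

```python
import numpy as np

def condition_means(rts, labels):
    """Mean reaction time (ms) for each stimulus condition (A, V, AV)."""
    rts = np.asarray(rts, dtype=float)
    labels = np.asarray(labels)
    return {c: float(rts[labels == c].mean()) for c in np.unique(labels)}

def redundancy_gain(means):
    """How much faster the audiovisual condition is than the faster
    unisensory condition; a positive value indicates multisensory
    facilitation."""
    return min(means["A"], means["V"]) - means["AV"]
```

Comparing this gain (and the underlying RT distributions) between groups is one simple way to quantify the kind of behavioral MSI difference the abstract reports.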


2020 ◽  
Vol 25 (5) ◽  
pp. 237-248
Author(s):  
Maojin Liang ◽  
Jiahao Liu ◽  
Yuexin Cai ◽  
Fei Zhao ◽  
Suijun Chen ◽  
...  

Objective: The present study investigated the characteristics of visual processing in the auditory-associated cortex in adults with hearing loss using event-related potentials. Methods: Ten subjects with bilateral postlingual hearing loss were recruited. Ten age- and sex-matched normal-hearing subjects were included as controls. Visual evoked potentials to “sound” and “non-sound” photos were recorded. The P170 response in the occipital area as well as N1 and N2 responses at FC3 and FC4 were analyzed. Results: Adults with hearing loss had higher P170 amplitudes, significantly higher N2 amplitudes, and shorter N2 latencies in response to “sound” and “non-sound” photo stimuli at both FC3 and FC4, with the exception of the N2 amplitude in response to “sound” photo stimuli at FC3. Further topographic mapping analysis revealed that patients showed a large difference between responses to “sound” and “non-sound” photos over the right frontotemporal area from approximately 200 to 400 ms. Source localization placed this difference in the middle frontal gyrus region (BA10) at around 266 ms. Conclusions: The significantly stronger responses to visual stimuli indicate enhanced visual processing in the auditory-associated cortex in adults with hearing loss, which may be attributed to cortical visual reorganization involving the right frontotemporal cortex.

