Causal relationship between the right auditory cortex and speech-evoked frequency-following response: Evidence from combined tDCS and EEG

2020 ◽  
Author(s):  
Guangting Mai ◽  
Peter Howell

Abstract
The speech-evoked frequency-following response (FFR) reflects the neural encoding of periodic speech information in the human auditory system. The FFR is of fundamental importance for pitch and speech perception and serves as a clinical biomarker for various auditory and language disorders. While the main neural source of the FFR is thought to lie in the auditory brainstem, recent studies have shown a cortical contribution to the FFR, predominantly in the right hemisphere. However, it is still unclear whether the auditory cortex and the FFR are causally related. The aim of this study was to establish this causal relationship using a combination of transcranial direct current stimulation (tDCS) and scalp-recorded electroencephalography (EEG). We applied tDCS over the left and right auditory cortices in right-handed, normal-hearing participants and examined the after-effects of tDCS on the FFR using EEG during monaural listening to a repeatedly presented speech syllable. Our results showed that: (1) before tDCS was applied, participants had greater FFR magnitude when they listened to speech with the left than with the right ear, illustrating right-lateralized hemispheric asymmetry for the FFR; (2) anodal and cathodal tDCS applied over the right, but not the left, auditory cortex significantly changed FFR magnitudes compared to sham stimulation; such after-effects occurred only when participants listened to speech with the left ear, emphasizing right auditory cortical contributions along the contralateral pathway. The current finding thus provides the first causal evidence validating the relationship between the right auditory cortex and the speech-evoked FFR and should significantly extend our understanding of speech encoding in the brain.
Significance Statement
The speech-evoked frequency-following response (FFR) is a neural activity that reflects the brain's encoding of periodic speech features. The FFR has great fundamental and clinical importance for auditory processing.
Whilst convention maintains that the FFR derives mainly from the brainstem, it has recently been argued that there are additional contributions from the auditory cortex. Using a combination of tDCS, which altered the neural excitability of the auditory cortices, and EEG recording, the present study provides the first evidence validating a causal relationship between the right auditory cortex and the speech-evoked FFR. The finding supports right-asymmetric auditory cortical contributions to the processing of speech periodicity and advances our understanding of how speech signals are encoded and analysed along the central auditory pathways.
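For context, FFR magnitude is commonly quantified as the spectral amplitude of the trial-averaged EEG at (or near) the stimulus fundamental frequency (f0). The NumPy sketch below illustrates the idea on simulated data; the function name, epoch length, and parameter values are illustrative assumptions, not the analysis pipeline of this study.

```python
import numpy as np

def ffr_magnitude(epochs, fs, f0, bandwidth=5.0):
    """Spectral magnitude of the FFR at the stimulus f0.

    epochs: array of shape (n_trials, n_samples) of baseline-corrected EEG.
    fs: sampling rate (Hz); f0: fundamental frequency of the speech syllable.
    Averaging across trials before the FFT keeps phase-locked (FFR) energy
    and cancels non-phase-locked noise.
    """
    avg = epochs.mean(axis=0)                        # trial-average waveform
    spectrum = np.abs(np.fft.rfft(avg)) / len(avg)   # amplitude spectrum
    freqs = np.fft.rfftfreq(len(avg), d=1.0 / fs)
    band = (freqs >= f0 - bandwidth) & (freqs <= f0 + bandwidth)
    return spectrum[band].max()                      # peak magnitude near f0

# Example: 100 simulated 1-s trials of a 120 Hz response buried in noise
fs, f0 = 2000.0, 120.0
t = np.arange(int(fs)) / fs
rng = np.random.default_rng(0)
trials = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 2.0, (100, t.size))
mag = ffr_magnitude(trials, fs, f0)
```

With 100 averaged trials the noise floor drops by a factor of 10, so the 120 Hz component (amplitude 0.5 in this normalization) dominates the band despite the per-trial noise.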

2018 ◽  
Author(s):  
Bratislav Mišić ◽  
Richard F. Betzel ◽  
Alessandra Griffa ◽  
Marcel A. de Reus ◽  
Ye He ◽  
...  

Converging evidence from activation, connectivity and stimulation studies suggests that auditory brain networks are lateralized. Here we show that these findings can be at least partly explained by the asymmetric network embedding of the primary auditory cortices. Using diffusion-weighted imaging in three independent datasets, we investigate the propensity for left and right auditory cortex to communicate with other brain areas by quantifying the centrality of the auditory network across a spectrum of communication mechanisms, from shortest path communication to diffusive spreading. Across all datasets, we find that the right auditory cortex is better integrated in the connectome, facilitating more efficient communication with other areas, with much of the asymmetry driven by differences in communication pathways to the opposite hemisphere. Critically, the primacy of the right auditory cortex emerges only when communication is conceptualized as a diffusive process, taking advantage of more than just the topologically shortest paths in the network. Altogether, these results highlight how the network configuration and embedding of a particular region may contribute to its functional lateralization.
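The contrast the study draws between shortest-path and diffusive communication can be illustrated with network communicability (the matrix exponential of the adjacency matrix), one standard measure that weights walks of all lengths rather than only the topologically shortest path. The NumPy/SciPy sketch below uses a toy graph; the graph and the choice of communicability as the diffusive measure are illustrative assumptions, not the study's connectome data or full spectrum of measures.

```python
import numpy as np
from scipy.linalg import expm

def communicability(A):
    """Network communicability (Estrada & Hatano): G = exp(A).

    G[i, j] sums walks of all lengths between nodes i and j, discounting a
    walk of length k by 1/k!, so it models diffusive spreading that exploits
    parallel and longer routes, not just the single shortest path.
    """
    return expm(np.asarray(A, dtype=float))

# Toy 5-node graph: node 2 is the hub bridging a triangle (0-1-2) and a
# chain (2-3-4), so it should carry the most diffusive communication.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

G = communicability(A)
strength = G.sum(axis=1)  # total communicability of each node
```

A region embedded like node 2, with many alternative routes, accrues high communicability strength even when hop counts are similar, which is the sense in which "better integration" can emerge only under diffusive communication models.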


2016 ◽  
Author(s):  
Emily B.J. Coffey ◽  
Gabriella Musacchia ◽  
Robert J. Zatorre

Abstract
The frequency-following response (FFR) is a measure of the brain's periodic sound encoding. It is of increasing importance for studying the human auditory nervous system due to numerous associations with auditory cognition and dysfunction. Although the FFR is widely interpreted as originating from brainstem nuclei, a recent study using magnetoencephalography (MEG) suggested that there is also a right-lateralized contribution from the auditory cortex at the fundamental frequency (Coffey et al., 2016c). Our objectives in the present work were to validate and better localize this result using a completely different neuroimaging modality, and to document the relationships between the FFR, the onset response, and cortical activity. Using a combination of electroencephalography, fMRI, and diffusion-weighted imaging, we show that activity in the right auditory cortex is related to individual differences in FFR-f0 strength, a finding that was replicated with two independent stimulus sets, with and without acoustic energy at the fundamental frequency. We demonstrate a dissociation between this FFR-f0-sensitive response in the right auditory cortex and an area in the left auditory cortex that is sensitive to individual differences in the timing of the initial response to sound onset. These relationships to timing and their lateralization are supported by parallels in the microstructure of the underlying white matter, implicating a mechanism involving neural conduction efficiency. These data confirm that the FFR has a cortical contribution, and suggest ways in which auditory neuroscience may be advanced by connecting early sound representation to measures of higher-level sound processing and cognitive function.
Significance Statement
The frequency-following response (FFR) is an electroencephalographic signal used to explore how the auditory system encodes temporal regularities in sound, and it is related to differences in auditory function between individuals.
It is known that brainstem nuclei contribute to the FFR, but recent findings of an additional cortical source are more controversial. Here, we use functional MRI to validate and extend the prediction, from magnetoencephalography data, of a right auditory cortex contribution to the FFR. We also demonstrate a dissociation between FFR-related cortical activity and activity, found in the left auditory cortex, that is related to the latency of the response to sound onset. The findings provide a clearer picture of the cortical processes underlying the analysis of sound features.


2005 ◽  
Vol 17 (10) ◽  
pp. 1519-1531 ◽  
Author(s):  
Kerstin Sander ◽  
Henning Scheich

Evidence suggests that animals process their own species-specific communication sounds predominantly in the left hemisphere. In contrast, processing linguistic aspects of human speech involves the left hemisphere, whereas processing some prosodic aspects of speech, as well as other not yet well-defined attributes of human voices, predominantly involves the right hemisphere. This leaves open the question of hemispheric processing of universal (species-specific) human vocalizations that are more directly comparable to animal vocalizations. The present functional magnetic resonance imaging study addresses this question. Twenty subjects listened to human laughing and crying presented either in an original or a time-reversed version while performing a pitch-shift detection task to control attention. Time-reversed presentation of these sounds is a suitable auditory control because it does not change the overall spectral content. The auditory cortex, amygdala, and insula in the left hemisphere were more strongly activated by original than by time-reversed laughing and crying. Thus, similar to speech, these nonspeech vocalizations involve predominantly left-hemisphere auditory processing. The functional data suggest that this lateralization effect is based more on acoustical similarities between speech and laughing or crying than on similarities in communicative function. Both the original and time-reversed laughing and crying more strongly activated the right insula, which may be compatible with its assumed role in emotional self-awareness.


2002 ◽  
Vol 87 (1) ◽  
pp. 423-433 ◽  
Author(s):  
André Brechmann ◽  
Frank Baumgart ◽  
Henning Scheich

Recognition of sound patterns must be largely independent of level and of masking or jamming background sounds. Frequency modulations (FM) are auditory patterns of relevance in numerous environmental sounds, species-specific vocalizations, and speech. Level-dependent activation of the human auditory cortex (AC) in response to a large set of upward and downward FM tones was studied with low-noise (48 dB) functional magnetic resonance imaging at 3 Tesla. A separate analysis of four territories of AC was performed in each individual brain, using a combination of anatomical landmarks and spatial activation criteria to distinguish them. Activation of territory T1b (including primary AC) showed the most robust level dependence over the large range of 48–102 dB, in terms of both activated volume and blood oxygen level-dependent (BOLD) signal intensity. The left nonprimary territory T2 also showed a good correlation of level with activated volume but, in contrast to T1b, not with BOLD signal intensity. These findings are compatible with level-coding mechanisms observed in animal AC. A systematic increase of activation with level was not observed for T1a (anterior to Heschl's gyrus) or T3 (on the planum temporale); thus these areas might not be specifically involved in processing the overall intensity of FM. The rostral territory T1a of the left hemisphere exhibited its highest activation when the FM sound level fell 12 dB below scanner noise. This supports the previously suggested special involvement of this territory in foreground-background decomposition tasks. Overall, the AC of the left hemisphere showed a stronger level dependence of signal intensity and activated volume than that of the right hemisphere, but side differences in signal intensity at given levels were lateralized to the right AC. This might point to an involvement of the right hemisphere in aspects of FM processing more specific than level coding.
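Upward and downward FM tones of the kind used as stimuli here can be synthesized as linear frequency sweeps. The NumPy sketch below shows the standard phase-continuous construction; the frequency range, duration, and sampling rate are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def fm_tone(f_start, f_end, duration, fs=44100.0):
    """Linear FM sweep synthesized with a continuous phase.

    Instantaneous frequency moves linearly from f_start to f_end; the phase
    is the running integral (cumulative sum) of instantaneous frequency,
    which avoids discontinuities (clicks) in the waveform.
    """
    t = np.arange(int(duration * fs)) / fs
    inst_freq = f_start + (f_end - f_start) * t / duration
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)

fs = 8000.0
upward = fm_tone(500.0, 1500.0, 0.5, fs)    # rising FM tone
downward = fm_tone(1500.0, 500.0, 0.5, fs)  # falling FM tone
```

An upward sweep's spectral peak shifts to higher frequencies over the course of the tone, which is what distinguishes it from the downward sweep with identical overall spectral content.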


2020 ◽  
Author(s):  
E. Brattico ◽  
A. Brusa ◽  
M.J. Dietz ◽  
T. Jacobsen ◽  
H.M. Fernandes ◽  
...  

Abstract
Evaluative beauty judgments are very common but, despite this commonality, are rarely studied in cognitive neuroscience. Here we investigated the neural and musical attributes of musical beauty using a naturalistic free-listening paradigm applied to behavioral and neuroimaging recordings and validated by experts' judgments. In Study 1, 30 Western healthy adult participants continuously rated the perceived beauty of three musical pieces using a motion sensor. This allowed us to identify the passages in the three musical pieces that were inter-subjectively judged as beautiful or ugly. This informed the analysis for Study 2, in which an additional 36 participants were recorded with functional magnetic resonance imaging (fMRI) while they listened attentively to the same musical pieces as in Study 1. In Study 3, to identify the musicological features characterizing the passages consistently rated as beautiful or ugly in Study 1, we collected post-hoc questionnaires from 12 music-composition experts. Results from Study 2 evidenced focal regional activity in the orbitofrontal cortex when listening to beautiful passages of music, irrespective of subjective reactions and individual listening biographies. In turn, the moments in the music consistently rated as ugly were associated with bilateral supratemporal activity. Effective connectivity analysis further revealed inhibition of auditory activation and neural communication with the orbitofrontal cortex, especially in the right hemisphere, during listening to beautiful musical passages, as opposed to intrinsic activation of the auditory cortices and decreased coupling to the orbitofrontal cortex during listening to ugly musical passages. The experts' questionnaires indicated that the beautiful passages were more melodic, calm, sad, slow, tonal, traditional, and simple than the negatively valenced ones.
In sum, we identified a neural mechanism for inter-subjective beauty judgments of music in the supratemporal-orbitofrontal circuit, irrespective of individual taste and listening biography. Furthermore, some invariance in the objective musical attributes of beautiful and ugly passages was observed. Future studies might address the generalizability of these findings to non-Western listeners.


2015 ◽  
Vol 26 (03) ◽  
pp. 311-324 ◽  
Author(s):  
Kamakshi V. Gopal ◽  
Binu P. Thomas ◽  
Deng Mao ◽  
Hanzhang Lu

Background: Tinnitus, or ringing in the ears, is an extremely common ear disorder. However, it is a phenomenon that is very poorly understood and has limited treatment options. Purpose: The goals of this case study were to identify whether the antioxidant acetyl-L-carnitine (ALCAR) provides relief from tinnitus, and whether subjective satisfaction after carnitine treatment is accompanied by changes in audiological and imaging measures. Research Design: Case study. Patient Case: A 41-yr-old female with a history of hearing loss and tinnitus was interested in exploring the benefits of antioxidant therapy in reducing her tinnitus. The patient was evaluated using a standard audiological/tinnitus test battery and magnetic resonance imaging (MRI) recordings before carnitine treatment. After her physician's approval, the patient took 500 mg of ALCAR twice a day for 30 consecutive days. The audiological and MRI measures were repeated after ALCAR treatment. Data Collection and Analysis: Pure-tone audiometry, tympanometry, distortion-product otoacoustic emissions, tinnitus questionnaires (Tinnitus Handicap Inventory and Tinnitus Reaction Questionnaire), auditory brainstem response, functional MRI (fMRI), functional connectivity MRI, and cerebral blood flow evaluations were conducted before intake of ALCAR and were repeated 30 days after ALCAR treatment. Results: The patient's pretreatment pure-tone audiogram indicated a mild sensorineural hearing loss at 6 kHz in the right ear and 4 kHz in the left ear. Posttreatment evaluation indicated marginal improvement in the patient's pure-tone thresholds, which was nonetheless sufficient for them to be classified as clinically normal in both ears. Distortion-product otoacoustic emission results showed increased overall emissions after ALCAR treatment.
Subjective report from the patient indicated that her tinnitus was less annoying and barely noticeable during the day after treatment, and the posttreatment tinnitus questionnaire scores supported her statement. Auditory brainstem response peak V amplitude growth between stimulus intensity levels of 40–80 dB nHL indicated a reduction in growth for the posttreatment condition compared with the pretreatment condition. This was attributed to a possible active gating mechanism involving the auditory brainstem after ALCAR treatment. Posttreatment fMRI recordings in response to acoustic stimuli indicated a statistically significant reduction in brain activity in several regions of the brain, including the auditory cortex. Cerebral blood flow showed increased flow in the auditory cortex after treatment. The functional connectivity MRI indicated increased connectivity between the right and left auditory cortex, but a decrease in connectivity between the auditory cortex and some regions of the “default mode network,” namely the medial prefrontal cortex and posterior cingulate cortex. Conclusions: The changes observed in the objective and subjective test measures after ALCAR treatment, along with the patient’s personal observations, indicate that carnitine intake may be a valuable pharmacological option in the treatment of tinnitus.


2020 ◽  
Author(s):  
Jianxun Ren ◽  
Ting Xu ◽  
Danhong Wang ◽  
Meiling Li ◽  
Yuanxiang Lin ◽  
...  

Abstract
Accumulating evidence shows that the auditory cortex (AC) of humans and other primates is involved in cognitive processes more complex than feature segregation alone; these processes are shaped by experience-dependent plasticity and thus likely show substantial individual variability. Thus far, however, individual variability of the ACs has been treated as a methodological impediment rather than a phenomenon of theoretical importance. Here, we examined the variability of the ACs using intrinsic functional connectivity patterns in humans and macaques. Our results demonstrate that in humans, interindividual variability is greater near the nonprimary than the primary ACs, indicating that variability increases dramatically across the processing hierarchy. The ACs are also more variable than comparable visual areas and show higher variability in the left than in the right hemisphere, which may be related to the left lateralization of auditory-related functions such as language. Intriguingly, remarkably similar modality differences and lateralization of variability were also observed in macaques. These connectivity-based findings are consistent with a confirmatory task-based functional magnetic resonance imaging analysis. The quantification of variability in auditory function, and the similar findings in humans and macaques, have strong implications for understanding the evolution of advanced auditory functions in humans.
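One common way to operationalize interindividual variability of a region's connectivity is to compare subjects' connectivity "fingerprints" and take one minus their mean pairwise correlation. The NumPy sketch below illustrates this on synthetic data; the function name and the specific formulation are illustrative assumptions, not necessarily the exact measure used in this study.

```python
import numpy as np

def interindividual_variability(fingerprints):
    """Variability of a seed region's connectivity across subjects.

    fingerprints: (n_subjects, n_targets) array; row i is subject i's
    functional connectivity from the seed to all target regions.
    Returns 1 minus the mean pairwise inter-subject correlation, so higher
    values mean the seed's connectivity is less similar across people.
    """
    r = np.corrcoef(fingerprints)           # subject-by-subject similarity
    n = r.shape[0]
    off_diag = r[~np.eye(n, dtype=bool)]    # drop self-correlations
    return 1.0 - off_diag.mean()

# Synthetic check: a seed whose fingerprint is shared across subjects
# (plus small noise) vs. one whose fingerprint is idiosyncratic.
rng = np.random.default_rng(0)
shared = rng.normal(size=60)
consistent = shared + 0.1 * rng.normal(size=(12, 60))
idiosyncratic = rng.normal(size=(12, 60))
v_low = interindividual_variability(consistent)
v_high = interindividual_variability(idiosyncratic)
```

Under this measure, a primary sensory seed with a stereotyped fingerprint behaves like `consistent` (low variability), while a higher-order seed shaped by individual experience behaves more like `idiosyncratic`.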


1986 ◽  
Vol 56 (3) ◽  
pp. 683-701 ◽  
Author(s):  
H. E. Heffner ◽  
R. S. Heffner

Ten Japanese macaques were trained to discriminate between two types of Japanese macaque coo vocalizations before and after auditory cortex ablation. Five of the animals were tested following left unilateral ablation, whereas the other five were tested following right unilateral ablation. After postoperative testing, symmetrical lesions were made in the remaining hemisphere in two animals from each group and the effect of bilateral lesions was assessed. The animals were tested using a shock avoidance procedure. Unilateral ablation of left auditory cortex consistently resulted in an initial impairment in the ability to discriminate between the vocalizations with the animals regaining normal performance in 5-15 sessions. In contrast, right unilateral ablation had no detectable effect on the discrimination. Bilateral auditory cortex ablation rendered the animals permanently unable to discriminate between the coos. Although the monkeys could learn to discriminate the coos from noise and from 2- and 4-kHz tones, they had great difficulty in discriminating between the coos and tones in the same frequency range as the coos (i.e., 500 Hz and 1 kHz). The initial impairment following left unilateral lesions indicates that the ability to perceive species-specific vocalizations is lateralized to the left hemisphere. The observation that bilateral lesions abolish the discrimination indicates that the recovery in the left lesion cases was the result of the right hemisphere mediating the discrimination.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Francis A. M. Manno ◽  
Condon Lau ◽  
Juan Fernandez-Ruiz ◽  
Sinaí Hernandez-Cortes Manno ◽  
Shuk Han Cheng ◽  
...  

Abstract
How do humans discriminate emotion from non-emotion? The specific psychophysical cues and neural responses involved in resolving emotional information in sound are unknown. In this study we used a discrimination psychophysical-fMRI sparse-sampling paradigm to locate threshold responses to happy and sad acoustic stimuli. The fine structure and envelope of the auditory signals were covaried to vary emotional certainty. We report that emotion identification at threshold in music relies on fine-structure cues. The auditory cortex was activated, but its activity did not vary with emotional uncertainty. Amygdala activation was modulated by emotion identification and was absent when emotional stimuli were identifiable only at chance, especially in the left hemisphere. The right amygdala was considerably more deactivated in response to uncertain emotion. The threshold of emotion was signified by right amygdala deactivation together with a change in left amygdala activation greater than that in the right amygdala. Functional sex differences were noted during binaural presentations of uncertain emotional stimuli, where the right amygdala showed larger activation in females. Negative-control (silent stimuli) experiments used sparse sampling of silence to ensure that the modulation effects were inherent to emotional resolvability. No functional modulation of Heschl's gyrus occurred during silence; however, during rest the amygdala baseline state was asymmetrically lateralized. The evidence indicates that changing patterns of activation and deactivation between the left and right amygdala are a hallmark feature of discriminating emotion from non-emotion in music.


2013 ◽  
Vol 27 (3) ◽  
pp. 142-148 ◽  
Author(s):  
Konstantinos Trochidis ◽  
Emmanuel Bigand

The combined interactions of mode and tempo on emotional responses to music were investigated using both self-reports and electroencephalogram (EEG) activity. A musical excerpt was performed in three different modes and tempi. Participants rated the emotional content of the resulting nine stimuli while their EEG activity was recorded. Musical mode influenced the valence of emotion, with the major mode evaluated as happier and more serene than the minor and Locrian modes. In frontal EEG activity, the major mode was associated with increased alpha activation in the left hemisphere compared with the minor and Locrian modes, which, in turn, induced increased activation in the right hemisphere. Tempo modulated the arousal value of emotion, with faster tempi associated with stronger feelings of happiness and anger; in the EEG, this effect was associated with increased frontal activation in the left hemisphere. By contrast, slow tempo induced decreased frontal activation in the left hemisphere. Some interactive effects were found between mode and tempo: an increase of tempo modulated the emotion differently depending on the mode of the piece.
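Frontal hemispheric differences of the kind described here are conventionally summarized with a frontal alpha asymmetry index, ln(right alpha power) minus ln(left alpha power); because alpha power is inversely related to cortical activation, a positive index indicates relatively greater left frontal activation. The NumPy sketch below computes this index on simulated channels; the channel names, band limits, and periodogram-based power estimate are illustrative assumptions, not this study's analysis.

```python
import numpy as np

def frontal_alpha_asymmetry(left, right, fs, band=(8.0, 13.0)):
    """Frontal alpha asymmetry: ln(alpha power, right) - ln(alpha power, left).

    left, right: 1-D EEG time series from homologous frontal sites
    (e.g., F3 and F4). Alpha-band power is summed from the periodogram.
    A positive value implies relatively greater LEFT frontal activation,
    since alpha power varies inversely with activation.
    """
    def alpha_power(x):
        spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)   # periodogram
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return spec[mask].sum()
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

# Simulated 2 s of EEG at 250 Hz: stronger 10 Hz alpha on the right channel,
# mimicking the left-activation pattern reported for the major mode.
fs = 250.0
t = np.arange(int(2 * fs)) / fs
rng = np.random.default_rng(1)
left = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
right = 3.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
faa = frontal_alpha_asymmetry(left, right, fs)
```

Here the right channel carries roughly nine times the alpha power of the left, so the index comes out clearly positive.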

