Norepinephrine enhances song responsiveness and encoding in the auditory forebrain of male zebra finches

2018, Vol 119 (1), pp. 209-220
Author(s): Vanessa Lee, Benjamin A. Pawlisch, Matheus Macedo-Lima, Luke Remage-Healey

Norepinephrine (NE) can dynamically modulate excitability and functional connectivity of neural circuits in response to changes in external and internal states. Regulation by NE has been demonstrated extensively in mammalian sensory cortices, but whether NE-dependent modulation in sensory cortex alters response properties in downstream sensorimotor regions is less clear. Here we examine this question in male zebra finches, a songbird species with complex vocalizations and a well-defined neural network for auditory processing of those vocalizations. We test the hypothesis that NE modulates auditory processing and encoding, using paired extracellular electrophysiology recordings and pattern classifier analyses. We report that an NE infusion into the auditory cortical region NCM (caudomedial nidopallium; analogous to mammalian secondary auditory cortex) enhances the auditory responses, burst firing, and coding properties of single NCM neurons. Furthermore, we report that NE-dependent changes in NCM coding properties, but not auditory response strength, are transmitted downstream to the sensorimotor nucleus HVC. Finally, NE modulation in the NCM of males is qualitatively similar to that observed in females: in both sexes, NE increases auditory response strengths. However, we observed a sex difference in the mechanism of enhancement: whereas NE increases response strength in females by decreasing baseline firing rates, NE increases response strength in males by increasing auditory-evoked activity. Therefore, NE signaling exhibits a compensatory sex difference to achieve a similar, state-dependent enhancement in signal-to-noise ratio and coding accuracy in males and females. In summary, our results provide further evidence for adrenergic regulation of sensory processing and modulation of auditory/sensorimotor functional connectivity.

NEW & NOTEWORTHY This study documents that the catecholamine norepinephrine (also known as noradrenaline) acts in the auditory cortex to shape local processing of complex sound stimuli. Moreover, it also enhances the coding accuracy of neurons in the auditory cortex as well as in the downstream sensorimotor cortex. Finally, this study shows that while the sensory-enhancing effects of norepinephrine are similar in males and females, there are sex differences in the mode of action.
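The abstract contrasts baseline firing with auditory-evoked activity to explain how the same gain in signal-to-noise can arise two ways. A minimal sketch of how response strength and a signal-to-noise z-score are commonly computed from paired stimulus/silence windows; the paper's exact formulas are not given here, and the spike counts below are hypothetical:

```python
import numpy as np

def response_metrics(evoked_counts, baseline_counts, window_s):
    """Compare stimulus-evoked and baseline firing for one neuron.

    evoked_counts / baseline_counts: spike counts per trial in
    equal-duration windows during and before song playback.
    """
    evoked_rate = np.mean(evoked_counts) / window_s      # Hz
    baseline_rate = np.mean(baseline_counts) / window_s  # Hz
    response_strength = evoked_rate - baseline_rate      # net evoked activity
    # z-score: mean evoked-minus-baseline difference over its variability
    diffs = (np.asarray(evoked_counts) - np.asarray(baseline_counts)) / window_s
    sd = np.std(diffs, ddof=1)
    z = np.mean(diffs) / sd if sd > 0 else np.inf
    return response_strength, z

# Hypothetical trial data: 10 playback trials, 2-s windows
rng = np.random.default_rng(0)
evoked = rng.poisson(12, size=10)    # spikes during song
baseline = rng.poisson(6, size=10)   # spikes during silence
print(response_metrics(evoked, baseline, window_s=2.0))
```

Under this definition, lowering the baseline rate (the female pattern) and raising the evoked rate (the male pattern) both increase the same response-strength measure, which is the compensatory symmetry the abstract describes.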

2020
Author(s): Sara Momtaz, Deborah W. Moncrieff, Gavin M. Bidelman

Children diagnosed with auditory processing disorder (APD) show deficits in processing complex sounds that are associated with difficulties in higher-order language, learning, cognitive, and communicative functions. Amblyaudia (AMB) is a subcategory of APD characterized by abnormally large ear asymmetries in dichotic listening tasks. Here, we examined frequency-specific neural oscillations and functional connectivity via high-density EEG in children with and without AMB during passive listening to nonspeech stimuli. Time-frequency maps of these “brain rhythms” revealed stronger phase-locked beta-gamma (∼35 Hz) oscillations in AMB participants within bilateral auditory cortex for sounds presented to the right ear, suggesting a hypersynchronization and imbalance of auditory neural activity. Brain-behavior correlations revealed that neural asymmetries in cortical responses predicted the larger-than-normal right-ear advantage seen in participants with AMB. Additionally, we found weaker functional connectivity in the AMB group from right to left auditory cortex, despite their stronger neural responses overall. Our results reveal abnormally strong auditory sensory encoding and an imbalance in communication between cerebral hemispheres (ipsi- to contralateral signaling) in AMB. These neurophysiological changes might lead to the functionally poorer behavioral capacity to integrate information between the two ears in children with AMB.
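The "phase-locked" beta-gamma oscillations above are typically quantified with inter-trial phase coherence (ITPC): the consistency of instantaneous phase across stimulus-locked trials. A minimal sketch under the assumption of band-pass filtering plus the Hilbert transform, which is one of several ways to extract phase; the study's actual time-frequency pipeline may differ, and the data below are simulated:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(trials, fs, band=(30.0, 40.0)):
    """Inter-trial phase coherence of band-limited EEG.

    trials: array (n_trials, n_samples), time-locked to stimulus onset.
    Returns ITPC per time point in [0, 1]; 1 = perfect phase locking.
    """
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, trials, axis=1)           # isolate the ~35 Hz band
    phases = np.angle(hilbert(filtered, axis=1))        # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phases), axis=0)) # resultant vector length

# Hypothetical data: 50 trials, 500 ms at 1 kHz, with a phase-locked 35 Hz burst
fs, n_trials, n_samp = 1000, 50, 500
t = np.arange(n_samp) / fs
rng = np.random.default_rng(1)
trials = 0.5 * np.sin(2 * np.pi * 35 * t) + rng.normal(0, 1, (n_trials, n_samp))
print(itpc(trials, fs).max())  # high where the 35 Hz component is phase-locked
```

An ITPC near 1 at ∼35 Hz for right-ear stimuli would correspond to the hypersynchronization reported in the AMB group.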


2014
Author(s): Srivatsun Sadagopan, Nesibe Z. Temiz, Henning U. Voss

Vocalizations are behaviorally critical sounds, and this behavioral importance is reflected in the ascending auditory system, where conspecific vocalizations are increasingly over-represented at higher processing stages. Recent evidence suggests that, in macaques, this increasing selectivity for vocalizations might culminate in a cortical region that is densely populated by vocalization-preferring neurons. Such a region might be a critical node in the representation of vocal communication sounds, underlying the recognition of vocalization type, caller and social context. These results raise the questions of whether cortical specializations for vocalization processing exist in other species, their cortical location, and their relationship to the auditory processing hierarchy. To explore cortical specializations for vocalizations in another species, we performed high-field fMRI of the auditory cortex of a vocal New World primate, the common marmoset (Callithrix jacchus). Using a sparse imaging paradigm, we discovered a caudal-rostral gradient for the processing of conspecific vocalizations in marmoset auditory cortex, with regions of the anterior temporal lobe close to the temporal pole exhibiting the highest preference for vocalizations. These results demonstrate similar cortical specializations for vocalization processing in macaques and marmosets, suggesting that cortical specializations for vocal processing might have evolved before the lineages of these species diverged.
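A preference gradient like the caudal-rostral one reported here is usually summarized with a contrast index comparing responses to vocalizations against control sounds. A minimal sketch; the region names and response values are hypothetical, and the paper's actual contrast may be defined differently:

```python
import numpy as np

def preference_index(resp_vocal, resp_control):
    """Contrast index in [-1, 1]: +1 = responds only to vocalizations.

    resp_vocal / resp_control: mean BOLD (or spiking) responses of one
    voxel/region to conspecific vocalizations vs. control sounds.
    """
    den = abs(resp_vocal) + abs(resp_control)
    return (resp_vocal - resp_control) / den if den > 0 else 0.0

# Hypothetical regional means along a caudal-to-rostral axis
for region, v, c in [("caudal A1", 1.0, 0.9),
                     ("rostral field", 1.1, 0.6),
                     ("anterior temporal", 1.2, 0.3)]:
    print(region, round(preference_index(v, c), 2))
```

An index increasing toward the temporal pole is the quantitative signature of the gradient described in the abstract.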


Author(s): Vidhusha Srinivasan, N. Udayakumar, Kavitha Anandan

Background: The spectrum of autism encompasses High Functioning Autism (HFA) and Low Functioning Autism (LFA). Brain mapping studies have revealed that individuals with autism show overlaps in brain behavioural characteristics. Generally, high functioning individuals are known to exhibit higher intelligence and better language processing abilities. However, the specific mechanisms associated with their functional capabilities are still under research.

Objective: This work addresses the overlapping phenomenon present in the autism spectrum through functional connectivity patterns along with brain connectivity parameters, and distinguishes the classes using deep belief networks.

Methods: Task-based functional Magnetic Resonance Images (fMRI) of both high and low functioning autistic groups were acquired from the ABIDE database, for 58 low functioning and 43 high functioning individuals engaged in a defined language processing task. The language processing regions of the brain, along with the Default Mode Network (DMN), were considered for the analysis. Functional connectivity maps were plotted through graph theory procedures. Brain connectivity parameters such as Granger Causality (GC) and Phase Slope Index (PSI) were calculated for the individual groups. These parameters were fed to Deep Belief Networks (DBN) to classify the subjects under consideration as either LFA or HFA.

Results: Results showed increased functional connectivity in high functioning subjects. It was found that the additional interaction of the Primary Auditory Cortex, lying in the temporal lobe, with other regions of interest complemented their enhanced connectivity. Results were validated using the DBN, which achieved classification accuracies of 85.85% for the high functioning and 81.71% for the low functioning group.

Conclusion: Since autism is known to involve enhanced but imbalanced components of intelligence, the reason behind the dominance of the high functioning group in language processing, and the region responsible for their enhanced connectivity, has been identified. This work, which points to the Primary Auditory Cortex as characterizing the dominance of language processing in high functioning young adults, is therefore significant for discriminating different groups in the autism spectrum.
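Of the two connectivity parameters fed to the classifier, Granger causality asks whether one region's past activity improves prediction of another's beyond that region's own past. A minimal sketch of a pairwise GC test using statsmodels; the ROI names, lag choice, and time series below are hypothetical, and the PSI computation and DBN classifier are not shown:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical ROI time series: does the auditory-cortex signal help
# predict a language-region signal beyond the latter's own past?
rng = np.random.default_rng(2)
n = 300
aud = rng.normal(size=n)
lang = np.zeros(n)
for t in range(2, n):
    # language ROI driven partly by the auditory ROI two samples back
    lang[t] = 0.5 * lang[t - 1] + 0.4 * aud[t - 2] + rng.normal(scale=0.5)

# Column order: [effect, cause]; test lags up to 4 samples (TRs)
data = np.column_stack([lang, aud])
results = grangercausalitytests(data, maxlag=4, verbose=False)
p_lag2 = results[2][0]["ssr_ftest"][1]   # p-value of the F-test at lag 2
print(f"p(aud -> lang, lag 2) = {p_lag2:.4g}")
```

A small p-value supports directed influence from the auditory ROI; such pairwise values, stacked across ROI pairs, form the kind of feature vector a classifier could consume.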


Author(s): Mattson Ogg, L. Robert Slevc

Music and language are uniquely human forms of communication. What neural structures facilitate these abilities? This chapter reviews music and language processing by following these acoustic signals as they ascend the auditory pathway from the brainstem to auditory cortex and on to more specialized cortical regions. Acoustic, neural, and cognitive mechanisms are identified where processing demands from both domains might overlap, with an eye to examples of experience-dependent cortical plasticity, which are taken as strong evidence for common neural substrates. Following an introduction describing how understanding musical processing informs linguistic and auditory processing more generally, findings regarding the major components (and parallels) of music and language research are reviewed: pitch perception, syntax and harmonic structural processing, semantics, timbre and speaker identification, attending in auditory scenes, and rhythm. Overall, the strongest current evidence for neural overlap (and cross-domain, experience-dependent plasticity) lies in the brainstem, followed by auditory cortex, with evidence and the potential for overlap becoming less apparent as the mechanisms involved in music and speech perception become more specialized and distinct at higher levels of processing.


2016, Vol 124 (4), pp. 766-778
Author(s): Catherine Elizabeth Warnaby, Marta Seretny, Roísín Ní Mhuircheartaigh, Richard Rogers, Saad Jbabdi, ...

Background: It has been postulated that a small cortical region could be responsible for the loss of behavioral responsiveness (LOBR) during general anesthesia. The authors hypothesize that any brain region demonstrating reduced activation to multisensory external stimuli around LOBR represents a key cortical gate underlying this transition. Furthermore, the authors hypothesize that this localized suppression is associated with breakdown in frontoparietal communication.

Methods: During simultaneous electroencephalography and functional magnetic resonance imaging (FMRI) data acquisition, 15 healthy volunteers experienced an ultraslow induction with propofol anesthesia while a paradigm of multisensory stimulation (i.e., auditory tones, words, and noxious pain stimuli) was presented. The authors performed separate analyses to identify changes in (1) stimulus-evoked activity, (2) functional connectivity, and (3) frontoparietal synchrony associated with LOBR.

Results: By using an FMRI conjunction analysis, the authors demonstrated that stimulus-evoked activity was suppressed in the right dorsal anterior insula cortex (dAIC) to all sensory modalities around LOBR. Furthermore, the authors found that the dAIC had reduced functional connectivity with the frontoparietal regions, specifically the dorsolateral prefrontal cortex and inferior parietal lobule, after LOBR. Finally, reductions in the electroencephalography power synchrony between electrodes located in these frontoparietal regions were observed in the same subjects after LOBR.

Conclusions: The authors conclude that the dAIC is a potential cortical gate responsible for LOBR. Suppression of dAIC activity around LOBR was associated with disruption in the frontoparietal networks that was measurable using both electroencephalography synchrony and FMRI connectivity analyses.
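The frontoparietal "power synchrony" reduction can be illustrated with a simple proxy: correlating band-limited amplitude envelopes of two channels. A minimal sketch with simulated frontal/parietal signals; the paper's exact synchrony measure and frequency band are not specified here, so both are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def power_synchrony(x, y, fs, band=(8.0, 12.0)):
    """Correlation of band-limited power envelopes of two EEG channels.

    Band-pass both channels, take the Hilbert amplitude envelope, and
    correlate the envelopes: a generic proxy for power synchrony.
    """
    b, a = butter(4, band, btype="bandpass", fs=fs)
    env_x = np.abs(hilbert(filtfilt(b, a, x)))
    env_y = np.abs(hilbert(filtfilt(b, a, y)))
    return np.corrcoef(env_x, env_y)[0, 1]

# Hypothetical frontal/parietal channels sharing a slowly waxing alpha component
fs, n = 250, 250 * 60
rng = np.random.default_rng(3)
t = np.arange(n) / fs
shared = np.sin(2 * np.pi * 10 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.1 * t))
frontal = shared + rng.normal(0, 1, n)
parietal = shared + rng.normal(0, 1, n)
print(power_synchrony(frontal, parietal, fs))
```

A drop in this measure between frontal and parietal electrodes after LOBR would parallel the synchrony reductions the authors report.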


2014, Vol 115 (suppl_1)
Author(s): Yujie Zhu, Steven M. Pogwizd

Introduction: Females can be more arrhythmogenic than males, and this sex difference can persist with development of chronic heart failure (CHF). The aim of this study was to investigate sex differences in the arrhythmogenic substrate in control dogs and in a new arrhythmogenic canine model of CHF.

Methods: CHF was induced in 30 dogs by aortic insufficiency and aortic constriction. Holter monitoring assessed ventricular tachycardia (VT) and premature ventricular complexes (PVCs), as well as traditional heart rate variability (HRV) measures and nonlinear dynamics (including correlation dimension (CD), detrended fluctuation analysis α1 (DFAα1), and Shannon entropy (SE)) at baseline, 240 days (240d), and 720 days (720d) after CHF induction.

Results: At baseline, females had lower LF/HF (0.27±0.03 vs 0.33±0.02, p=0.04), CD (1.60±0.17 vs 2.21±0.15, p=0.01), DFAα1 (0.62±0.03 vs 0.72±0.03, p=0.03), and SE (2.99±0.02 vs 3.10±0.03, p=0.03) than males. Females lacked circadian variation in LF/HF, DFAα1, and SE, while males had circadian variation in all of these. Of 11 dogs with frequent runs of VT and PVCs, 95% of total VT runs and 91% of total PVCs were in females. With CHF, all these linear and nonlinear parameters progressively declined in males and females. CHF females had less decline in LF/HF than males, so that by 720 days there was no longer a sex difference (0.24±0.06, 0.17±0.03 in females vs 0.22±0.05, 0.18±0.01 in males at 240d, 720d). However, for the nonlinear parameters CD, DFAα1, and SE, CHF females had lower values than males (CD: 1.56±0.21, 0.99±0.32 vs 1.87±0.24, 1.50±0.34; DFAα1: 0.51±0.05, 0.43±0.04 vs 0.54±0.07, 0.48±0.04; and SE: 2.93±0.08, 2.76±0.08 vs 3.01±0.11, 2.91±0.04 in females vs males at 240d, 720d). With CHF, circadian variation in CD, DFAα1, and SE was lost in both males and females.

Conclusions: There are sex differences in the arrhythmogenic substrate in control dogs and in this new arrhythmogenic canine model of moderate CHF. At baseline, females have lower sympathetic stimulation, reduced cardiac chaos, and a lack of circadian variation in nonlinear dynamics. With CHF, sex differences in nonlinear dynamics persist; this reflects a loss of complexity and fractal properties that could contribute to increased arrhythmias in female CHF dogs.
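Of the nonlinear dynamics listed, DFAα1 is the short-term scaling exponent of detrended fluctuation analysis, conventionally computed over window sizes of roughly 4-16 beats. A minimal sketch of the standard DFA recipe on a simulated RR-interval series; correlation dimension and Shannon entropy follow different recipes and are not shown:

```python
import numpy as np

def dfa_alpha1(rr_ms, scales=range(4, 17)):
    """Short-term scaling exponent (DFA alpha-1) of an RR-interval series.

    Integrate the mean-centered series, split it into windows of each
    scale, detrend each window linearly, and regress log fluctuation
    on log scale over window sizes of 4-16 beats.
    """
    rr = np.asarray(rr_ms, dtype=float)
    y = np.cumsum(rr - rr.mean())                 # integrated profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        f2 = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)          # linear trend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # slope of log F(s) vs log s is the scaling exponent
    alpha1, _ = np.polyfit(np.log(list(scales)), np.log(flucts), 1)
    return alpha1

# Hypothetical RR series: uncorrelated noise gives alpha1 near 0.5
rng = np.random.default_rng(4)
print(round(dfa_alpha1(800 + rng.normal(0, 30, 2000)), 2))
```

A white-noise RR series yields α1 ≈ 0.5, so the progressive decline reported with CHF reflects loss of the fractal, 1/f-like correlations seen in healthy heart rate.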


Author(s): Josef P. Rauschecker

When one talks about hearing, some may first imagine the auricle (or external ear), which is the only visible part of the auditory system in humans and other mammals. Its shape and size vary among people, but it does not tell us much about a person's ability to hear (except perhaps their ability to localize sounds in space, where the shape of the auricle plays a certain role). Most of what is used for hearing is inside the head, particularly in the brain. The inner ear transforms mechanical vibrations into electrical signals; then the auditory nerve sends these signals into the brainstem, where intricate preprocessing occurs. Although auditory brainstem mechanisms are an important part of central auditory processing, it is the processing taking place in the cerebral cortex (with the thalamus as the mediator) that enables auditory perception and cognition. Human speech and the appreciation of music can hardly be imagined without a complex cortical network of specialized regions, each contributing different aspects of auditory cognitive abilities. During the evolution of these abilities in higher vertebrates, especially birds and mammals, the cortex played a crucial role, so a great deal of what is referred to as central auditory processing happens there. Whether it is the recognition of one's mother's voice, listening to Pavarotti singing or Yo-Yo Ma playing the cello, or hearing or reading Shakespeare's sonnets, each evokes electrical vibrations in the auditory cortex, but it does not end there. Large parts of frontal and parietal cortex receive auditory signals originating in auditory cortex, forming processing streams for auditory object recognition and auditory-motor control, before these signals are channeled into other parts of the brain for comprehension and enjoyment.


2019, Vol 121 (2), pp. 530-548
Author(s): Rachel C. Yuan, Sarah W. Bottjer

Procedural skill learning requires iterative comparisons between feedback of self-generated motor output and a goal sensorimotor pattern. In juvenile songbirds, neural representations of both self-generated behaviors (each bird’s own immature song) and the goal motor pattern (each bird’s adult tutor song) are essential for vocal learning, yet little is known about how these behaviorally relevant stimuli are encoded. We made extracellular recordings during song playback in anesthetized juvenile and adult zebra finches (Taeniopygia guttata) in adjacent cortical regions RA (robust nucleus of the arcopallium), AId (dorsal intermediate arcopallium), and RA cup, each of which is well situated to integrate auditory-vocal information: RA is a motor cortical region that drives vocal output, AId is an adjoining cortical region whose projections converge with basal ganglia loops for song learning in the dorsal thalamus, and RA cup surrounds RA and receives inputs from primary and secondary auditory cortex. We found strong developmental differences in neural selectivity within RA, but not in AId or RA cup. Juvenile RA neurons were broadly responsive to multiple songs but preferred juvenile over adult vocal sounds; in addition, spiking responses lacked consistent temporal patterning. By adulthood, RA neurons responded most strongly to each bird’s own song with precisely timed spiking activity. In contrast, we observed a complete lack of song responsivity in both juvenile and adult AId, even though this region receives song-responsive inputs. A surprisingly large proportion of sites in RA cup of both juveniles and adults did not respond to song playback, and responsive sites showed little evidence of song selectivity.

NEW & NOTEWORTHY Motor skill learning entails changes in selectivity for behaviorally relevant stimuli across cortical regions, yet the neural representation of these stimuli remains understudied. We investigated how information important for vocal learning in zebra finches is represented in regions analogous to infragranular layers of motor and auditory cortices during vs. after the developmentally regulated learning period. The results provide insight into how neurons in higher level stages of cortical processing represent stimuli important for motor skill learning.
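Song selectivity in this literature is commonly quantified per neuron with d′ between per-trial responses to two stimuli, for example bird's own song versus tutor song. A minimal sketch with hypothetical response-strength values; the paper's exact selectivity metric is not reproduced here:

```python
import numpy as np

def d_prime(resp_a, resp_b):
    """Selectivity between two stimuli from per-trial response strengths.

    d' = (mean_a - mean_b) / sqrt(0.5 * (var_a + var_b)); values near 0
    mean no preference, large positive values mean selectivity for A.
    """
    a, b = np.asarray(resp_a, float), np.asarray(resp_b, float)
    pooled = 0.5 * (a.var(ddof=1) + b.var(ddof=1))
    return (a.mean() - b.mean()) / np.sqrt(pooled)

# Hypothetical per-trial response strengths (Hz above baseline)
rng = np.random.default_rng(5)
bos = rng.normal(15, 4, 20)     # bird's own song
tutor = rng.normal(8, 4, 20)    # tutor song
print(round(d_prime(bos, tutor), 2))  # adult-RA-like: selective for own song
```

On this scale, the developmental shift described above would appear as d′ near zero in juvenile RA rising to clearly positive values for the bird's own song in adults.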


2021
Author(s): Yuanqing Zhang, Xiaohui Wang, Lin Zhu, Siyi Bai, Rui Li, ...

Cortical feedback has long been considered crucial for modulation of sensory processing. In the mammalian auditory system, studies have suggested that corticofugal feedback can have excitatory, inhibitory, or both effects on the response of subcortical neurons, leading to controversies regarding the role of corticothalamic influence. This has been further complicated by studies conducted under different brain states. In the current study, we used cryo-inactivation of the primary auditory cortex (A1) to examine the role of corticothalamic feedback on medial geniculate body (MGB) neurons in awake marmosets. The primary effects of A1 inactivation were a frequency-specific decrease in the auditory response of MGB neurons coupled with an increased spontaneous firing rate, which together resulted in a decrease in the signal-to-noise ratio. In addition, we report for the first time that A1 robustly modulated the long-lasting sustained response of MGB neurons, such that frequency tuning changed after A1 inactivation: neurons with sharp tuning increased their tuning bandwidth, whereas those with broad tuning decreased it. Taken together, our results demonstrate that corticothalamic modulation in awake marmosets serves to enhance sensory processing in a way similar to the center-surround models proposed for the visual and somatosensory systems, a finding which supports common principles of corticothalamic processing across sensory systems.
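The reported bandwidth changes can be made concrete with a simple tuning-curve measure: the octave span over which the evoked rate stays above a fraction of its peak. A minimal sketch with a hypothetical Gaussian-on-log-frequency tuning curve; the study's actual bandwidth definition may differ:

```python
import numpy as np

def bandwidth_octaves(freqs_hz, rates, frac=0.5):
    """Tuning bandwidth: octave span where the rate exceeds frac * peak.

    freqs_hz: tested tone frequencies (ascending); rates: mean evoked
    firing rate at each frequency. A coarse proxy for bandwidth measures
    used to compare tuning before and after cortical inactivation.
    """
    rates = np.asarray(rates, float)
    above = np.where(rates >= frac * rates.max())[0]
    lo, hi = freqs_hz[above[0]], freqs_hz[above[-1]]
    return np.log2(hi / lo)

# Hypothetical tuning curve: half-octave steps from 0.5 to 32 kHz,
# Gaussian on the log-frequency axis, centered at 4 kHz
freqs = 500 * 2 ** np.arange(0, 6.5, 0.5)
rates = 30 * np.exp(-0.5 * (np.log2(freqs / 4000) / 0.8) ** 2)
print(round(bandwidth_octaves(freqs, rates), 2), "octaves at half-max")
```

Comparing this measure before and after inactivation would show sharply tuned neurons broadening and broadly tuned neurons narrowing, as the abstract describes.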


2019
Author(s): Jérémy Giroud, Agnès Trébuchon, Daniele Schön, Patrick Marquis, Catherine Liegeois-Chauvel, ...

Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6/40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.

Author summary: Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions. Capitalizing on intracranial data from 96 epileptic patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is subtended by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
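The theta versus low-gamma "two-timescale" profile is the kind of result obtained by integrating spectral power within the two bands and comparing them. A minimal sketch using a Welch periodogram on a simulated response; the band edges follow the abstract, but the signal and sampling rate are hypothetical:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Integrated Welch PSD power within a frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=fs)      # ~1 Hz frequency resolution
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[mask], f[mask])

# Hypothetical intracranial response with theta and low-gamma components,
# mimicking the bimodal profile described for primary/secondary cortex
fs, n = 1000, 10_000
t = np.arange(n) / fs
rng = np.random.default_rng(6)
x = np.sin(2 * np.pi * 6 * t) + 0.6 * np.sin(2 * np.pi * 40 * t) \
    + rng.normal(0, 1, n)
theta = band_power(x, fs, (4, 8))
gamma = band_power(x, fs, (25, 50))
print(f"theta/gamma power ratio: {theta / gamma:.2f}")
```

Computing this ratio per electrode and hemisphere is one way the lateralized shift toward slower timescales on the right and faster ones on the left could be summarized.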

