Amplitude modulation coding in awake mice and squirrel monkeys

2018 · Vol. 119 (5) · pp. 1753-1766
Author(s): Nerissa E. G. Hoglen, Phillip Larimer, Elizabeth A. K. Phillips, Brian J. Malone, Andrea R. Hasenstaub

Both mice and primates are used to model the human auditory system. The primate order possesses unique cortical specializations that govern auditory processing. Given the power of molecular and genetic tools available in the mouse model, it is essential to understand the similarities and differences in auditory cortical processing between mice and primates. To address this issue, we directly compared temporal encoding properties of neurons in the auditory cortex of awake mice and awake squirrel monkeys (SQMs). Stimuli were drawn from a sinusoidal amplitude modulation (SAM) paradigm, which has been used previously both to characterize temporal precision and to model the envelopes of natural sounds. Neural responses were analyzed with linear template-based decoders. In both species, spike timing information supported better modulation frequency discrimination than rate information, and multiunit responses generally supported more accurate discrimination than single-unit responses from the same site. However, cortical responses in SQMs supported better discrimination overall, reflecting superior temporal precision and greater rate modulation relative to the spontaneous baseline and suggesting that spiking activity in mouse cortex was less strictly regimented by incoming acoustic information. The quantitative differences we observed between SQM and mouse cortex support the idea that SQMs offer advantages for modeling precise responses to fast envelope dynamics relevant to human auditory processing. Nevertheless, our results indicate that cortical temporal processing is qualitatively similar in mice and SQMs and thus recommend the mouse model for mechanistic questions, such as development and circuit function, where its substantial methodological advantages can be exploited.

NEW & NOTEWORTHY: To understand the advantages of different model organisms, it is necessary to directly compare sensory responses across species. Contrasting temporal processing in auditory cortex of awake squirrel monkeys and mice, with parametrically matched amplitude-modulated tone stimuli, reveals a similar role of timing information in stimulus encoding. However, disparities in response precision and strength suggest that anatomical and biophysical differences between squirrel monkeys and mice produce quantitative but not qualitative differences in processing strategy.
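As an illustration of this style of analysis, the sketch below implements a generic nearest-template decoder: per-frequency response templates are averaged from training trials, and each held-out trial is assigned to the closest template. The binning, distance metric, array names, and data are assumptions for illustration, not the authors' exact pipeline; a single-bin variant stands in for the rate-only decoder.

```python
# Minimal sketch of a nearest-template decoder for SAM modulation-frequency
# discrimination. Illustrative only: data are synthetic and the binning and
# distance metric are assumptions, not the study's exact decoder.
import numpy as np

def template_decode(train_counts, train_labels, test_counts):
    """Assign each test trial to the modulation frequency whose mean
    training response (template) is nearest in Euclidean distance."""
    freqs = np.unique(train_labels)
    templates = np.array([train_counts[train_labels == f].mean(axis=0) for f in freqs])
    dists = np.linalg.norm(test_counts[:, None, :] - templates[None, :, :], axis=2)
    return freqs[np.argmin(dists, axis=1)]

rng = np.random.default_rng(0)
spike_counts = rng.poisson(2.0, size=(200, 50))   # 200 trials x 50 time bins (fake data)
mod_freq = rng.choice([4, 8, 16, 32], size=200)   # hypothetical modulation frequencies (Hz)
train, test = np.arange(150), np.arange(150, 200)

# "Timing" decoder uses the full binned response; "rate" decoder collapses to one bin.
timing_pred = template_decode(spike_counts[train], mod_freq[train], spike_counts[test])
rate_pred = template_decode(spike_counts[train].sum(1, keepdims=True), mod_freq[train],
                            spike_counts[test].sum(1, keepdims=True))
print("timing-based accuracy:", np.mean(timing_pred == mod_freq[test]))
print("rate-based accuracy:  ", np.mean(rate_pred == mod_freq[test]))
```

With real recordings, the timing-based decoder would be expected to outperform the rate-based one, as the abstract reports; with the synthetic data here both sit at chance.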

1998 · Vol. 10 (4) · pp. 536-540
Author(s): Pascal Belin, Monica Zilbovicius, Sophie Crozier, Lionel Thivard, Anne Fontaine, ...

To investigate the role of temporal processing in language lateralization, we monitored asymmetry of cerebral activation in human volunteers using positron emission tomography (PET). Subjects were scanned during passive auditory stimulation with nonverbal sounds containing rapid (40 msec) or extended (200 msec) frequency transitions. Bilateral symmetric activation was observed in the auditory cortex for slow frequency transitions. In contrast, left-biased asymmetry was observed in response to rapid frequency transitions due to reduced response of the right auditory cortex. These results provide direct evidence that auditory processing of rapid acoustic transitions is lateralized in the human brain. Such functional asymmetry in temporal processing is likely to contribute to language lateralization from the lowest levels of cortical processing.
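The left-biased asymmetry described here is the kind of effect commonly summarized with a laterality index, LI = (L − R) / (L + R), computed from left- and right-hemisphere activation measures. The sketch below only illustrates that convention with made-up values; it is not the authors' analysis.

```python
# Laterality index LI = (L - R) / (L + R): a standard way to quantify
# hemispheric asymmetry of activation. Values below are hypothetical.
def laterality_index(left, right):
    """Positive -> left-biased activation; negative -> right-biased; 0 -> symmetric."""
    return (left - right) / (left + right)

print(laterality_index(left=5.0, right=5.0))  # slow transitions: 0.0 (symmetric)
print(laterality_index(left=5.0, right=3.0))  # rapid transitions: 0.25 (left-biased)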


2018
Author(s): Lasse Osterhagen, K. Jannis Hildebrandt

Abstract: Age-related hearing loss (presbycusis) is caused by damage to the periphery as well as deterioration of central auditory processing. Gap detection is a paradigm for studying age-related temporal processing deficits, and gap detection performance is assumed to be determined primarily by the latter. However, peripheral hearing loss is a strong confounding factor when gap detection is used to measure temporal processing. In this study, we used mice from the CAST line, which is known to maintain excellent peripheral hearing, to rule out any contribution of peripheral hearing loss to gap detection performance. We employed an operant Go/No-go paradigm to obtain psychometric functions of gap-in-noise (GIN) detection at young and middle age. In addition, we measured auditory brainstem responses (ABR) and recorded multiunit activity in the auditory cortex (AC) in order to disentangle the processing stages of gap detection. We found detection thresholds around 0.6 ms in all measurement modalities. Detection thresholds did not increase with age. In the ABR, GIN stimuli are coded as onset responses to the noise that follows the gap, strikingly similar to the ABR elicited by noise bursts in silence (NBIS). The simplicity of the neural representation of the gap, together with the preservation of detection thresholds in aged CAST mice, suggests that GIN detection in the mouse is primarily determined by peripheral, not central, processing.

Abbreviations: GIN, gap in noise; ABR, auditory brainstem response; AC, auditory cortex; NBIS, noise burst in silence; IIN, inhibitory interneuron.
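A gap-detection threshold of the kind reported here is typically read off a fitted psychometric function. The sketch below fits a logistic function of log gap duration to hypothetical hit rates and takes the curve's midpoint as the threshold; the functional form, threshold criterion, and data are assumptions for illustration, not the study's exact fitting procedure.

```python
# Minimal sketch: estimate a gap-detection threshold by fitting a logistic
# psychometric function to hit rates. Data and the midpoint threshold
# criterion are illustrative assumptions, not the study's exact procedure.
import numpy as np
from scipy.optimize import curve_fit

gap_ms = np.array([0.3, 0.5, 1.0, 2.0, 4.0, 8.0])           # gap durations tested (ms)
hit_rate = np.array([0.10, 0.35, 0.75, 0.90, 0.95, 0.98])   # hypothetical Go rates

def logistic(log_gap, midpoint, slope):
    """Two-parameter logistic psychometric function on log gap duration."""
    return 1.0 / (1.0 + np.exp(-slope * (log_gap - midpoint)))

(midpoint, slope), _ = curve_fit(logistic, np.log10(gap_ms), hit_rate, p0=[0.0, 5.0])
threshold_ms = 10 ** midpoint   # gap duration at the 50% midpoint of the fit
print(f"estimated gap-detection threshold: {threshold_ms:.2f} ms")
```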


Author(s): Mattson Ogg, L. Robert Slevc

Music and language are uniquely human forms of communication. What neural structures facilitate these abilities? This chapter reviews music and language processing, following these acoustic signals as they ascend the auditory pathway from the brainstem to auditory cortex and on to more specialized cortical regions. Acoustic, neural, and cognitive mechanisms are identified where processing demands from both domains might overlap, with an eye to examples of experience-dependent cortical plasticity, which are taken as strong evidence for common neural substrates. Following an introduction describing how understanding musical processing informs linguistic or auditory processing more generally, findings regarding the major components (and parallels) of music and language research are reviewed: pitch perception, syntax and harmonic structural processing, semantics, timbre and speaker identification, attending in auditory scenes, and rhythm. Overall, the strongest evidence that currently exists for neural overlap (and cross-domain, experience-dependent plasticity) is in the brainstem, followed by auditory cortex, with evidence and the potential for overlap becoming less apparent as the mechanisms involved in music and speech perception become more specialized and distinct at higher levels of processing.


2014 · Vol. 369 (1634) · pp. 20130090
Author(s): Athanassios Protopapas

The ‘rapid temporal processing’ and the ‘temporal sampling framework’ hypotheses have been proposed to account for the deficits in language and literacy development seen in specific language impairment and dyslexia. This paper reviews these hypotheses and concludes that the proposed causal chains between the presumed auditory processing deficits and the observed behavioural manifestation of the disorders are vague and not well established empirically. Several problems and limitations are identified. Most data concern correlations between distantly related tasks, and there is considerable heterogeneity and variability in performance as well as concerns about reliability and validity. Little attention is paid to the distinction between ostensibly perceptual and metalinguistic tasks or between implicit and explicit modes of performance, yet measures are assumed to be pure indicators of underlying processes or representations. The possibility that diagnostic categories do not refer to causally and behaviourally homogeneous groups needs to be taken seriously, taking into account genetic and neurodevelopmental studies to construct multiple-risk models. To make progress in the field, cognitive models of each task must be specified, including performance domains that are predicted to be deficient versus intact, testing multiple indicators of latent constructs and demonstrating construct reliability and validity.


Author(s): Josef P. Rauschecker

When one talks about hearing, some may first imagine the auricle (or external ear), which is the only visible part of the auditory system in humans and other mammals. Its shape and size vary among people, but the auricle tells us little about a person’s ability to hear (except perhaps the ability to localize sounds in space, where the shape of the auricle plays a certain role). Most of what is used for hearing is inside the head, particularly in the brain. The inner ear transforms mechanical vibrations into electrical signals; then the auditory nerve sends these signals into the brainstem, where intricate preprocessing occurs. Although auditory brainstem mechanisms are an important part of central auditory processing, it is the processing taking place in the cerebral cortex (with the thalamus as mediator) that enables auditory perception and cognition. Human speech and the appreciation of music can hardly be imagined without a complex cortical network of specialized regions, each contributing different aspects of auditory cognitive abilities. During the evolution of these abilities in higher vertebrates, especially birds and mammals, the cortex played a crucial role, so a great deal of what is referred to as central auditory processing happens there. Whether it is recognizing one’s mother’s voice, listening to Pavarotti singing or Yo-Yo Ma playing the cello, or hearing or reading Shakespeare’s sonnets, the experience evokes electrical vibrations in the auditory cortex, but it does not end there. Large parts of frontal and parietal cortex receive auditory signals originating in auditory cortex, forming processing streams for auditory object recognition and auditory-motor control, before being channeled into other parts of the brain for comprehension and enjoyment.


2019
Author(s): Jérémy Giroud, Agnès Trébuchon, Daniele Schön, Patrick Marquis, Catherine Liegeois-Chauvel, ...

Abstract: Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3 and ∼15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6 and ∼40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.

Author summary: Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions. Capitalizing on intracranial data from 96 epilepsy patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is underpinned by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
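A spectral response profile of the kind described here can be approximated by estimating the power spectrum of an electrode's post-stimulus response and locating its peaks. The sketch below does this for a synthetic signal with theta and low-gamma components; the sampling rate, spectral estimator, and peak criterion are assumptions for illustration, not the study's time-frequency pipeline.

```python
# Minimal sketch: locate spectral peaks in a cortical response, as a stand-in
# for a bimodal spectral response profile. Signal and parameters are synthetic
# illustrations, not the study's analysis pipeline.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 500                                     # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic response with a theta (~6 Hz) and a low-gamma (~35 Hz) component.
lfp = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 35 * t) \
      + 0.3 * rng.standard_normal(t.size)

freqs, psd = welch(lfp, fs=fs, nperseg=fs)   # spectrum with ~1 Hz resolution
peaks, _ = find_peaks(psd, prominence=0.05)
print("spectral peaks (Hz):", freqs[peaks])  # expected near 6 and 35 Hz
```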


2021
Author(s): Florian Occelli, Florian Hasselmann, Jérôme Bourien, Jean-Luc Puel, Nathalie Desvignes, ...

Abstract: People are increasingly exposed to environmental noise through the accumulation of occupational and recreational activities, an exposure that is considered harmless to the auditory system if the sound intensity remains below 80 dB. However, recent evidence of noise-induced peripheral synaptic damage and central reorganizations in the auditory cortex, despite normal audiometry results, has cast doubt on the innocuousness of lifetime exposure to environmental noise. We addressed this issue by exposing adult rats to realistic and nontraumatic environmental noise, within the daily permissible noise exposure limit for humans (80 dB sound pressure level, 8 h/day), for between 3 and 18 months. We found that temporary hearing loss could be detected after 6 months of daily exposure, without leading to permanent hearing loss or to missing synaptic ribbons in cochlear hair cells. The degraded temporal representation of sounds in the auditory cortex after 18 months of exposure was very different from the effects observed after only 3 months of exposure, suggesting that modifications to the neural code continue throughout a lifetime of exposure to noise.
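For context on the 80 dB / 8 h limit mentioned above, many occupational standards use an equal-energy (3 dB exchange rate) rule: halving the exposure duration permits a 3 dB higher level for the same daily sound energy. The sketch below only illustrates that arithmetic; the specific rule is an assumption about common practice, not a detail taken from the paper.

```python
# Equal-energy normalization of a noise exposure to an 8-hour-equivalent level
# (3 dB exchange rate). Illustrative arithmetic only; not from the paper.
import math

def equivalent_8h_level(level_db, hours):
    """8-hour-equivalent continuous level for an exposure of `level_db` lasting `hours`."""
    return level_db + 10 * math.log10(hours / 8.0)

print(equivalent_8h_level(80, 8))   # 80.0 dB -> at the 80 dB / 8 h limit
print(equivalent_8h_level(83, 4))   # ~80.0 dB -> same daily sound energy
print(equivalent_8h_level(86, 8))   # 86.0 dB -> exceeds the limit
```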

