Dissociation of tonotopy and pitch in human auditory cortex

2020
Author(s): Emily J. Allen, Juraj Mesik, Kendrick N. Kay, Andrew J. Oxenham

SUMMARY: Frequency-to-place mapping, or tonotopy, is a fundamental organizing principle from the earliest stages of auditory processing in the cochlea to subcortical and cortical regions. Although cortical maps are referred to as tonotopic, previous studies employed sounds that covary in spectral content and higher-level perceptual features such as pitch, making it unclear whether these maps are inherited from cochlear organization and are indeed tonotopic, or instead reflect transformations based on higher-level features. We used high-resolution fMRI to measure BOLD responses in 10 participants as they listened to pure tones that varied in frequency or complex tones that independently varied in either spectral content or fundamental frequency (pitch). We show that auditory cortical gradients are in fact a mixture of maps organized both by spectral content and pitch. Consistent with hierarchical organization, primary regions were tuned predominantly to spectral content, whereas higher-level pitch tuning was observed bilaterally in surrounding non-primary regions.
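
To make the stimulus logic concrete, the sketch below (an illustration only, not the authors' stimulus code; the sampling rate, durations, and band edges are assumed values) shows how fundamental frequency and spectral content can be varied independently using bandpass-restricted harmonic complexes:

```python
import numpy as np

def harmonic_complex(f0, band_lo, band_hi, dur=0.5, fs=44100):
    """Sum of the harmonics of f0 that fall inside a fixed spectral band.

    Pitch (f0) and spectral content (band_lo..band_hi) vary independently:
    shifting the band changes the spectral region without changing f0,
    and changing f0 leaves the spectral region in place.
    """
    t = np.arange(int(dur * fs)) / fs
    # Keep only harmonics inside the passband; the fundamental itself
    # may be excluded, producing a "missing fundamental" stimulus.
    harmonics = [n * f0 for n in range(1, int(band_hi / f0) + 1)
                 if band_lo <= n * f0 <= band_hi]
    tone = sum(np.sin(2 * np.pi * f * t) for f in harmonics)
    return tone / np.max(np.abs(tone))  # normalize peak amplitude

# Same pitch (200 Hz), different spectral content:
low_band  = harmonic_complex(200, 400, 1200)    # harmonics 2-6
high_band = harmonic_complex(200, 1600, 3200)   # harmonics 8-16
# Same spectral content, different pitch:
low_pitch  = harmonic_complex(100, 1600, 3200)
high_pitch = harmonic_complex(200, 1600, 3200)
```

Because the passband can exclude the fundamental component itself, the pitch at f0 is conveyed by the harmonics alone, which is what allows pitch and spectral content to be decoupled.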

Author(s): Mattson Ogg, L. Robert Slevc

Music and language are uniquely human forms of communication. What neural structures facilitate these abilities? This chapter reviews music and language processing by following these acoustic signals as they ascend the auditory pathway from the brainstem to auditory cortex and on to more specialized cortical regions. Acoustic, neural, and cognitive mechanisms are identified where processing demands from both domains might overlap, with an eye to examples of experience-dependent cortical plasticity, which are taken as strong evidence for common neural substrates. Following an introduction describing how understanding musical processing informs linguistic and auditory processing more generally, findings regarding the major components (and parallels) of music and language research are reviewed: pitch perception, syntax and harmonic structural processing, semantics, timbre and speaker identification, attending in auditory scenes, and rhythm. Overall, the strongest evidence that currently exists for neural overlap (and cross-domain, experience-dependent plasticity) is in the brainstem, followed by auditory cortex, with both the evidence for and the potential for overlap becoming less apparent as the mechanisms involved in music and speech perception become more specialized and distinct at higher levels of processing.


1990, Vol 64 (1), pp. 282-298
Author(s): D. W. Schwarz, R. W. Tomlinson

1. The auditory cortex in the superior temporal region of the alert rhesus monkey was explored for neuronal responses to pure and harmonic complex tones and noise. The monkeys had been previously trained to recognize the similarity between harmonic complex tones with and without fundamentals. Because this suggested that they could perceive the pitch of the missing fundamental similarly to humans, we searched for neuronal responses relevant to this perception. 2. Combination-sensitive neurons that might explain pitch perception were not found in the surveyed cortical regions. Such neurons would exhibit similar responses to stimuli with similar periodicities but differing spectral compositions. The fact that no neuron with responses to a fundamental frequency responded also to a corresponding harmonic complex missing the fundamental indicates that cochlear distortion products at the fundamental may not have been responsible for missing-fundamental pitch perception in these monkeys. 3. Neuronal responses can be expressed as relatively simple filter functions. Neurons with excitatory response areas (tuning curves) displayed various inhibitory sidebands at lower and/or higher frequencies. Thus responses varied along a continuum of combined excitatory and inhibitory filter functions. 4. Five elementary response classes along this continuum are presented to illustrate the range of response patterns. 5. "Filter (F) neurons" had few or no inhibitory sidebands and responded well when any component of a complex tone entered their pure-tone receptive field. Bandwidths increased with intensity. Filter functions of these neurons were thus similar to cochlear nerve-fiber tuning curves. 6. "High-resolution filter (HRF) neurons" displayed narrow tuning curves whose bandwidths grew little with intensity. Such cells were able to resolve up to the lowest seven components of harmonic complex tones as distinct responses. They also responded well to wideband stimuli. 7. "Fundamental (F0) neurons" displayed similar tuning bandwidths for pure tones and corresponding fundamentals of harmonic complexes. This response pattern was due to lower inhibitory sidebands. Thus these cells cannot respond to missing fundamentals of harmonic complexes; only physically present components in the pure-tone receptive field would excite such neurons. 8. Cells with no or very weak responses to pure tones or other narrowband stimuli responded well to harmonic complexes or wideband noise. (ABSTRACT TRUNCATED AT 400 WORDS)
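
Points 3-7 describe responses as combined excitatory and inhibitory filter functions. The toy rate model below, with a Gaussian excitatory passband and symmetric inhibitory sidebands (a deliberate simplification with assumed parameters, not the authors' model), reproduces the described "F0 neuron" behavior: a pure tone at the best frequency excites the cell, while a missing-fundamental complex, whose components fall in the sidebands, does not.

```python
import numpy as np

def rate_response(components, bf=200.0, bw=50.0,
                  side_offset=200.0, side_gain=0.6):
    """Toy rectified rate model: Gaussian excitation at the best
    frequency (bf) plus inhibitory Gaussian sidebands above and below.
    `components` lists the frequencies (Hz) present in the stimulus;
    all parameter values are illustrative.
    """
    comps = np.asarray(components, dtype=float)
    excite = np.exp(-0.5 * ((comps - bf) / bw) ** 2).sum()
    inhibit = side_gain * (
        np.exp(-0.5 * ((comps - bf - side_offset) / bw) ** 2).sum()
        + np.exp(-0.5 * ((comps - bf + side_offset) / bw) ** 2).sum())
    return max(excite - inhibit, 0.0)  # firing rates cannot be negative

print(rate_response([200]))                  # pure tone at f0: responds
print(rate_response([400, 600, 800, 1000]))  # missing-f0 complex: silent
print(rate_response([200, 400, 600, 800]))   # f0 present: reduced response
```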


2020, Vol 46 (5), pp. 1053-1059
Author(s): Victor W Kilonzo, Robert A Sweet, Jill R Glausier, Matthew W Pitts

Abstract: Aberrant processing of auditory stimuli is a prominent feature of schizophrenia (SZ). Prior studies have chronicled histological abnormalities in the auditory cortex of SZ subjects, but whether deficits exist at upstream, subcortical levels has yet to be established. En route to the auditory cortex, ascending information is integrated in the inferior colliculus (IC), a highly gamma-aminobutyric acid (GABA)-ergic midbrain structure that is critically involved in auditory processing. The IC contains a dense population of parvalbumin-immunoreactive interneurons (PVIs), a cell type characterized by increased metabolic demands and enhanced vulnerability to oxidative stress. During development, PVIs are preferentially surrounded by perineuronal nets (PNNs), specialized extracellular matrix structures that promote redox homeostasis and excitatory/inhibitory balance. Moreover, in SZ, deficits in PVIs, PNNs, and the GABA-synthesizing enzyme glutamic acid decarboxylase (Gad67) have been extensively documented in cortical regions. Yet whether similar impairments exist in the IC is currently unknown. Thus, we compared IC samples of age- and sex-matched pairs of SZ and unaffected control subjects. SZ subjects exhibited lower levels of Gad67 immunoreactivity and a decreased density of PVIs and PNNs within the IC. These findings provide the first histological evidence of IC GABAergic abnormalities in SZ and suggest that SZ-related auditory dysfunction may stem, in part, from altered IC inhibitory tone.


2008, Vol 100 (2), pp. 888-906
Author(s): Daniel Bendor, Xiaoqin Wang

The core region of primate auditory cortex contains a primary and two primary-like fields (AI, primary auditory cortex; R, rostral field; RT, rostrotemporal field). Although it is reasonable to assume that multiple core fields provide an advantage for auditory processing over a single primary field, the differential roles these fields play, and whether they collectively form a functional pathway, such as for the processing of spectral or temporal information, are unknown. In this report we compare the response properties of neurons in the three core fields to pure tones and sinusoidally amplitude-modulated tones in awake marmoset monkeys (Callithrix jacchus). The main observations are as follows. (1) All three fields are responsive to spectrally narrowband sounds and are tonotopically organized. (2) Field AI responds more strongly to pure tones than fields R and RT. (3) Field RT neurons have lower best sound levels than neurons in fields AI and R; in addition, rate-level functions in field RT are more commonly nonmonotonic than in fields AI and R. (4) Neurons in fields RT and R have longer minimum latencies than field AI neurons. (5) Fields RT and R show poorer stimulus synchronization to amplitude-modulated tones than field AI. (6) Among the three core fields, the more rostral regions (R and RT) have narrower firing-rate-based modulation transfer functions than AI; this effect was seen only for the nonsynchronized neurons, and synchronized neurons showed no such trend.
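
For reference, stimulus synchronization to sinusoidally amplitude-modulated (SAM) tones is conventionally quantified by vector strength. The sketch below generates a SAM tone and computes that measure for two synthetic spike trains; the parameter values and spike times are illustrative assumptions, not data from the study.

```python
import numpy as np

fs = 100_000                                  # sample rate (Hz), illustrative
carrier_hz, mod_hz, dur = 4000.0, 16.0, 1.0   # SAM tone parameters

t = np.arange(int(dur * fs)) / fs
# Carrier multiplied by a raised sinusoidal envelope (100% modulation depth).
sam = (1 + np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

def vector_strength(spike_times, mod_hz):
    """Phase locking of spikes to the modulation cycle (0 = none, 1 = perfect)."""
    phases = 2 * np.pi * mod_hz * np.asarray(spike_times)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

rng = np.random.default_rng(0)
# A synchronized neuron fires near one phase of every modulation cycle;
# a nonsynchronized neuron fires at times unrelated to the envelope.
sync_spikes = np.arange(16) / mod_hz + rng.normal(0.0, 0.002, 16)
rand_spikes = rng.uniform(0.0, dur, 16)
print(vector_strength(sync_spikes, mod_hz))   # close to 1
print(vector_strength(rand_spikes, mod_hz))   # much smaller
```

A rate-based modulation transfer function, as in observation (6), instead plots mean firing rate against modulation frequency, which is why nonsynchronized neurons can still be characterized by it.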


Author(s): Josef P. Rauschecker

When one talks about hearing, some may first imagine the auricle (or external ear), the only visible part of the auditory system in humans and other mammals. Its shape and size vary among people, but it does not tell us much about a person's ability to hear (except perhaps their ability to localize sounds in space, where the shape of the auricle plays a certain role). Most of what is used for hearing is inside the head, particularly in the brain. The inner ear transforms mechanical vibrations into electrical signals; the auditory nerve then sends these signals into the brainstem, where intricate preprocessing occurs. Although auditory brainstem mechanisms are an important part of central auditory processing, it is the processing taking place in the cerebral cortex (with the thalamus as the mediator) that enables auditory perception and cognition. Human speech and the appreciation of music can hardly be imagined without a complex cortical network of specialized regions, each contributing different aspects of auditory cognitive abilities. During the evolution of these abilities in higher vertebrates, especially birds and mammals, the cortex played a crucial role, so a great deal of what is referred to as central auditory processing happens there. Whether it is the recognition of one's mother's voice, listening to Pavarotti singing or Yo-Yo Ma playing the cello, or hearing or reading Shakespeare's sonnets, the experience evokes electrical activity in the auditory cortex, but it does not end there. Large parts of frontal and parietal cortex receive auditory signals originating in auditory cortex, forming processing streams for auditory object recognition and auditory-motor control, before the signals are channeled into other parts of the brain for comprehension and enjoyment.


2008, Vol 100 (3), pp. 1622-1634
Author(s): Ling Qin, JingYu Wang, Yu Sato

Previous studies in anesthetized animals reported that the primary auditory cortex (A1) showed homogeneous phasic responses to FM tones, namely a transient response to a particular instantaneous frequency when FM sweeps traversed a neuron's tone-evoked receptive field (TRF). Here, in awake cats, we report that A1 cells exhibit heterogeneous FM responses, consisting of three patterns. The first is continuous firing when a slow FM sweep traverses the receptive field of a cell with a sustained tonal response; the duration and amplitude of the FM response decrease with increasing sweep speed. The second pattern is transient firing corresponding to the cell's phasic tonal response. This response could be evoked only by a fast FM sweep through the cell's TRF, suggesting a preference for fast FM. The third pattern was associated with the off response to pure tones and was composed of several discrete response peaks during slow FM stimulation. These peaks were not predictable from the cell's tonal response but reliably reflected the times when the FM sweep crossed specific frequencies. Our A1 samples often exhibited a complex response pattern combining two or three of the basic patterns above, resulting in a heterogeneous response population. The diversity of FM responses suggests that A1 uses multiple mechanisms to fully represent the whole range of FM parameters, including frequency extent, sweep speed, and direction.
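
As a concrete reference for the stimulus dimensions involved (frequency extent, sweep speed, direction), the following sketch synthesizes FM sweeps; the logarithmic sweep profile, frequency range, and speeds are illustrative assumptions rather than the study's exact stimuli.

```python
import numpy as np

def fm_sweep(f_start, f_end, speed_oct_per_s, fs=100_000):
    """FM sweep moving at a constant speed in octaves per second.

    The instantaneous phase is the running integral of the instantaneous
    frequency; slower sweeps dwell longer inside a neuron's tone-evoked
    receptive field than faster ones.
    """
    n_octaves = np.log2(f_end / f_start)
    dur = abs(n_octaves) / speed_oct_per_s
    t = np.arange(int(dur * fs)) / fs
    inst_freq = f_start * 2.0 ** (np.sign(n_octaves) * speed_oct_per_s * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs  # integrate frequency
    return np.sin(phase)

slow_up   = fm_sweep(500, 8000, speed_oct_per_s=2)    # 4 octaves in 2 s
fast_up   = fm_sweep(500, 8000, speed_oct_per_s=64)   # same span in ~62 ms
fast_down = fm_sweep(8000, 500, speed_oct_per_s=64)   # reversed direction
```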


2010, Vol 104 (4), pp. 2075-2081
Author(s): Lars Strother, Adrian Aldcroft, Cheryl Lavell, Tutis Vilis

Functional MRI (fMRI) studies of the human object recognition system commonly identify object-selective cortical regions by comparing blood oxygen level–dependent (BOLD) responses to objects versus those to scrambled objects. Object selectivity distinguishes human lateral occipital cortex (LO) from earlier visual areas. Recent studies suggest that, in addition to being object selective, LO is retinotopically organized; LO represents both object and location information. Although LO responses to objects have been shown to depend on location, it is not known whether responses to scrambled objects vary similarly. This is important because it would suggest that the degree of object selectivity in LO does not vary with retinal stimulus position. We used a conventional functional localizer to identify human visual area LO by comparing BOLD responses to objects versus scrambled objects presented to either the upper (UVF) or lower (LVF) visual field. In agreement with recent findings, we found evidence of position-dependent responses to objects. However, we observed the same degree of position dependence for scrambled objects and thus object selectivity did not differ for UVF and LVF stimuli. We conclude that, in terms of BOLD response, LO discriminates objects from non-objects equally well in either visual field location, despite stronger responses to objects in the LVF.
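
The conclusion, stronger responses to LVF stimuli but equal object selectivity in both fields, can be expressed with a standard normalized contrast index. The sketch below is a hypothetical illustration; the response values are invented, not the study's data.

```python
import numpy as np

def selectivity_index(beta_objects, beta_scrambled):
    """Normalized object-vs-scrambled contrast of response amplitudes
    (e.g., per-voxel GLM betas). Values > 0 indicate object preference;
    the index is unchanged when both responses scale together.
    """
    obj = np.asarray(beta_objects, dtype=float)
    scr = np.asarray(beta_scrambled, dtype=float)
    return (obj - scr) / (obj + scr)

# Hypothetical LO responses: LVF responses are stronger overall, but the
# object/scrambled ratio is the same, so selectivity is identical.
print(selectivity_index([2.0], [1.0]))   # UVF -> [0.333...]
print(selectivity_index([3.0], [1.5]))   # LVF -> [0.333...]
```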


2019
Author(s): Jérémy Giroud, Agnès Trébuchon, Daniele Schön, Patrick Marquis, Catherine Liegeois-Chauvel, ...

Abstract: Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3 and ∼15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6 and ∼40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.

Author summary: Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions. Capitalizing on intracranial data from 96 epilepsy patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas is subtended by distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
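
A spectral response profile of this kind can be summarized as power per canonical frequency band. The sketch below is a generic illustration, not the authors' pipeline: the band edges follow the abstract, while the 1 kHz sampling rate and the synthetic two-oscillation signal are assumptions.

```python
import numpy as np

fs = 1000  # assumed sampling rate (Hz)
bands = {"delta": (1, 3), "theta": (4, 8),
         "beta": (12, 20), "low_gamma": (25, 50)}

def band_power(signal, fs, lo, hi):
    """Mean power of the Fourier components falling inside [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

# Synthetic "response": theta plus low-gamma oscillations in noise,
# mimicking the bimodal profile reported for primary/secondary regions.
rng = np.random.default_rng(1)
t = np.arange(2 * fs) / fs
resp = (np.sin(2 * np.pi * 6 * t)            # theta component (~6 Hz)
        + 0.5 * np.sin(2 * np.pi * 40 * t)   # low-gamma component (~40 Hz)
        + rng.normal(0.0, 0.5, t.size))      # background noise

profile = {name: band_power(resp, fs, lo, hi) for name, (lo, hi) in bands.items()}
print(profile)  # theta and low_gamma dominate for this synthetic signal
```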


2021
Author(s): Florian Occelli, Florian Hasselmann, Jérôme Bourien, Jean-Luc Puel, Nathalie Desvignes, ...

Abstract: People are increasingly exposed to environmental noise through the accumulation of occupational and recreational activities, an exposure generally considered harmless to the auditory system if the sound intensity remains below 80 dB. However, recent evidence of noise-induced peripheral synaptic damage and central reorganizations in the auditory cortex, despite normal audiometry results, has cast doubt on the innocuousness of lifetime exposure to environmental noise. We addressed this issue by exposing adult rats to realistic and nontraumatic environmental noise, within the daily permissible noise exposure limit for humans (80 dB sound pressure level, 8 h/day), for between 3 and 18 months. We found that temporary hearing loss could be detected after 6 months of daily exposure, without leading to permanent hearing loss or to missing synaptic ribbons in cochlear hair cells. The degraded temporal representation of sounds in the auditory cortex after 18 months of exposure was very different from the effects observed after only 3 months of exposure, suggesting that modifications to the neural code continue throughout a lifetime of exposure to noise.
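
The permissible-exposure figure quoted (80 dB SPL for 8 h/day) follows the equal-energy principle, under which every 3 dB increase in level halves the allowed daily duration. A minimal sketch of that arithmetic, assuming the 80 dB / 8 h criterion from the abstract and the ISO-style 3 dB exchange rate:

```python
def allowed_hours(level_db, criterion_db=80.0, criterion_hours=8.0,
                  exchange_db=3.0):
    """Permissible daily exposure duration under the equal-energy rule.

    Each `exchange_db` increase above the criterion level halves the
    allowed time; the criterion values mirror the limit quoted in the
    abstract and are assumptions of this illustration.
    """
    return criterion_hours / 2 ** ((level_db - criterion_db) / exchange_db)

print(allowed_hours(80))  # 8.0 h: the exposure used in the study
print(allowed_hours(83))  # 4.0 h: +3 dB halves the permitted duration
print(allowed_hours(92))  # 0.5 h
```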

