Music-selective cortex is sensitive to structure in both pitch and time

2021
Author(s): Dana L Boebinger, Sam V Norman-Haignere, Josh H McDermott, Nancy G Kanwisher

Converging evidence suggests that neural populations within human non-primary auditory cortex respond selectively to music. These neural populations respond strongly to a wide range of music stimuli, and weakly to other natural sounds and to synthetic control stimuli matched to music in many acoustic properties, suggesting that they are driven by high-level musical features. What are these features? Here we used fMRI to test the extent to which musical structure in pitch and time contributes to music-selective neural responses. We used voxel decomposition to derive music-selective response components in each of 15 participants individually, and then measured the response of these components to synthetic music clips in which we selectively disrupted musical structure by scrambling the note pitches, the onset times, or both. Both types of scrambling produced lower responses than clips with intact melodic and rhythmic structure. This effect was much stronger in the music-selective component than in the other response components, even those with substantial spatial overlap with the music component. We further found no evidence for any cortical regions sensitive to pitch but not time structure, or vice versa. Our results suggest that the processing of melody and rhythm is intertwined within auditory cortex.
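
The component logic described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' pipeline: it assumes a voxel-by-stimulus response matrix, uses scikit-learn's non-negative matrix factorization as a stand-in for the voxel-decomposition method, and compares each component's mean response across hypothetical intact and scrambled conditions (all data and condition labels are placeholders).

```python
# Minimal sketch of the voxel-decomposition logic, not the authors' pipeline.
# Assumes `voxel_resp` is a (n_voxels x n_stimuli) fMRI response matrix and that
# stimulus order groups intact, pitch-scrambled, time-scrambled, and both-scrambled clips.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_voxels, n_stimuli, n_components = 5000, 40, 6
voxel_resp = np.abs(rng.normal(size=(n_voxels, n_stimuli)))  # placeholder data

# Decompose voxel responses into components (voxel weights x response profiles).
model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
voxel_weights = model.fit_transform(voxel_resp)        # (n_voxels, n_components)
component_profiles = model.components_                 # (n_components, n_stimuli)

# Compare each component's mean response to intact vs. scrambled clips.
conditions = {"intact": slice(0, 10), "pitch_scrambled": slice(10, 20),
              "time_scrambled": slice(20, 30), "both_scrambled": slice(30, 40)}
for name, idx in conditions.items():
    print(name, component_profiles[:, idx].mean(axis=1).round(3))
```

In this framing, a music-selective component would show a markedly lower response profile for the scrambled conditions than for the intact clips, while other components would be largely unaffected.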

2014 · Vol 112 (6) · pp. 1584-1598
Author(s): Marino Pagan, Nicole C. Rust

The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address this challenge, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′).
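
A minimal sketch of the projection idea follows, under stated assumptions: this is not the authors' estimator, the basis is an arbitrary orthonormalized cosine set over conditions standing in for whatever intuitive signal components a task defines, and the analytic bias correction (subtracting an estimate of trial-noise variance divided by trial count) is a simplified stand-in for their derivation.

```python
# Hedged sketch: project a neuron's condition-averaged responses onto a predefined
# orthonormal basis and estimate (bias-corrected) signal modulation.
# Not the authors' exact estimator; variable names and the correction are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_conditions, n_trials = 8, 20
trial_resp = rng.poisson(lam=5.0, size=(n_conditions, n_trials)).astype(float)

# Predefined orthonormal basis over conditions (here: an orthonormalized cosine set).
raw_basis = np.vstack([np.ones(n_conditions)] +
                      [np.cos(2 * np.pi * k * np.arange(n_conditions) / n_conditions)
                       for k in range(1, n_conditions // 2 + 1)])
basis, _ = np.linalg.qr(raw_basis.T)         # columns are orthonormal basis vectors

mean_resp = trial_resp.mean(axis=1)
weights = basis.T @ mean_resp                # projection weights

# Raw squared weights overestimate signal modulation because trial noise adds a
# positive bias; subtract an analytic estimate of that bias (noise variance / n_trials).
noise_var = trial_resp.var(axis=1, ddof=1).mean()
signal_modulation = weights**2 - noise_var / n_trials
print(signal_modulation.round(3))
```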


2013 · Vol 25 (2) · pp. 175-187
Author(s): Jihoon Oh, Jae Hyung Kwon, Po Song Yang, Jaeseung Jeong

Neural responses in early sensory areas are influenced by top–down processing. In the visual system, early visual areas have been shown to actively participate in top–down processing based on their topographical properties. Although it has been suggested that the auditory cortex is involved in top–down control, functional evidence of topographic modulation is still lacking. Here, we show that mental auditory imagery for familiar melodies induces significant activation in the frequency-responsive areas of the primary auditory cortex (PAC). This activation is related to the characteristics of the imagery: when subjects were asked to imagine high-frequency melodies, we observed increased activation in the high- versus low-frequency response area; when the subjects were asked to imagine low-frequency melodies, the opposite was observed. Furthermore, among the tonotopic subfields of the PAC, area A1 was more closely tied to the observed frequency-related modulation than area R. Our findings suggest that top–down processing in the auditory cortex relies on a mechanism similar to that used in the perception of external auditory stimuli, comparable to what has been described in early visual areas.
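
The frequency-specific modulation described above can be illustrated with a simple ROI contrast. This is not the authors' analysis; the ROI assignments, response arrays, and effect sizes below are placeholders.

```python
# Illustrative sketch: given per-trial fMRI response estimates for voxels pre-assigned to
# high- and low-frequency tonotopic regions of PAC, compare activation when subjects
# imagine high- vs. low-frequency melodies. All numbers are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_high_voxels, n_low_voxels, n_trials = 120, 130, 24
resp = {
    ("high_roi", "imagine_high"): rng.normal(0.6, 1.0, (n_high_voxels, n_trials)),
    ("high_roi", "imagine_low"):  rng.normal(0.2, 1.0, (n_high_voxels, n_trials)),
    ("low_roi",  "imagine_high"): rng.normal(0.2, 1.0, (n_low_voxels, n_trials)),
    ("low_roi",  "imagine_low"):  rng.normal(0.6, 1.0, (n_low_voxels, n_trials)),
}

# Frequency-specific modulation: ROI-preferred imagery minus non-preferred imagery.
high_roi_effect = resp[("high_roi", "imagine_high")].mean() - resp[("high_roi", "imagine_low")].mean()
low_roi_effect  = resp[("low_roi", "imagine_low")].mean()  - resp[("low_roi", "imagine_high")].mean()
print(f"high-frequency ROI effect: {high_roi_effect:.3f}")
print(f"low-frequency ROI effect:  {low_roi_effect:.3f}")
```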


2000 · Vol 84 (3) · pp. 1453-1463
Author(s): Jos J. Eggermont

Responses of single units and multi-units in primary auditory cortex were recorded for gap-in-noise stimuli with different durations of the leading noise burst. Both firing-rate and inter-spike-interval representations were evaluated. The minimum detectable gap decreased exponentially with the duration of the leading burst, reaching an asymptote for durations of 100 ms. Although the leading and trailing noise bursts had the same frequency content, the dependence on leading-burst duration was correlated with psychophysical estimates of across-frequency-channel gap thresholds in humans (i.e., thresholds measured when the leading and trailing bursts differ in frequency content). The duration of the leading burst plus that of the gap was represented in the all-order inter-spike-interval histograms of cortical neurons. The recovery functions of cortical neurons could be modeled on the basis of fast synaptic depression and the after-hyperpolarization produced by the onset response to the leading noise burst. This suggests that both the minimum gap represented in the firing pattern of neurons in primary auditory cortex and the minimum gap detected in behavioral tasks are largely determined by properties intrinsic to those cells, or potentially to subcortical cells.
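
A toy version of the proposed recovery mechanism is sketched below. It is not Eggermont's fitted model: the onset response to the leading burst is assumed to trigger suppression (fast synaptic depression plus a slower after-hyperpolarization) that decays with time since that onset, and the time constants, weights, and detection criterion are purely illustrative.

```python
# Toy recovery model: suppression triggered by the leading-burst onset decays with
# (leading-burst duration + gap). The trailing-burst onset response counts as
# "detectable" once it exceeds an arbitrary criterion. Parameters are illustrative.
import numpy as np

def trailing_onset_response(gap_ms, lead_ms,
                            tau_depression=30.0, tau_ahp=150.0,
                            w_depression=0.6, w_ahp=0.4):
    """Relative onset response (0..1) to the trailing noise burst."""
    t = lead_ms + gap_ms  # time since the onset response to the leading burst
    suppression = w_depression * np.exp(-t / tau_depression) + w_ahp * np.exp(-t / tau_ahp)
    return 1.0 - suppression

criterion = 0.5
for lead in (5, 25, 50, 100, 500):
    gaps = np.arange(1, 129)
    detectable = gaps[np.array([trailing_onset_response(g, lead) for g in gaps]) > criterion]
    print(f"leading burst {lead:>3} ms -> minimum 'detectable' gap ~ {detectable[0]} ms")
```

With these placeholder parameters the minimum "detectable" gap shrinks as the leading burst lengthens and then asymptotes, qualitatively matching the dependence on leading-burst duration described in the abstract.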


eLife · 2021 · Vol 10
Author(s): Daniela Saderi, Zachary P Schwartz, Charles R Heller, Jacob R Pennington, Stephen V David

Both generalized arousal and engagement in a specific task influence sensory neural processing. To isolate the effects of these state variables in the auditory system, we recorded single-unit activity from primary auditory cortex (A1) and inferior colliculus (IC) of ferrets during a tone detection task, while monitoring arousal via changes in pupil size. We used a generalized linear model to assess the influence of task engagement and pupil size on sound-evoked activity. In both areas, these two variables affected independent neural populations. Pupil-size effects were more prominent in IC, while pupil and task-engagement effects were equally likely in A1. Task engagement was correlated with larger pupil size; thus, some apparent effects of task engagement should in fact be attributed to fluctuations in pupil size. These results indicate a hierarchy of auditory processing, where generalized arousal enhances activity in the midbrain, and effects specific to task engagement become more prominent in cortex.
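
A hedged sketch of the state-dependent regression follows, assuming per-trial spike counts with a binary task-engagement regressor and a continuous pupil-size regressor. It uses a Poisson GLM from statsmodels rather than the authors' exact model, and the data are simulated.

```python
# Sketch of the state-dependent GLM logic (not the authors' exact model): model per-trial
# sound-evoked spike counts as a baseline plus additive effects of task engagement
# (binary) and pupil size (continuous) on the log rate.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_trials = 400
df = pd.DataFrame({
    "engaged": rng.integers(0, 2, n_trials),        # 1 = active tone detection, 0 = passive
    "pupil":   rng.normal(0.0, 1.0, n_trials),      # z-scored pupil size
})
rate = np.exp(1.0 + 0.3 * df["engaged"] + 0.2 * df["pupil"])   # simulated ground truth
df["spikes"] = rng.poisson(rate)

X = sm.add_constant(df[["engaged", "pupil"]])
fit = sm.GLM(df["spikes"], X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])   # per-unit coefficients: which state variable matters?
```

Fitting this per unit, one could then ask whether the units with reliable engagement coefficients and those with reliable pupil coefficients overlap or form independent populations, in the spirit of the comparison between A1 and IC above.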


2021
Author(s): Drew Cappotto, HiJee Kang, Kongyan Li, Lucia Melloni, Jan Schnupp, ...

Recent studies have shown that stimulus history can be decoded via the use of broadband sensory impulses to reactivate mnemonic representations. It has also been shown that predictive mechanisms in the auditory system produce a tonotopic organization of neural activity similar to that elicited by the perceived stimuli. However, it remains unclear whether mnemonic and predictive information can be decoded from cortical activity simultaneously and from overlapping neural populations. Here, we recorded neural activity using electrocorticography (ECoG) in the auditory cortex of anesthetized rats while the animals were exposed to repeated stimulus sequences, in which events within the sequence were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to the broadband impulses at overlapping latencies, but are linked to largely independent neural populations. We also demonstrate that predictive representations are learned over the course of stimulation at two distinct time scales, reflected in two dissociable time windows of neural activity. These results establish a valuable tool for investigating the neural mechanisms of passive sequence learning, memory encoding, and prediction within a single paradigm, and provide novel evidence that predictive representations are learned even under anaesthesia.
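
The decoding logic can be illustrated with a simple cross-validated linear classifier. This is not the authors' pipeline: the data shapes, channel counts, and stimulus labels below are placeholders, and the same scheme would be applied separately to decode the preceding stimulus (memory) and the expected stimulus (prediction).

```python
# Illustrative decoding sketch: train a linear classifier to read out the identity of the
# preceding (or predicted) tone from ECoG responses evoked by the broadband noise burst
# that replaced it. All data below are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_timepoints = 200, 32, 50
impulse_resp = rng.normal(size=(n_trials, n_channels * n_timepoints))  # flattened ECoG responses
previous_tone = rng.integers(0, 4, n_trials)                           # identity of the replaced/preceding stimulus

decoder = LogisticRegression(max_iter=2000)
scores = cross_val_score(decoder, impulse_resp, previous_tone, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```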


2017
Author(s): Lars Buesing, Ana Calabrese, John P. Cunningham, Sarah M. N. Woolley, Liam Paninski

Vocal communication evokes robust responses in primary auditory cortex (A1) of songbirds, and single neurons from superficial and deep regions of A1 have been shown to respond selectively to songs over complex, synthetic sounds. However, little is known about how this song selectivity arises and manifests itself at the level of networks of neurons in songbird A1. Here, we examined the network-level coding of song and synthetic sounds in A1 by simultaneously recording the responses of multiple neurons in unanesthetized zebra finches. We developed a latent factor model of the joint simultaneous activity of these neural populations, and found that the shared variability in the activity has a surprisingly simple structure: it is dominated by an unobserved latent source with a single degree of freedom. This simple model captures the structure of the correlated activity in these populations in both spontaneous and stimulus-driven conditions, and for both song and synthetic stimuli. The inferred latent variability is strongly suppressed under stimulation, consistent with similar observations in a range of mammalian cortical regions.
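
A minimal sketch of the one-latent-factor idea is shown below, using Gaussian factor analysis on binned population activity rather than the authors' spiking latent-variable model; the data are simulated with a single shared source so the fitted model can be read off directly.

```python
# Minimal sketch (not the authors' model): fit a single-factor factor-analysis model to
# binned population activity and ask how much variance that one degree of freedom captures.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n_bins, n_neurons = 1000, 40
shared = rng.normal(size=(n_bins, 1))                       # one shared latent source
loadings = rng.normal(size=(1, n_neurons))
counts = shared @ loadings + rng.normal(scale=1.0, size=(n_bins, n_neurons))

fa = FactorAnalysis(n_components=1).fit(counts)
shared_var = (fa.components_**2).sum()                      # variance explained by the latent
private_var = fa.noise_variance_.sum()                      # neuron-specific (private) variance
print(f"fraction of variance captured by one latent factor: {shared_var / (shared_var + private_var):.2f}")
```

Comparing this shared-variance fraction (and the fit of one- vs. multi-factor models) across spontaneous and stimulus-driven epochs mirrors the comparison summarized in the abstract.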


2018
Author(s): Huan-huan Zeng, Jun-feng Huang, Ming Chen, Yun-qing Wen, Zhi-ming Shen, ...

The marmoset has emerged as a useful non-human primate species for studying brain structure and function. Previous studies of mouse primary auditory cortex (A1) showed that neurons with different frequency preferences are intermixed within local cortical regions, despite a large-scale tonotopic organization. Here we found that the frequency tuning properties of marmoset A1 neurons are highly uniform within local cortical regions. We first defined the tonotopic map of A1 using intrinsic optical imaging, and then used in vivo two-photon calcium imaging of large neuronal populations to examine tonotopic preferences at the single-cell level. We found that the tuning preferences of layer 2/3 neurons were highly homogeneous over hundreds of micrometers in both horizontal and vertical directions. Thus, marmoset A1 neurons are arranged tonotopically at both macro- and microscopic levels. Such an arrangement is likely to be important for the organization of auditory circuits in the primate brain.
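
The local-homogeneity analysis can be sketched as follows. This is not the authors' code: the cell positions, tuning curves, neighbourhood radius, and octave-spread metric are illustrative placeholders.

```python
# Hedged sketch: estimate each neuron's best frequency from two-photon responses to pure
# tones, then quantify how similar best frequencies are among neighbouring neurons.
import numpy as np

rng = np.random.default_rng(6)
n_neurons, n_freqs = 300, 9
freqs_khz = np.logspace(np.log10(0.5), np.log10(32), n_freqs)
positions_um = rng.uniform(0, 500, size=(n_neurons, 2))      # cell positions in the imaging field
tone_resp = rng.random((n_neurons, n_freqs))                 # placeholder dF/F tuning curves

best_freq = freqs_khz[tone_resp.argmax(axis=1)]              # best frequency per neuron

radius_um = 100.0
local_spread = []
for i in range(n_neurons):
    dist = np.linalg.norm(positions_um - positions_um[i], axis=1)
    neighbours = (dist < radius_um) & (dist > 0)
    if neighbours.any():
        # Spread of best frequencies (in octaves) among nearby neurons.
        local_spread.append(np.std(np.log2(best_freq[neighbours] / best_freq[i])))
print(f"median local best-frequency spread: {np.median(local_spread):.2f} octaves")
```

A small median spread would indicate the locally homogeneous, tonotopic arrangement reported for marmoset A1, in contrast to the intermixed ("salt-and-pepper") organization described in mouse.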


Author(s): Dana Boebinger, Samuel Norman-Haignere, Josh H. McDermott, Nancy Kanwisher

Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain.


2020
Author(s): Daniela Saderi, Zachary P. Schwartz, Charlie R. Heller, Jacob R. Pennington, Stephen V. David

The brain’s representation of sound is influenced by multiple aspects of internal behavioral state. Following engagement in an auditory discrimination task, both generalized arousal and task-specific control signals can influence auditory processing. To isolate effects of these state variables on auditory processing, we recorded single-unit activity from primary auditory cortex (A1) and the inferior colliculus (IC) of ferrets as they engaged in a go/no-go tone detection task while simultaneously monitoring arousal via pupillometry. We used a generalized linear model to isolate the contributions of task engagement and arousal on spontaneous and evoked neural activity. Fluctuations in pupil-indexed arousal were correlated with task engagement, but these two variables could be dissociated in most experiments. In both A1 and IC, individual units could be modulated by task and/or arousal, but the two state variables affected independent neural populations. Arousal effects were more prominent in IC, while arousal and engagement effects occurred with about equal frequency in A1. These results indicate that some changes in neural activity attributed to task engagement in previous studies should in fact be attributed to global fluctuations in arousal. Arousal effects also explain some persistent changes in neural activity observed in passive conditions post-behavior. Together, these results indicate a hierarchy in the auditory system, where generalized arousal enhances activity in the midbrain and cortex, while task-specific changes in neural coding become more prominent in cortex.

