Simultaneous mnemonic and predictive representations in the auditory cortex

2021
Author(s):
Drew Cappotto
HiJee Kang
Kongyan Li
Lucia Melloni
Jan Schnupp
...  

Recent studies have shown that stimulus history can be decoded by using broadband sensory impulses to reactivate mnemonic representations. It has also been shown that predictive mechanisms in the auditory system elicit a tonotopic organization of neural activity similar to that evoked by the perceived stimuli. However, it remains unclear whether mnemonic and predictive information can be decoded from cortical activity simultaneously and from overlapping neural populations. Here, we recorded neural activity using electrocorticography (ECoG) in the auditory cortex of anesthetized rats exposed to repeated stimulus sequences in which events were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to the broadband impulses at overlapping latencies, but that they are linked to largely independent neural populations. We also demonstrate that predictive representations are learned over the course of stimulation at two distinct time scales, reflected in two dissociable time windows of neural activity. These results establish a valuable tool for investigating the neural mechanisms of passive sequence learning, memory encoding, and prediction within a single paradigm, and provide novel evidence that predictive representations are learned even under anaesthesia.
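To make the decoding analysis concrete, here is a minimal sketch in the spirit of the paradigm above: a cross-validated linear classifier predicting the identity of the preceding (mnemonic) or expected (predictive) tone from impulse-evoked responses. All data are synthetic, and the shapes, class counts, and variable names are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
# Impulse-evoked ECoG responses, flattened across channels and time points.
X = rng.normal(size=(n_trials, n_channels * n_times))
y_history = rng.integers(0, 4, size=n_trials)    # identity of the preceding tone
y_predicted = rng.integers(0, 4, size=n_trials)  # identity of the expected tone

for label, y in [("stimulus history", y_history), ("predicted stimulus", y_predicted)]:
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{label}: decoding accuracy = {acc:.2f} (chance = 0.25)")
```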

2000
Vol 84 (3)
pp. 1453-1463
Author(s):  
Jos J. Eggermont

Responses of single units and multi-units in primary auditory cortex were recorded for gap-in-noise stimuli with leading noise bursts of different durations. Both firing-rate and inter-spike-interval representations were evaluated. The minimum detectable gap decreased exponentially with the duration of the leading burst, reaching an asymptote for durations of 100 ms. Although the leading and trailing noise bursts had the same frequency content, the dependence on leading-burst duration was correlated with psychophysical estimates of across-frequency-channel gap thresholds in humans (where the leading and trailing bursts have different frequency content). The duration of the leading burst plus that of the gap was represented in the all-order inter-spike-interval histograms of cortical neurons. The recovery functions of cortical neurons could be modeled on the basis of fast synaptic depression and the after-hyperpolarization produced by the onset response to the leading noise burst. This suggests that the minimum gap representation in the firing pattern of neurons in primary auditory cortex, and minimum gap detection in behavioral tasks, is largely determined by properties intrinsic to those cells or, potentially, to subcortical cells.
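The proposed recovery model lends itself to a simple illustration. The sketch below assumes the trailing-burst onset response recovers from fast synaptic depression and a slower after-hyperpolarization, both triggered by the onset response to the leading burst; time constants, weights, and the detection criterion are illustrative assumptions. Because recovery is timed from the leading-burst onset, the predicted minimum gap decreases with leading-burst duration and then asymptotes, qualitatively matching the data.

```python
import numpy as np

def recovery(t_ms, tau_fast=15.0, tau_slow=60.0, w_fast=0.6, w_slow=0.4):
    """Recovery from synaptic depression (fast) and after-hyperpolarization
    (slow), t_ms after the onset response to the leading burst."""
    return 1.0 - w_fast * np.exp(-t_ms / tau_fast) - w_slow * np.exp(-t_ms / tau_slow)

def min_gap(leading_ms, criterion=0.8, floor_ms=2.0):
    """Shortest gap whose trailing-burst response exceeds the criterion.
    The trailing onset occurs leading_ms + gap after the leading onset."""
    for gap in np.arange(1.0, 300.0):
        if recovery(leading_ms + gap) > criterion:
            return max(gap, floor_ms)
    return float("nan")

for L in (5, 10, 20, 50, 100, 200):
    print(f"leading burst {L:3d} ms -> minimum detectable gap ~ {min_gap(L):.0f} ms")
```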


eLife
2013
Vol 2
Author(s):
Dan FM Goodman
Victor Benichoux
Romain Brette

The activity of sensory neural populations carries information about the environment, which can be extracted from that activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations, one in each hemisphere, whereas earlier theories hypothesized that location is decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that the pooled activity of each hemisphere carries insufficient information to estimate sound direction reliably enough to be consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies.
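The decoder comparison can be illustrated with a toy population. In the sketch below, a "hemispheric" decoder reads out the difference of summed activity between two pools, while a pattern decoder matches the full response vector against templates up to an overall gain; the tuning curves, noise level, and the gain perturbation (standing in for a level or spectrum change) are assumptions, not the authors' models.

```python
import numpy as np

rng = np.random.default_rng(1)
azimuths = np.linspace(-90, 90, 37)            # candidate directions (deg)
best = rng.uniform(-90, 90, size=60)           # heterogeneous preferred directions
width = rng.uniform(30, 80, size=60)           # heterogeneous tuning widths

def rates(az, gain=1.0):
    """Population firing rates for a source at azimuth az."""
    return gain * np.exp(-0.5 * ((az - best) / width) ** 2)

templates = np.stack([rates(a) for a in azimuths])   # noise-free response patterns
hemi_code = np.array([p[best > 0].sum() - p[best < 0].sum()
                      for p in templates])           # summed-activity code

true_az, gain = 30.0, 0.7                            # e.g., a quieter source
r = rates(true_az, gain) + rng.normal(0, 0.05, size=60)

# Hemispheric decoder: invert the summed-difference code learned at gain 1.
h = r[best > 0].sum() - r[best < 0].sum()
az_hemi = azimuths[np.argmin(np.abs(hemi_code - h))]

# Pattern decoder: best-matching template up to an overall gain (cosine similarity).
sim = templates @ r / (np.linalg.norm(templates, axis=1) * np.linalg.norm(r))
az_pattern = azimuths[np.argmax(sim)]

print(f"true: {true_az}  hemispheric: {az_hemi:.0f}  pattern: {az_pattern:.0f}")
```

The pattern decoder is gain-invariant by construction, so it tolerates the level change that biases the summed-activity readout.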


2020
Author(s):
Daniela Saderi
Zachary P. Schwartz
Charlie R. Heller
Jacob R. Pennington
Stephen V. David

The brain's representation of sound is influenced by multiple aspects of internal behavioral state. Following engagement in an auditory discrimination task, both generalized arousal and task-specific control signals can influence auditory processing. To isolate the effects of these state variables on auditory processing, we recorded single-unit activity from primary auditory cortex (A1) and the inferior colliculus (IC) of ferrets as they engaged in a go/no-go tone detection task while arousal was monitored simultaneously via pupillometry. We used a generalized linear model to isolate the contributions of task engagement and arousal to spontaneous and evoked neural activity. Fluctuations in pupil-indexed arousal were correlated with task engagement, but the two variables could be dissociated in most experiments. In both A1 and IC, individual units could be modulated by task and/or arousal, but the two state variables affected independent neural populations. Arousal effects were more prominent in IC, while arousal and engagement effects occurred with about equal frequency in A1. These results indicate that some changes in neural activity attributed to task engagement in previous studies should in fact be attributed to global fluctuations in arousal. Arousal effects also explain some persistent changes in neural activity observed in passive conditions after behavior. Together, these results indicate a hierarchy in the auditory system, in which generalized arousal enhances activity in both midbrain and cortex, while task-specific changes in neural coding become more prominent in cortex.
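A minimal sketch of the kind of generalized linear model described above, assuming Poisson spike counts with separate task-engagement and pupil-indexed arousal regressors; the data are synthetic and this is an illustration, not the authors' fitting code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
task = rng.integers(0, 2, size=n)             # 0 = passive, 1 = engaged
pupil = rng.normal(size=n)                    # z-scored pupil diameter
pupil += 0.4 * task                           # arousal correlates with engagement
log_rate = 1.0 + 0.3 * task + 0.5 * pupil     # ground-truth state effects
spikes = rng.poisson(np.exp(log_rate))        # trial-wise spike counts

X = sm.add_constant(np.column_stack([task, pupil]))
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(fit.params)   # recovers separate task and pupil coefficients
```

Because engagement and pupil enter as separate regressors, the fitted coefficients dissociate the two state variables even when they are correlated, as in the experiments.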


2008
Vol 99 (4)
pp. 1628-1642
Author(s):
Shveta Malhotra
G. Christopher Stecker
John C. Middlebrooks
Stephen G. Lomber

We examined the contributions of primary auditory cortex (A1) and the dorsal zone of auditory cortex (DZ) to sound localization behavior during separate and combined unilateral and bilateral deactivation. From a central visual fixation point, cats learned to make an orienting response (head movement and approach) to a 100-ms broadband noise burst emitted from a central speaker or one of 12 peripheral sites (located in front of the animal, from left 90° to right 90°, at 15° intervals) along the horizontal plane. Following training, each cat was implanted with separate cryoloops over A1 and DZ bilaterally. Unilateral deactivation of A1 or DZ, or simultaneous unilateral deactivation of A1 and DZ (A1/DZ), resulted in spatial localization deficits confined to the contralateral hemifield, whereas sound localization to positions in the ipsilateral hemifield remained unaffected. Simultaneous bilateral deactivation of both A1 and DZ caused sound localization performance to drop from near-perfect to chance (7.7% correct, i.e., 1 of the 13 possible locations) across the entire field. Errors made during bilateral deactivation of A1/DZ tended to be confined to the same hemifield as the target. However, unlike the profound sound localization deficit that occurs when A1 and DZ are deactivated together, deactivation of either A1 or DZ alone produced partial, field-specific deficits. For A1, bilateral deactivation resulted in higher error rates (performance dropping to ∼45% correct) but relatively small errors (mostly within 30° of the target). In contrast, bilateral deactivation of DZ produced somewhat fewer errors (performance dropping to only ∼60% correct), but the errors tended to be larger, often into the incorrect hemifield. Therefore, individual deactivation of either A1 or DZ produced specific and unique sound localization deficits. The results of the present study reveal that DZ plays a role in sound localization. Along with previous anatomical and physiological data, these behavioral data support the view that A1 and DZ are distinct cortical areas. Finally, the findings that deactivation of either A1 or DZ alone produces partial sound localization deficits, whereas deactivation of either the posterior auditory field (PAF) or the anterior ectosylvian sulcus (AES) produces profound deficits, suggest that PAF and AES make more significant contributions to sound localization than either A1 or DZ.


2005
Vol 93 (1)
pp. 210-222
Author(s):
Michael P. Harms
John J. Guinan
Irina S. Sigalovsky
Jennifer R. Melcher

Functional magnetic resonance imaging (fMRI) of human auditory cortex has demonstrated a striking range of temporal waveshapes in responses to sound. Prolonged (30 s) low-rate (2/s) noise burst trains elicit “sustained” responses, whereas high-rate (35/s) trains elicit “phasic” responses with peaks just after train onset and offset. As a step toward understanding the significance of these responses for auditory processing, the present fMRI study sought to resolve exactly which features of sound determine cortical response waveshape. The results indicate that sound temporal envelope characteristics, but not sound level or bandwidth, strongly influence response waveshapes, and thus the underlying time patterns of neural activity. The results show that sensitivity to sound temporal envelope holds in both primary and nonprimary cortical areas, but nonprimary areas show more pronounced phasic responses for some types of stimuli (higher-rate trains, continuous noise), indicating more prominent neural activity at sound onset and offset. It has been hypothesized that the neural activity underlying the onset and offset peaks reflects the beginning and end of auditory perceptual events. The present data support this idea because sound temporal envelope, the sound characteristic that most strongly influences whether fMRI responses are phasic, also strongly influences whether successive stimuli (e.g., the bursts of a train) are perceptually grouped into a single auditory event. Thus fMRI waveshape may provide a window onto neural activity patterns that reflect the segmentation of our auditory environment into distinct, meaningful events.
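One way to picture how temporal envelope shapes fMRI waveshape is to model the neural drive as a sustained component plus brief onset and offset transients, convolved with a hemodynamic response. The sketch below is illustrative only: the component weights for the "low-rate" and "high-rate" conditions and the HRF shape are assumptions, not fitted values from the study.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1
t = np.arange(0, 60, dt)
stim = (t >= 5) & (t < 35)                     # a 30-s burst train starting at 5 s

def hrf(tt):
    """A simple double-gamma hemodynamic response function (assumed shape)."""
    return gamma.pdf(tt, 6) - gamma.pdf(tt, 16) / 6

h = hrf(np.arange(0, 30, dt))

def predict(w_sustained, w_transient):
    """Neural drive = sustained component + brief onset/offset transients."""
    neural = w_sustained * stim.astype(float)
    neural[np.argmax(stim)] += w_transient / dt                 # onset transient
    neural[np.argmax((~stim) & (t >= 5))] += w_transient / dt   # offset transient
    return np.convolve(neural, h)[: len(t)] * dt

sustained_resp = predict(w_sustained=1.0, w_transient=0.3)   # "sustained" (low-rate)
phasic_resp = predict(w_sustained=0.1, w_transient=1.5)      # "phasic" (high-rate)
```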


2021
Author(s):
Dana L Boebinger
Sam V Norman-Haignere
Josh H McDermott
Nancy G Kanwisher

Converging evidence suggests that neural populations within human non-primary auditory cortex respond selectively to music. These neural populations respond strongly to a wide range of music stimuli, and weakly both to other natural sounds and to synthetic control stimuli matched to music in many acoustic properties, suggesting that they are driven by high-level musical features. What are these features? Here we used fMRI to test the extent to which musical structure in pitch and time contributes to music-selective neural responses. We used voxel decomposition to derive music-selective response components in each of 15 participants individually, and then measured the response of these components to synthetic music clips in which we selectively disrupted musical structure by scrambling the note pitches, the note onset times, or both. Both types of scrambling produced lower responses than when melodic or rhythmic structure was intact. This effect was much stronger in the music-selective component than in the other response components, even those with substantial spatial overlap with the music component. We further found no evidence for any cortical regions sensitive to pitch but not time structure, or vice versa. Our results suggest that the processing of melody and rhythm are intertwined within auditory cortex.
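As a rough illustration of the voxel-decomposition idea, the sketch below factors a synthetic voxels-by-sounds response matrix into a small number of components, one of which is constructed to be music-selective. Non-negative matrix factorization is used here as a stand-in for the authors' decomposition method, and all data and dimensions are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_voxels, n_sounds, n_music = 2000, 165, 30
is_music = np.zeros(n_sounds, dtype=bool)
is_music[:n_music] = True

# Two latent components: one generic, one music-selective.
comp_sound = np.stack([rng.uniform(0.5, 1.0, n_sounds),     # generic response profile
                       np.where(is_music, 1.0, 0.1)])       # music-selective profile
comp_voxel = rng.uniform(0, 1, size=(n_voxels, 2))          # voxel weights
R = comp_voxel @ comp_sound + rng.uniform(0, 0.1, (n_voxels, n_sounds))

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(R)      # recovered voxel weights
H = model.components_           # recovered component response profiles
ratios = H[:, is_music].mean(1) / H[:, ~is_music].mean(1)
idx = int(np.argmax(ratios))
print(f"component {idx} responds {ratios[idx]:.1f}x more to music")
```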


eLife
2016
Vol 5
Author(s):
Connie Cheung
Liberty S Hamilton
Keith Johnson
Edward F Chang

In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different from those during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout somatotopic representations of the speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features, similar to auditory cortex, rather than along articulatory features as during speaking. Motor cortex thus does not contain articulatory representations of perceived actions in speech; rather, it represents auditory vocal information.
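The comparison of acoustic versus articulatory organization can be framed as a representational similarity analysis, sketched below under synthetic assumptions: the similarity structure of simulated listening responses is correlated against hypothetical acoustic and articulatory feature models. RSA is used here as an illustrative stand-in for the authors' feature-encoding analyses.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_phonemes, n_electrodes = 20, 64
acoustic = rng.normal(size=(n_phonemes, 5))       # e.g., spectral features (assumed)
articulatory = rng.normal(size=(n_phonemes, 5))   # e.g., place/manner features (assumed)

# Simulate listening responses organized along acoustic features.
neural = (acoustic @ rng.normal(size=(5, n_electrodes))
          + 0.5 * rng.normal(size=(n_phonemes, n_electrodes)))

rdm_neural = pdist(neural, "correlation")         # neural dissimilarity matrix
for name, feats in [("acoustic", acoustic), ("articulatory", articulatory)]:
    rho, _ = spearmanr(rdm_neural, pdist(feats, "euclidean"))
    print(f"{name} model: rho = {rho:.2f}")
```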


Science
2020
Vol 369 (6507)
eabb0184
Author(s):
Philippe Domenech
Sylvain Rheims
Etienne Koechlin

Everyday life often requires arbitrating between pursuing an ongoing action plan (and possibly adjusting it) and exploring a new action plan instead. Resolving this so-called exploitation-exploration dilemma involves the medial prefrontal cortex (mPFC). Using human intracranial electrophysiological recordings, we discovered that neural activity in the ventral mPFC infers and tracks the reliability of the ongoing plan in order to proactively encode upcoming action outcomes as either learning signals or potential triggers to explore new plans. By contrast, the dorsal mPFC exhibits neural responses to action outcomes that result in either improving or abandoning the ongoing plan. Thus, the mPFC resolves the exploitation-exploration dilemma through a two-stage, predictive-coding process: a proactive ventromedial stage that constructs the functional significance of upcoming action outcomes, and a reactive dorsomedial stage that guides behavior in response to action outcomes.
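A toy rendering of the proposed two-stage process, under assumptions that are ours rather than the authors': a proactive stage updates the reliability of the ongoing plan from each outcome, and a reactive stage either adjusts the plan or triggers exploration once reliability falls below chance.

```python
import numpy as np

rng = np.random.default_rng(5)
reliability = 0.75          # current belief that the ongoing plan is valid
p_correct = 0.8             # outcome likelihood if the plan is valid (assumed)
chance = 0.5

for t in range(20):
    plan_valid = t < 10                      # environment changes at t = 10
    outcome = rng.random() < (p_correct if plan_valid else 1 - p_correct)
    # Proactive stage: Bayesian update of plan reliability from the outcome.
    like_valid = p_correct if outcome else 1 - p_correct
    like_invalid = 1 - like_valid
    reliability = (like_valid * reliability
                   / (like_valid * reliability + like_invalid * (1 - reliability)))
    # Reactive stage: exploit (adjust plan) while reliable, explore otherwise.
    if reliability < chance:
        print(f"t={t}: reliability {reliability:.2f} < chance -> explore new plan")
        reliability = 0.75                   # reset belief for the new plan
    else:
        print(f"t={t}: reliability {reliability:.2f} -> adjust ongoing plan")
```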


2017
Author(s):
Alexander J. Billig
Matthew H. Davis
Robert P. Carlyon

Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources, based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure tones H and L presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L--) sequences. Although instructing listeners to try to integrate or segregate the sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences while we recorded neural activity using magnetoencephalography. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports: stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when they attempted to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization.

Significance Statement: Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects we perceive depends not only on the physical attributes of our environment but also on how we intend to experience it.
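A hedged sketch of the classification analysis described above: a cross-validated linear classifier trained on MEG sensor patterns to distinguish periods reported as integrated versus segregated. Sensor counts, epoch counts, and labels are synthetic assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_epochs, n_sensors = 300, 204
X = rng.normal(size=(n_epochs, n_sensors))   # stimulus-locked MEG patterns
y = rng.integers(0, 2, size=n_epochs)        # 0 = integrated, 1 = segregated

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding accuracy = {acc:.2f} (chance = 0.50)")
```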

