Short-Term Sound Temporal Envelope Characteristics Determine Multisecond Time Patterns of Activity in Human Auditory Cortex as Shown by fMRI

2005 ◽  
Vol 93 (1) ◽  
pp. 210-222 ◽  
Author(s):  
Michael P. Harms ◽  
John J. Guinan ◽  
Irina S. Sigalovsky ◽  
Jennifer R. Melcher

Functional magnetic resonance imaging (fMRI) of human auditory cortex has demonstrated a striking range of temporal waveshapes in responses to sound. Prolonged (30 s) low-rate (2/s) noise burst trains elicit “sustained” responses, whereas high-rate (35/s) trains elicit “phasic” responses with peaks just after train onset and offset. As a step toward understanding the significance of these responses for auditory processing, the present fMRI study sought to resolve exactly which features of sound determine cortical response waveshape. The results indicate that sound temporal envelope characteristics, but not sound level or bandwidth, strongly influence response waveshapes, and thus the underlying time patterns of neural activity. The results show that sensitivity to sound temporal envelope holds in both primary and nonprimary cortical areas, but nonprimary areas show more pronounced phasic responses for some types of stimuli (higher-rate trains, continuous noise), indicating more prominent neural activity at sound onset and offset. It has been hypothesized that the neural activity underlying the onset and offset peaks reflects the beginning and end of auditory perceptual events. The present data support this idea because sound temporal envelope, the sound characteristic that most strongly influences whether fMRI responses are phasic, also strongly influences whether successive stimuli (e.g., the bursts of a train) are perceptually grouped into a single auditory event. Thus fMRI waveshape may provide a window onto neural activity patterns that reflect the segmentation of our auditory environment into distinct, meaningful events.
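The stimulus parameters given here are concrete enough to sketch in code. The following Python snippet is a minimal illustration, not the authors' actual stimulus code: it synthesizes 30 s noise burst trains at the two presentation rates, with an assumed 25 ms burst duration and 44.1 kHz sampling rate.

```python
# Minimal illustration (not the authors' stimulus code): synthesize the two
# noise burst trains described above. The 25 ms burst duration and 44.1 kHz
# sampling rate are assumed values for demonstration.
import numpy as np

def noise_burst_train(rate_hz, train_dur=30.0, burst_dur=0.025, fs=44100, seed=0):
    """Return a train_dur-second train of white-noise bursts at rate_hz bursts/s."""
    rng = np.random.default_rng(seed)
    n = int(train_dur * fs)
    train = np.zeros(n)
    burst_len = int(burst_dur * fs)
    for onset in np.arange(0.0, train_dur, 1.0 / rate_hz):
        i = int(onset * fs)
        train[i:i + burst_len] = rng.standard_normal(min(burst_len, n - i))
    return train

low_rate = noise_burst_train(2.0)    # 2/s  -> "sustained" cortical fMRI response
high_rate = noise_burst_train(35.0)  # 35/s -> "phasic" onset/offset response
```

Note that at 35/s the assumed 25 ms bursts nearly abut, so the high-rate train approaches continuous noise, consistent with the grouping of bursts into a single perceptual event discussed above.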

2021 ◽  
Vol 13 ◽  
Author(s):  
Fuxin Ren ◽  
Wen Ma ◽  
Wei Zong ◽  
Ning Li ◽  
Xiao Li ◽  
...  

Presbycusis (PC) is characterized by preferential hearing loss at high frequencies and difficulty in speech recognition in noisy environments. Previous studies have linked PC to cognitive impairment, accelerated cognitive decline and incident Alzheimer’s disease. However, the neural mechanisms of cognitive impairment in patients with PC remain unclear. Although resting-state functional magnetic resonance imaging (rs-fMRI) studies have explored low-frequency oscillation (LFO) connectivity or amplitude of PC-related neural activity, it remains unclear whether the abnormalities occur within all frequency bands or within specific frequency bands. Fifty-one PC patients and fifty-one well-matched normal-hearing controls participated in this study. The LFO amplitudes were investigated using the amplitude of low-frequency fluctuation (ALFF) at different frequency bands (slow-4 and slow-5). PC patients showed abnormal LFO amplitudes in the Heschl’s gyrus, dorsolateral prefrontal cortex (dlPFC), frontal eye field and key nodes of the speech network exclusively in slow-4, which suggests that abnormal spontaneous neural activity in PC is frequency dependent. Our findings also revealed stronger functional connectivity between the dlPFC and the posterodorsal stream of auditory processing, as well as lower functional coupling between the posterior cingulate cortex (PCC) and key nodes of the default mode network (DMN), both of which were associated with cognitive impairment in PC patients. These results might reflect cross-modal plasticity and higher-order cognitive participation of the auditory cortex after partial hearing deprivation. Our findings indicate that frequency-specific analysis of ALFF could provide valuable insights into functional alterations in the auditory cortex and non-auditory regions involved in cognitive impairment associated with PC.
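For readers unfamiliar with ALFF, the computation is straightforward to sketch. The snippet below is a minimal single-voxel illustration, assuming the conventional slow-4 (0.027-0.073 Hz) and slow-5 (0.01-0.027 Hz) band limits; the repetition time and the random time series are placeholders, not the study's data.

```python
# Minimal sketch of band-limited ALFF for one voxel time series, assuming the
# conventional slow-4 and slow-5 band definitions; TR and data are placeholders.
import numpy as np

def alff(ts, tr, band):
    """Mean amplitude of the Fourier spectrum of ts within band (Hz)."""
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts)) / len(ts)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()

tr = 2.0                                  # assumed repetition time (s)
ts = np.random.randn(240)                 # stand-in for a voxel's BOLD series
alff_slow4 = alff(ts, tr, (0.027, 0.073))
alff_slow5 = alff(ts, tr, (0.010, 0.027))
```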


2021 ◽  
Author(s):  
Drew Cappotto ◽  
HiJee Kang ◽  
Kongyan Li ◽  
Lucia Melloni ◽  
Jan Schnupp ◽  
...  

Recent studies have shown that stimulus history can be decoded via the use of broadband sensory impulses to reactivate mnemonic representations. It has also been shown that predictive mechanisms in the auditory system produce tonotopically organized neural activity similar to that elicited by the perceived stimuli. However, it remains unclear whether mnemonic and predictive information can be decoded from cortical activity simultaneously and from overlapping neural populations. Here, we recorded neural activity using electrocorticography (ECoG) in the auditory cortex of anesthetized rats exposed to repeated stimulus sequences, where events within the sequence were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to the broadband impulses at overlapping latencies, but that they are linked to largely independent neural populations. We also demonstrate that predictive representations are learned over the course of stimulation at two distinct time scales, reflected in two dissociable time windows of neural activity. These results establish a valuable tool for investigating the neural mechanisms of passive sequence learning, memory encoding, and prediction within a single paradigm, and provide novel evidence for the learning of predictive representations even under anaesthesia.
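A latency-resolved decoding analysis of the kind described here can be sketched with a generic cross-validated classifier. The snippet below is an illustration under assumed array shapes and an assumed linear-discriminant decoder, not the authors' pipeline.

```python
# Illustrative decoding sketch (not the authors' pipeline): classify which
# stimulus preceded a broadband impulse from the impulse-evoked response.
# Array shapes, class count, and the LDA choice are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_trials, n_channels, n_times = 200, 32, 50
X = np.random.randn(n_trials, n_channels, n_times)  # impulse-evoked ECoG responses
y = np.random.randint(0, 4, n_trials)               # identity of preceding stimulus

# Decode separately at each post-impulse time bin to trace decoding latency
scores = [
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
]
# accuracy above chance (0.25 for 4 classes) at a latency indicates that
# stimulus history is decodable from activity at that latency
```

Repeating the same analysis with labels for the predicted (omitted) stimulus, and comparing classifier weights across channels, would address the overlap questions posed in the abstract.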


2020 ◽  
Author(s):  
Jean-Pierre R. Falet ◽  
Jonathan Côté ◽  
Veronica Tarka ◽  
Zaida-Escila Martinez-Moreno ◽  
Patrice Voss ◽  
...  

We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, the method estimates, via reverse correlation, spectrotemporal receptive fields (STRFs) in response to a dense pure-tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortical surface. We show that several neuronal populations can be distinguished by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables the analysis of complex temporal dynamics of auditory processing, such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the data needed to generate tonotopic maps is significantly shorter for MEG than for neuroimaging tools that acquire BOLD-like signals.
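Reverse correlation itself reduces to a response-weighted average of the preceding stimulus. The sketch below illustrates the core computation under simplifying assumptions (approximately white stimulus statistics, a single response channel); the spectrogram and response arrays are stand-ins, not the paper's data.

```python
# Minimal reverse-correlation sketch under simplifying assumptions; S and r
# are stand-ins for a dense-tone stimulus spectrogram and an MEG response.
import numpy as np

def strf_reverse_correlation(S, r, n_lags):
    """Estimate an STRF as the response-weighted average of preceding stimulus frames.

    S : (n_freqs, n_times) stimulus spectrogram (e.g., pure-tone amplitudes)
    r : (n_times,) response time course (e.g., a source-localized MEG signal)
    """
    n_freqs, n_times = S.shape
    strf = np.zeros((n_freqs, n_lags))
    for lag in range(n_lags):
        # correlate the response at time t with the stimulus at time t - lag
        strf[:, lag] = S[:, :n_times - lag] @ r[lag:] / (n_times - lag)
    return strf  # the peak frequency per source yields a tonotopic map

S = np.random.rand(64, 5000)   # 64 tone frequencies x 5000 time bins
r = np.random.randn(5000)      # response time course
strf = strf_reverse_correlation(S, r, n_lags=20)
```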


2019 ◽  
Author(s):  
Fabiano Baroni ◽  
Benjamin Morillon ◽  
Agnès Trébuchon ◽  
Catherine Liégeois-Chauvel ◽  
Itsaso Olasagasti ◽  
...  

Neural oscillations in auditory cortex are argued to support the parsing and representation of speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they listened to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better decoding from phase than from power information, and a bimodal spectral profile of information content, with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content. Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual-timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.

Author summary: Like most animal vocalizations, speech results from a pseudo-rhythmic process that reflects the convergence of motor and auditory neural substrates and the natural resonance properties of the vocal apparatus towards efficient communication. Here, we leverage the excellent temporal and spatial resolution of intracranial EEG to demonstrate that neural activity in human early auditory cortical areas during speech perception exhibits a dual-scale spectral profile of power changes, with speech increasing power in low (delta-theta) and high (gamma to high-gamma) frequency ranges while decreasing power at intermediate (alpha-beta) frequencies. Single-trial multivariate decoding also resulted in a bimodal spectral profile of information content, with better decoding at low and high frequencies than at intermediate ones. From both spectral and informational perspectives, these patterns are consistent with the activity of a relatively simple computational model comprising two reciprocally connected excitatory/inhibitory sub-networks operating at different (low and high) timescales. By combining experimental, decoding and modeling approaches, we provide consistent evidence for the existence, information-coding value and underlying neuronal architecture of dual-timescale processing in human auditory cortex.
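The phase-versus-power decoding contrast described here can be sketched generically: bandpass filter, take the analytic signal, and decode phase and power features separately. The snippet below is an illustration with assumed band edges and a generic logistic-regression decoder; the epochs and labels are stand-ins for the single-trial iEEG data.

```python
# Sketch of the phase-vs-power decoding contrast, with assumed band edges and
# a generic linear decoder; x and y are stand-ins for single-trial iEEG data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 1000
n_trials, n_times = 100, 1000
x = np.random.randn(n_trials, n_times)   # single-trial iEEG epochs
y = np.random.randint(0, 2, n_trials)    # e.g., speech present vs absent (detection)

def band_features(x, lo, hi):
    """Return per-trial power and phase features within a frequency band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, x, axis=1), axis=1)
    power = np.abs(analytic) ** 2
    # represent phase by its sine/cosine so it is usable by linear decoders
    ang = np.angle(analytic)
    phase = np.concatenate([np.cos(ang), np.sin(ang)], axis=1)
    return power, phase

power, phase = band_features(x, 4, 8)    # theta band as one example
acc_power = cross_val_score(LogisticRegression(max_iter=1000), power, y, cv=5).mean()
acc_phase = cross_val_score(LogisticRegression(max_iter=1000), phase, y, cv=5).mean()
# repeating over bands traces the spectral profile of information content
```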


2020 ◽  
Author(s):  
Daniela Saderi ◽  
Zachary P. Schwartz ◽  
Charlie R. Heller ◽  
Jacob R. Pennington ◽  
Stephen V. David

The brain’s representation of sound is influenced by multiple aspects of internal behavioral state. Following engagement in an auditory discrimination task, both generalized arousal and task-specific control signals can influence auditory processing. To isolate the effects of these state variables on auditory processing, we recorded single-unit activity from primary auditory cortex (A1) and the inferior colliculus (IC) of ferrets as they engaged in a go/no-go tone detection task while we simultaneously monitored arousal via pupillometry. We used a generalized linear model to isolate the contributions of task engagement and arousal to spontaneous and evoked neural activity. Fluctuations in pupil-indexed arousal were correlated with task engagement, but these two variables could be dissociated in most experiments. In both A1 and IC, individual units could be modulated by task and/or arousal, but the two state variables affected independent neural populations. Arousal effects were more prominent in IC, while arousal and engagement effects occurred with about equal frequency in A1. These results indicate that some changes in neural activity attributed to task engagement in previous studies should in fact be attributed to global fluctuations in arousal. Arousal effects also explain some persistent changes in neural activity observed in passive conditions post-behavior. Together, these results indicate a hierarchy in the auditory system, in which generalized arousal enhances activity in both midbrain and cortex, while task-specific changes in neural coding become more prominent in cortex.
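The regression logic of this state-separation analysis is simple to sketch. The snippet below fits a Poisson GLM with separate engagement and pupil regressors; the regressor coding and the Poisson choice are plausible assumptions for illustration, not the authors' exact model.

```python
# Minimal sketch of a GLM separating task-engagement and pupil-indexed arousal
# contributions to spike rate; regressor coding and family are assumptions.
import numpy as np
import statsmodels.api as sm

n_trials = 400
engaged = np.random.randint(0, 2, n_trials)   # 1 = active task, 0 = passive
pupil = np.random.rand(n_trials)              # normalized pupil diameter
spikes = np.random.poisson(5, n_trials)       # per-trial spike counts (stand-in)

X = sm.add_constant(np.column_stack([engaged, pupil]))
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(fit.params)  # separate coefficients for engagement and arousal
```

Because engagement and pupil size are correlated, fitting both regressors jointly (rather than one at a time) is what allows their contributions to be dissociated.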


1998 ◽  
Vol 35 (3) ◽  
pp. 283-292 ◽  
Author(s):  
Marty G. Woldorff ◽  
Steven A. Hillyard ◽  
Chris C. Gallen ◽  
Scott R. Hampson ◽  
Floyd E. Bloom

2019 ◽  
Author(s):  
Daniel J. Gale ◽  
Corson N. Areshenkoff ◽  
Claire Honda ◽  
Ingrid S. Johnsrude ◽  
J. Randall Flanagan ◽  
...  

It is well established that movement planning recruits motor-related cortical brain areas in preparation for the forthcoming action. Given that an integral component of the control of action is the processing of sensory information throughout movement, we predicted that movement planning might also modulate early sensory cortical areas, readying them for sensory processing during the unfolding action. To test this hypothesis, we performed two human functional MRI studies involving separate delayed movement tasks and focused on pre-movement neural activity in early auditory cortex, given its direct connections to the motor system and evidence that it is modulated by motor cortex during movement in rodents. We show that effector-specific information (i.e., movements of the left vs. right hand in Experiment 1, and movements of the hand vs. eye in Experiment 2) can be decoded, well before movement, from neural activity in early auditory cortex. We find that this motor-related information is represented in a subregion of auditory cortex separate from the one carrying sensory-related information, and is present even when movements are cued visually instead of auditorily. These findings suggest that action planning, in addition to preparing the motor system for movement, involves selectively modulating primary sensory areas based on the intended action.
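Decoding effector identity from delay-period voxel patterns is a standard multivoxel pattern analysis, which the sketch below illustrates; the voxel counts, trial counts, and linear-SVM choice are assumptions for demonstration, not the authors' pipeline.

```python
# Sketch of pre-movement effector decoding from early auditory cortex voxels;
# shapes and the linear-SVM decoder are assumptions for illustration.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

n_trials, n_voxels = 120, 300
X = np.random.randn(n_trials, n_voxels)  # delay-period voxel patterns per trial
y = np.random.randint(0, 2, n_trials)    # 0 = left hand, 1 = right hand cue

acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=8).mean()
# above-chance accuracy before movement onset would indicate effector-specific
# planning signals in auditory cortex
```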


NeuroImage ◽  
2002 ◽  
Vol 15 (1) ◽  
pp. 207-216 ◽  
Author(s):  
Alexander Gutschalk ◽  
Roy D. Patterson ◽  
André Rupp ◽  
Stefan Uppenkamp ◽  
Michael Scherg

2008 ◽  
Vol 99 (3) ◽  
pp. 1152-1162 ◽  
Author(s):  
André Rupp ◽  
Norman Sieroka ◽  
Alexander Gutschalk ◽  
Torsten Dau

Harmonic tone complexes with component phases adjusted using a variant of a method proposed by Schroeder can produce pure-tone masked thresholds differing by >20 dB. This phenomenon has been qualitatively explained by the phase characteristics of the auditory filters on the basilar membrane, which interact differently with the flat envelopes of the Schroeder-phase maskers. We examined the influence of auditory-filter phase characteristics on the neural representation in the auditory cortex by investigating cortical auditory evoked fields (AEFs). We found that the P1m component exhibited larger amplitudes when a long-duration tone was presented in a repeating, linearly downward-sweeping (Schroeder-positive, or m+) masker than in a repeating, linearly upward-sweeping (Schroeder-negative, or m−) masker. We also examined the neural representation of short-duration tone pulses presented at different temporal positions within a single period of three maskers differing in their component phases (m+, m−, and sine-phase m0). The P1m amplitude varied with the position of the tone pulse in the masker and depended strongly on the masker waveform. The neuromagnetic results in all cases were consistent with the perceptual data obtained with the same stimuli and with results from simulations of neural activity at the output of cochlear preprocessing. These findings demonstrate that phase effects in peripheral auditory processing are accurately reflected up to the level of the auditory cortex.
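Schroeder-phase complexes are easy to synthesize, which the sketch below illustrates using the standard phase rule phi_n = ±pi·n(n+1)/N for N components; the fundamental frequency, component count, and sampling rate are illustrative values, not the paper's exact stimulus parameters.

```python
# Sketch of Schroeder-phase masker synthesis, assuming the standard phase rule
# phi_n = sign * pi * n * (n + 1) / N; f0, N, and fs are illustrative values.
import numpy as np

def schroeder_complex(f0=100.0, n_components=40, sign=+1, dur=1.0, fs=44100):
    """Harmonic complex with Schroeder phases; sign=+1 gives m+, sign=-1 gives m-."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_components + 1):
        phi = sign * np.pi * n * (n + 1) / n_components
        x += np.cos(2 * np.pi * n * f0 * t + phi)
    return x / n_components  # flat temporal envelope, one frequency sweep per period

m_plus = schroeder_complex(sign=+1)   # downward-sweeping masker (m+) per the abstract
m_minus = schroeder_complex(sign=-1)  # upward-sweeping masker (m-)
```

Both waveforms have nearly flat acoustic envelopes; the perceptual and neural asymmetry arises only after the cochlea's phase response interacts with the maskers' opposite phase curvatures.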

