Mapping the human auditory cortex using spectrotemporal receptive fields generated with magnetoencephalography

2020. Authors: Jean-Pierre R. Falet, Jonathan Côté, Veronica Tarka, Zaida-Escila Martinez-Moreno, Patrice Voss, ...

Abstract: We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). Specifically, the method uses reverse correlation to estimate spectrotemporal receptive fields (STRFs) in response to a dense pure-tone stimulus, from which key spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortical surface. We show that several neuronal populations can be distinguished by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables the analysis of complex temporal dynamics of auditory processing, such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the data needed to generate tonotopic maps can be acquired in significantly less time than with neuroimaging tools that record BOLD-like signals.
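In its simplest form, reverse-correlation STRF estimation reduces to a response-weighted average of the stimulus spectrogram over a range of time lags. The Python sketch below illustrates that core computation; the function name and the plain cross-correlation formulation are illustrative assumptions, not the authors' exact MEG pipeline, which operates on source-localized signals.

```python
import numpy as np

def estimate_strf(stim_spec, response, n_lags):
    """Reverse-correlation STRF estimate.

    stim_spec: (n_freq, n_times) stimulus spectrogram (e.g., which pure
               tones are on in each time bin).
    response:  (n_times,) neural response time series.
    Returns an (n_freq, n_lags) kernel: the response-weighted average
    of the stimulus at each time lag.
    """
    n_freq, n_t = stim_spec.shape
    r = response - response.mean()          # remove the DC component
    strf = np.zeros((n_freq, n_lags))
    for lag in range(n_lags):
        # the response at time t is paired with the stimulus at t - lag
        strf[:, lag] = stim_spec[:, :n_t - lag] @ r[lag:] / (n_t - lag)
    return strf
```

With a dense, statistically independent tone stimulus, the estimate peaks at the frequency-lag combination that drives the response, which is what makes short recording times sufficient.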

NeuroImage, 2019, Vol. 186, pp. 647-666. Authors: Jonathan H. Venezia, Steven M. Thurman, Virginia M. Richards, Gregory Hickok

NeuroImage, 2021, pp. 118222. Authors: Jean-Pierre R. Falet, Jonathan Côté, Veronica Tarka, Zaida-Escila Martinez-Moreno, Patrice Voss, ...

2013, Vol. 109 (1), pp. 261-272. Authors: Alain de Cheveigné, Jean-Marc Edeline, Quentin Gaucher, Boris Gourévitch

Local field potentials (LFPs) recorded in the auditory cortex of mammals are known to reveal weakly selective and often multimodal spectrotemporal receptive fields in contrast to spiking activity. This may in part reflect the wider “listening sphere” of LFPs relative to spikes due to the greater current spread at low than high frequencies. We recorded LFPs and spikes from auditory cortex of guinea pigs using 16-channel electrode arrays. LFPs were processed by a component analysis technique that produces optimally tuned linear combinations of electrode signals. Linear combinations of LFPs were found to have sharply tuned responses, closer to spike-related tuning. The existence of a sharply tuned component implies that a cortical neuron (or group of neurons) capable of forming a linear combination of its inputs has access to that information. Linear combinations of signals from electrode arrays reveal information latent in the subspace spanned by multichannel LFP recordings and are justified by the fact that the observations themselves are linear combinations of neural sources.
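The component analysis described above finds channel weightings whose combined signal is maximally stimulus-locked. A minimal NumPy sketch, assuming a DSS-style criterion (ratio of trial-averaged to total power, solved as a generalized eigenproblem via whitening); the function name and details are illustrative, not necessarily the paper's exact algorithm:

```python
import numpy as np

def tuned_components(lfp_trials):
    """Linear combinations of LFP channels ranked by how reproducible
    (stimulus-locked) they are across trials.

    lfp_trials: (n_trials, n_channels, n_times) array.
    Returns (ratios, weights): weights[:, 0] is the combination whose
    trial-averaged power is largest relative to its total power.
    """
    n_trials, n_ch, n_t = lfp_trials.shape
    evoked = lfp_trials.mean(axis=0)                      # trial average
    c_total = sum(x @ x.T for x in lfp_trials) / (n_trials * n_t)
    c_evoked = evoked @ evoked.T / n_t
    # solve c_evoked w = ratio * c_total w by whitening with c_total,
    # then diagonalizing the whitened evoked covariance
    d, u = np.linalg.eigh(c_total)
    white = u / np.sqrt(d)
    ratios, v = np.linalg.eigh(white.T @ c_evoked @ white)
    order = np.argsort(ratios)[::-1]
    return ratios[order], (white @ v)[:, order]
```

Because the observed electrode signals are themselves linear mixtures of neural sources, such a linear unmixing can recover a sharply tuned component that no single electrode shows on its own.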


2001, Vol. 85 (4), pp. 1732-1749. Authors: Steven W. Cheung, Purvis H. Bedenbaugh, Srikantan S. Nagarajan, Christoph E. Schreiner

The spatial organization of response parameters in squirrel monkey primary auditory cortex (AI) accessible on the temporal gyrus was determined with the excitatory receptive field to pure tone stimuli. Dense, microelectrode mapping of the temporal gyrus in four animals revealed that characteristic frequency (CF) had a smooth, monotonic gradient that systematically changed from lower values (0.5 kHz) in the caudoventral quadrant to higher values (5–6 kHz) in the rostrodorsal quadrant. The extent of AI on the temporal gyrus was ∼4 mm in the rostrocaudal axis and 2–3 mm in the dorsoventral axis. The entire length of isofrequency contours below 6 kHz was accessible for study. Several independent, spatially organized functional response parameters were demonstrated for the squirrel monkey AI. Latency, the asymptotic minimum arrival time for spikes with increasing sound pressure levels at CF, was topographically organized as a monotonic gradient across AI nearly orthogonal to the CF gradient. Rostral AI had longer latencies (range = 4 ms). Threshold and bandwidth co-varied with the CF. Factoring out the contribution of the CF on threshold variance, residual threshold showed a monotonic gradient across AI that had higher values (range = 10 dB) caudally. The orientation of the threshold gradient was significantly different from the CF gradient. CF-corrected bandwidth, residual Q10, was spatially organized in local patches of coherent values whose loci were specific for each monkey. These data support the existence of multiple, overlying receptive field gradients within AI and form the basis to develop a conceptual framework to understand simple and complex sound coding in mammals.
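"Factoring out the contribution of the CF" can be illustrated with a simple regression residual: fit the parameter as a function of log-CF and keep what the fit does not explain. A hedged sketch in the spirit of the paper's residual threshold and residual Q10; the function name is hypothetical and the authors' actual correction may differ in form:

```python
import numpy as np

def cf_corrected(values, cf_khz):
    """Remove the CF-dependent trend from a response parameter by
    regressing it on log2(CF) and returning the residuals."""
    y = np.asarray(values, dtype=float)
    x = np.log2(np.asarray(cf_khz, dtype=float))
    X = np.column_stack([np.ones_like(x), x])       # intercept + slope
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta
```

Mapping these residuals back onto electrode positions is what reveals gradients (e.g., of threshold) that are spatially distinct from the CF gradient itself.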


PLoS ONE, 2015, Vol. 10 (9), pp. e0137915. Authors: Rick L. Jenison, Richard A. Reale, Amanda L. Armstrong, Hiroyuki Oya, Hiroto Kawasaki, ...

2003, Vol. 90 (4), pp. 2660-2675. Authors: Jennifer F. Linden, Robert C. Liu, Maneesh Sahani, Christoph E. Schreiner, Michael M. Merzenich

The mouse is a promising model system for auditory cortex research because of the powerful genetic tools available for manipulating its neural circuitry. Previous studies have identified two tonotopic auditory areas in the mouse, primary auditory cortex (AI) and anterior auditory field (AAF), but auditory receptive fields in these areas have not yet been described. To establish a foundation for investigating auditory cortical circuitry and plasticity in the mouse, we characterized receptive-field structure in AI and AAF of anesthetized mice using spectrally complex and temporally dynamic stimuli as well as simple tonal stimuli. Spectrotemporal receptive fields (STRFs) were derived from extracellularly recorded responses to complex stimuli, and frequency-intensity tuning curves were constructed from responses to simple tonal stimuli. Both analyses revealed temporal differences between AI and AAF responses: peak latencies and receptive-field durations for STRFs and first-spike latencies for responses to tone bursts were significantly longer in AI than in AAF. Spectral properties of AI and AAF receptive fields were more similar, although STRF bandwidths were slightly broader in AI than in AAF. Finally, in both AI and AAF, a substantial minority of STRFs were spectrotemporally inseparable. The spectrotemporal interaction typically appeared in the form of clearly disjoint excitatory and inhibitory subfields or an obvious spectrotemporal slant in the STRF. These data provide the first detailed description of auditory receptive fields in the mouse and suggest that although neurons in areas AI and AAF share many response characteristics, area AAF may be specialized for faster temporal processing.
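Spectrotemporal separability is commonly quantified by how much of an STRF's power its best rank-1 (frequency profile times temporal profile) approximation captures. An SVD-based index is a standard choice, sketched here as an illustration (not necessarily the exact metric used in the paper):

```python
import numpy as np

def separability_index(strf):
    """Fraction of an STRF's power captured by its best rank-1
    approximation. 1.0 means fully time-frequency separable; lower
    values indicate inseparable structure such as disjoint subfields
    or a spectrotemporal slant."""
    s = np.linalg.svd(np.asarray(strf, dtype=float), compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)
```

A perfectly separable STRF (an outer product of a spectral and a temporal profile) scores 1.0, while a slanted or multi-subfield STRF spreads power across several singular values and scores lower.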


2021. Authors: Pilar Montes-Lourido, Manaswini Kar, Stephen V. David, Srivatsun Sadagopan

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how non-selective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from non-selective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in non-selective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs, a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in three auditory processing stages: the thalamus (vMGB), and the thalamorecipient (L4) and superficial (L2/3) layers of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only one or two call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call-feature selectivity. Information-theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike.
These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that the observed cortical specializations for call processing emerge in A1, and they set the stage for further mechanistic studies.
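The information-theoretic analysis can be illustrated with a bare-bones plug-in estimator of the mutual information between call identity and a neuron's discretized response. Real analyses of this kind use bias-corrected estimators, so this is only a sketch, and the function name is our own:

```python
import numpy as np

def mutual_info_bits(call_ids, spike_counts):
    """Plug-in estimate (in bits) of the mutual information between
    stimulus (call) identity and a discretized response, computed
    from paired per-trial observations. No bias correction."""
    call_ids = np.asarray(call_ids)
    spike_counts = np.asarray(spike_counts)
    mi = 0.0
    for s in np.unique(call_ids):
        p_s = np.mean(call_ids == s)
        for c in np.unique(spike_counts):
            p_c = np.mean(spike_counts == c)
            p_sc = np.mean((call_ids == s) & (spike_counts == c))
            if p_sc > 0:                 # 0 * log(0) contributes nothing
                mi += p_sc * np.log2(p_sc / (p_s * p_c))
    return mi
```

A response that perfectly distinguishes two equiprobable call types carries 1 bit; dividing such an estimate by the mean spike count gives the bits-per-spike figure referred to above.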


2016. Authors: Liberty S. Hamilton, Erik Edwards, Edward F. Chang

Abstract: To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals, including phonetic and prosodic cues. Equally important is the detection of acoustic cues that give structure and context to the information we hear, such as sentence boundaries. How the brain organizes this information processing is unknown. Here, using data-driven computational methods on an extensive set of high-density intracranial recordings, we reveal a large-scale partitioning of the entire human speech cortex into two spatially distinct regions that detect important cues for parsing natural speech. These caudal (Zone 1) and rostral (Zone 2) regions work in parallel to detect onsets and prosodic information, respectively, within naturally spoken sentences. In contrast, local processing within each region supports phonetic feature encoding. These findings demonstrate a previously unrecognized, fundamental organizational property of the human auditory cortex.
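As a toy illustration of data-driven partitioning of electrode responses into two zones, here is a simple two-cluster k-means over per-electrode response profiles. This is purely illustrative: the study's actual method was an unsupervised decomposition of high-density recordings, not k-means, and the function name and initialization are our own.

```python
import numpy as np

def two_zone_partition(profiles, n_iter=25):
    """Toy two-cluster k-means over per-electrode response profiles,
    a stand-in for the data-driven partitioning described above.
    profiles: (n_electrodes, n_features). Returns a 0/1 label per
    electrode."""
    profiles = np.asarray(profiles, dtype=float)
    # deterministic initialization: the two most distant profiles
    d0 = np.linalg.norm(profiles[:, None] - profiles[None], axis=2)
    i, j = np.unravel_index(d0.argmax(), d0.shape)
    centers = profiles[[i, j]]
    labels = np.zeros(len(profiles), dtype=int)
    for _ in range(n_iter):
        # assign each electrode to its nearest center, then update centers
        d = np.linalg.norm(profiles[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = profiles[labels == k].mean(axis=0)
    return labels
```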

