Spectrotemporal Receptive Fields
Recently Published Documents

Total documents: 36 (last five years: 6)
H-index: 18 (last five years: 1)

2021
Author(s): James Bigelow, Ryan J. Morrill, Timothy Olsen, Stephani N. Bazarini, Andrea R. Hasenstaub

Recent studies have established significant anatomical and functional connections between visual areas and primary auditory cortex (A1), which may be important for perceptual processes such as communication and spatial perception. However, much remains unknown about the microcircuit structure of these interactions, including how visual context may affect different cell types across cortical layers, each with diverse responses to sound. The present study examined activity in putative excitatory and inhibitory neurons across cortical layers of A1 in awake male and female mice during auditory, visual, and audiovisual stimulation. We observed a subpopulation of A1 neurons responsive to visual stimuli alone, which were overwhelmingly found in the deep cortical layers and included both excitatory and inhibitory cells. Other neurons, whose responses to sound were modulated by visual context, likewise included excitatory and inhibitory cells but were less concentrated within the deepest cortical layers. Sensitivity to visual context also differed markedly between spike-rate and spike-timing responses to sound. Spike-rate responses were themselves heterogeneous: sound alone evoked stronger responses at stimulus onset, whereas sustained firing following the transient onset response showed greater sensitivity to visual context. Minimal overlap was observed between units with visually modulated firing-rate responses and units with visually modulated spectrotemporal receptive fields (STRFs), which are sensitive to changes in both spike rate and timing. Together, our results suggest that visual information in A1 is predominantly carried by deep-layer inputs and influences sound encoding across cortical layers, and that these influences independently affect qualitatively distinct responses to sound.


NeuroImage, 2021, pp. 118222
Author(s): Jean-Pierre R. Falet, Jonathan Côté, Veronica Tarka, Zaida-Escila Martinez-Moreno, Patrice Voss, ...

2021
Author(s): Jonathan Henry Venezia, Virginia Richards, Gregory Hickok

We recently developed a method to estimate speech-driven spectrotemporal receptive fields (STRFs) using fMRI. The method uses spectrotemporal modulation filtering, a form of acoustic distortion that renders speech sometimes intelligible and sometimes unintelligible. Using this method, we found significant STRF tuning only in classic auditory regions throughout the superior temporal lobes. However, our analysis was not optimized to detect small clusters of tuned STRFs as might be expected in non-auditory regions. Here, we re-analyze our data using a more sensitive multivariate procedure, and we identify STRF tuning in non-auditory regions including the left dorsal premotor cortex (left dPM), left inferior frontal gyrus (LIFG), and bilateral calcarine sulcus (calcS). All three regions responded more to intelligible than unintelligible speech, but left dPM and calcS responded significantly to vocal pitch and demonstrated strong functional connectivity with early auditory regions. However, only left dPM’s STRF predicted activation on trials rated as unintelligible by listeners, a hallmark auditory profile. LIFG, on the other hand, responded almost exclusively to intelligible speech and was functionally connected with classic speech-language regions in the superior temporal sulcus and middle temporal gyrus. LIFG’s STRF was also (weakly) able to predict activation on unintelligible trials, suggesting the presence of a partial ‘acoustic trace’ in the region. We conclude that left dPM is part of the human dorsal laryngeal motor cortex, a region previously shown to be capable of operating in an ‘auditory mode’ to encode vocal pitch. Further, given previous observations that LIFG is involved in syntactic working memory and/or processing of linear order, we conclude that LIFG is part of a higher-order speech circuit that exerts a top-down influence on processing of speech acoustics. Finally, because calcS is modulated by emotion, we speculate that changes in the quality of vocal pitch may have contributed to its response.
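The spectrotemporal modulation filtering underlying this method can be illustrated with a minimal sketch (not the authors' implementation): a log-magnitude spectrogram is transformed into the modulation (scale/rate) domain with a 2-D FFT, modulations outside chosen temporal-rate and spectral-scale cutoffs are zeroed, and the result is inverted. The function name and cutoff parameterization here are assumptions for illustration.

```python
import numpy as np

def filter_modulations(spectrogram, dt, df, rate_cut, scale_cut):
    """Low-pass a log-magnitude spectrogram in the modulation domain.

    spectrogram : (n_freq, n_time) array
    dt, df      : time-bin and frequency-bin spacing
    rate_cut    : max temporal modulation rate to keep (cycles per time unit)
    scale_cut   : max spectral modulation scale to keep (cycles per freq unit)
    """
    M = np.fft.fft2(spectrogram)
    # Modulation-domain axes: spectral scale along axis 0, temporal rate along axis 1.
    scales = np.fft.fftfreq(spectrogram.shape[0], d=df)
    rates = np.fft.fftfreq(spectrogram.shape[1], d=dt)
    keep = (np.abs(scales)[:, None] <= scale_cut) & (np.abs(rates)[None, :] <= rate_cut)
    # Zero the rejected modulations and transform back; result is real
    # up to numerical round-off because the mask is symmetric.
    return np.real(np.fft.ifft2(M * keep))
```

In the actual paradigm the retained modulation region was varied randomly across trials; a resynthesis step (e.g. iterative spectrogram inversion) would then convert the filtered spectrogram back to audio.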


2020
Author(s): Jean-Pierre R. Falet, Jonathan Côté, Veronica Tarka, Zaida-Escila Martinez-Moreno, Patrice Voss, ...

We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, this method estimates via reverse correlation the spectrotemporal receptive fields (STRFs) in response to a dense pure-tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortical surface. We show that several neuronal populations can be distinguished by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables analysis of complex temporal dynamics of auditory processing, such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the necessary data to generate tonotopic maps is significantly less for MEG than for other neuroimaging tools that acquire BOLD-like signals.
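Reverse correlation of the kind described above reduces, in its simplest form, to a response-weighted average of the preceding stimulus spectrogram. The following is a minimal sketch under that simplification (white-noise-like stimulus, single response channel); the function name and the normalization are assumptions, and a full treatment would also correct for stimulus autocorrelation.

```python
import numpy as np

def strf_reverse_correlation(stim, resp, n_lags):
    """Estimate an STRF by reverse correlation.

    stim   : (n_freq, n_time) stimulus spectrogram (dense tone sequence)
    resp   : (n_time,) response time series (e.g. an MEG source amplitude)
    n_lags : number of time lags to include (lag 0 = simultaneous)
    Returns an (n_freq, n_lags) STRF estimate.
    """
    r = resp - resp.mean()  # remove DC so silent bins do not bias the average
    n_freq, n_time = stim.shape
    strf = np.zeros((n_freq, n_lags))
    for lag in range(n_lags):
        # Correlate the response at time t with the stimulus at time t - lag.
        strf[:, lag] = stim[:, : n_time - lag] @ r[lag:]
    return strf / n_time
```

From such STRFs, the best frequency per cortical location (the row with the largest magnitude) yields tonotopic maps, and the lag of the peak gives a response-latency estimate.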


NeuroImage, 2019, Vol. 186, pp. 647-666
Author(s): Jonathan H. Venezia, Steven M. Thurman, Virginia M. Richards, Gregory Hickok

2018
Author(s): Jonathan Henry Venezia, Steven Matthew Thurman, Virginia Richards, Gregory Hickok

Existing data indicate that cortical speech processing is hierarchically organized. Numerous studies have shown that early auditory areas encode fine acoustic details while later areas encode abstracted speech patterns. However, it remains unclear precisely what speech information is encoded across these hierarchical levels. Estimation of speech-driven spectrotemporal receptive fields (STRFs) provides a means to explore cortical speech processing in terms of acoustic or linguistic information associated with characteristic spectrotemporal patterns. Here, we estimate STRFs from cortical responses to continuous speech measured with fMRI. Using a novel approach based on filtering randomly selected spectrotemporal modulations (STMs) from aurally presented sentences, STRFs were estimated for a group of listeners and categorized using a data-driven clustering algorithm. ‘Behavioral STRFs’ highlighting STMs crucial for speech recognition were derived from intelligibility judgments. Clustering revealed that STRFs in the supratemporal plane represented a broad range of STMs, while STRFs in the lateral temporal lobe represented circumscribed STM patterns important to intelligibility. Detailed analysis recovered a bilateral organization, with posterior-lateral regions preferentially processing STMs associated with phonological information and anterior-lateral regions preferentially processing STMs associated with word- and phrase-level information. Regions in lateral Heschl’s gyrus preferentially processed STMs associated with vocalic information (pitch).
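A ‘behavioral STRF’ derived from intelligibility judgments is closely related to a classification image: across trials, each random STM filter mask is averaged separately for intelligible and unintelligible judgments, and the difference highlights which modulations matter for recognition. The sketch below illustrates that idea under this classification-image assumption; the function name and mask encoding are hypothetical.

```python
import numpy as np

def behavioral_strf(masks, intelligible):
    """Classification-image estimate of a behavioral STRF.

    masks        : (n_trials, n_scale, n_rate) arrays indicating which
                   spectrotemporal modulations were retained on each trial
    intelligible : (n_trials,) booleans from listener judgments
    Returns the mean mask on intelligible trials minus the mean mask on
    unintelligible trials; positive cells mark STMs crucial for recognition.
    """
    masks = np.asarray(masks, dtype=float)
    intelligible = np.asarray(intelligible, dtype=bool)
    return masks[intelligible].mean(axis=0) - masks[~intelligible].mean(axis=0)
```

With enough trials, modulation cells irrelevant to intelligibility average out toward zero, leaving peaks only where retaining a modulation reliably changed the judgment.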

