Speech-Driven Spectrotemporal Receptive Fields Beyond the Auditory Cortex

2021 ◽  
Author(s):  
Jonathan Henry Venezia ◽  
Virginia Richards ◽  
Gregory Hickok

We recently developed a method to estimate speech-driven spectrotemporal receptive fields (STRFs) using fMRI. The method uses spectrotemporal modulation filtering, a form of acoustic distortion that renders speech sometimes intelligible and sometimes unintelligible. Using this method, we found significant STRF tuning only in classic auditory regions throughout the superior temporal lobes. However, our analysis was not optimized to detect small clusters of tuned STRFs as might be expected in non-auditory regions. Here, we re-analyze our data using a more sensitive multivariate procedure, and we identify STRF tuning in non-auditory regions including the left dorsal premotor cortex (left dPM), left inferior frontal gyrus (LIFG), and bilateral calcarine sulcus (calcS). All three regions responded more to intelligible than unintelligible speech, but left dPM and calcS responded significantly to vocal pitch and demonstrated strong functional connectivity with early auditory regions. However, only left dPM's STRF predicted activation on trials rated as unintelligible by listeners, a hallmark auditory profile. LIFG, on the other hand, responded almost exclusively to intelligible speech and was functionally connected with classic speech-language regions in the superior temporal sulcus and middle temporal gyrus. LIFG's STRF was also (weakly) able to predict activation on unintelligible trials, suggesting the presence of a partial 'acoustic trace' in the region. We conclude that left dPM is part of the human dorsal laryngeal motor cortex, a region previously shown to be capable of operating in an 'auditory mode' to encode vocal pitch. Further, given previous observations that LIFG is involved in syntactic working memory and/or processing of linear order, we conclude that LIFG is part of a higher-order speech circuit that exerts a top-down influence on processing of speech acoustics. Finally, because calcS is modulated by emotion, we speculate that changes in the quality of vocal pitch may have contributed to its response.
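The abstract does not spell out the fitting procedure, but a linear STRF of the kind described is conventionally estimated by ridge-regressing a measured response onto a time-lagged spectrogram. The sketch below is a generic illustration in plain NumPy, not the authors' pipeline; the lag count, regularization strength, and synthetic data are all invented for the demonstration.

```python
import numpy as np

def estimate_strf(spectrogram, response, n_lags, alpha=1.0):
    """Estimate a linear STRF by ridge regression.

    spectrogram : (n_times, n_freqs) stimulus representation
    response    : (n_times,) measured response (e.g., BOLD amplitude)
    n_lags      : length of the STRF's temporal window, in samples
    alpha       : ridge penalty
    Returns the STRF as an (n_lags, n_freqs) weight matrix.
    """
    n_times, n_freqs = spectrogram.shape
    # Lagged design matrix: row t holds the stimulus history that can
    # drive the response at time t (lag 0 = simultaneous).
    X = np.zeros((n_times, n_lags * n_freqs))
    for lag in range(n_lags):
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                        X.T @ response)
    return w.reshape(n_lags, n_freqs)

# Synthetic check: a response generated by a known STRF is recovered.
rng = np.random.default_rng(0)
spec = rng.standard_normal((2000, 8))          # toy "spectrogram"
true_strf = np.zeros((5, 8))
true_strf[2, 3] = 1.0                          # one tuned time-frequency bin
resp = np.zeros(2000)
for lag in range(5):                           # convolve stimulus with STRF
    resp[lag:] += spec[:2000 - lag] @ true_strf[lag]
est = estimate_strf(spec, resp, n_lags=5, alpha=1e-3)
```

On noiseless synthetic data the estimate matches the generating STRF almost exactly; with real fMRI responses the ridge penalty would be tuned by cross-validation.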

2009 ◽  
Vol 21 (4) ◽  
pp. 821-836 ◽  
Author(s):  
Benjamin Straube ◽  
Antonia Green ◽  
Susanne Weis ◽  
Anjan Chatterjee ◽  
Tilo Kircher

In human face-to-face communication, the content of speech is often illustrated by coverbal gestures. Behavioral evidence suggests that gestures provide advantages in the comprehension and memory of speech. Yet, how the human brain integrates abstract auditory and visual information into a common representation is not known. Our study investigates the neural basis of memory for bimodal speech and gesture representations. In this fMRI study, 12 participants were presented with video clips showing an actor performing meaningful metaphoric gestures (MG), unrelated, free gestures (FG), and no arm and hand movements (NG) accompanying sentences with abstract content. After the fMRI session, the participants performed a recognition task. Behaviorally, the participants showed the highest hit rate for sentences accompanied by meaningful metaphoric gestures. Despite comparable old/new discrimination performances (d′) for the three conditions, we obtained distinct memory-related left-hemispheric activations in the inferior frontal gyrus (IFG), the premotor cortex (BA 6), and the middle temporal gyrus (MTG), as well as significant correlations between hippocampal activation and memory performance in the metaphoric gesture condition. In contrast, unrelated speech and gesture information (FG) was processed in areas of the left occipito-temporal and cerebellar region and the right IFG, just as in the no-gesture condition (NG). We propose that the specific left-lateralized activation pattern for the metaphoric speech–gesture sentences reflects semantic integration of speech and gestures. These results provide novel evidence about the neural integration of abstract speech and gestures as it contributes to subsequent memory performance.


2005 ◽  
Vol 17 (2) ◽  
pp. 273-281 ◽  
Author(s):  
Marco Tettamanti ◽  
Giovanni Buccino ◽  
Maria Cristina Saccuman ◽  
Vittorio Gallese ◽  
Massimo Danna ◽  
...  

Observing actions made by others activates the cortical circuits responsible for the planning and execution of those same actions. This observation–execution matching system (mirror-neuron system) is thought to play an important role in the understanding of actions made by others. In an fMRI experiment, we tested whether this system also becomes active during the processing of action-related sentences. Participants listened to sentences describing actions performed with the mouth, the hand, or the leg. Abstract sentences of comparable syntactic structure were used as control stimuli. The results showed that listening to action-related sentences activates a left fronto-parieto-temporal network that includes the pars opercularis of the inferior frontal gyrus (Broca's area), those sectors of the premotor cortex where the actions described are motorically coded, as well as the inferior parietal lobule, the intraparietal sulcus, and the posterior middle temporal gyrus. These data provide the first direct evidence that listening to sentences that describe actions engages the visuomotor circuits which subserve action execution and observation.


2013 ◽  
Vol 109 (1) ◽  
pp. 261-272 ◽  
Author(s):  
Alain de Cheveigné ◽  
Jean-Marc Edeline ◽  
Quentin Gaucher ◽  
Boris Gourévitch

Local field potentials (LFPs) recorded in the auditory cortex of mammals are known to reveal weakly selective and often multimodal spectrotemporal receptive fields in contrast to spiking activity. This may in part reflect the wider “listening sphere” of LFPs relative to spikes due to the greater current spread at low than high frequencies. We recorded LFPs and spikes from auditory cortex of guinea pigs using 16-channel electrode arrays. LFPs were processed by a component analysis technique that produces optimally tuned linear combinations of electrode signals. Linear combinations of LFPs were found to have sharply tuned responses, closer to spike-related tuning. The existence of a sharply tuned component implies that a cortical neuron (or group of neurons) capable of forming a linear combination of its inputs has access to that information. Linear combinations of signals from electrode arrays reveal information latent in the subspace spanned by multichannel LFP recordings and are justified by the fact that the observations themselves are linear combinations of neural sources.
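The abstract does not give the algebra of the component analysis, but the stated goal (a linear combination of electrode signals with maximally reproducible, stimulus-locked tuning) can be posed as a generalized eigenvalue problem, in the spirit of denoising source separation. The sketch below is an illustration of that idea, not the authors' implementation; the trial counts, mixing weights, and noise level are synthetic.

```python
import numpy as np

def tuned_component(trials):
    """Weight vector w maximizing the stimulus-locked variance of w'x.

    trials : (n_trials, n_times, n_channels) repeated multichannel
             responses to the same stimulus.
    The ratio (w' C_evoked w) / (w' C_total w) is maximized by the
    leading eigenvector of C_total^-1 C_evoked.
    """
    n_trials, n_times, n_ch = trials.shape
    X = trials.reshape(-1, n_ch)            # all samples, stacked
    X = X - X.mean(axis=0)
    C_total = X.T @ X / X.shape[0]          # total covariance
    avg = trials.mean(axis=0)               # trial averaging keeps only
    avg = avg - avg.mean(axis=0)            # the stimulus-locked part
    C_evoked = avg.T @ avg / n_times
    evals, evecs = np.linalg.eig(np.linalg.solve(C_total, C_evoked))
    w = np.real(evecs[:, np.argmax(np.real(evals))])
    return w / np.linalg.norm(w)

# Demo: one sharply tuned, stimulus-locked source mixed into 16 noisy channels.
rng = np.random.default_rng(1)
source = np.sin(np.linspace(0, 6 * np.pi, 200))     # shared source waveform
mixing = rng.standard_normal(16)                    # channel mixing weights
trials = (source[None, :, None] * mixing[None, None, :]
          + 2.0 * rng.standard_normal((30, 200, 16)))
w = tuned_component(trials)
component = trials.mean(axis=0) @ w
r = np.corrcoef(component, source)[0, 1]            # tuning is recovered
```

The key point mirrors the abstract: no single channel need be sharply tuned for a sharply tuned component to exist in the subspace spanned by the array.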


2006 ◽  
Vol 18 (11) ◽  
pp. 1789-1798 ◽  
Author(s):  
Angela Bartolo ◽  
Francesca Benuzzi ◽  
Luca Nocetti ◽  
Patrizia Baraldi ◽  
Paolo Nichelli

Humor is a unique ability in human beings. Suls [A two-stage model for the appreciation of jokes and cartoons. In J. H. Goldstein & P. E. McGhee (Eds.), The psychology of humor: Theoretical perspectives and empirical issues. New York: Academic Press, 1972, pp. 81–100] proposed a two-stage model of humor: detection and resolution of incongruity. Incongruity is generated when a prediction is not confirmed in the final part of a story. To comprehend humor, it is necessary to revisit the story, transforming an incongruous situation into a funny, congruous one. Patient and neuroimaging studies carried out until now have led to differing outcomes. In particular, patient studies found that right brain-lesion patients have difficulties in humor comprehension, whereas neuroimaging studies suggested a major involvement of the left hemisphere in both humor detection and comprehension. To prevent activation of the left hemisphere due to language processing, we devised a nonverbal task comprising cartoon pairs. Our findings demonstrate activation of both the left and the right hemispheres when comparing funny versus nonfunny cartoons. In particular, we found activation of the right inferior frontal gyrus (BA 47), the left superior temporal gyrus (BA 38), the left middle temporal gyrus (BA 21), and the left cerebellum. These areas were also activated in a nonverbal task exploring attribution of intention [Brunet, E., Sarfati, Y., Hardy-Bayle, M. C., & Decety, J. A PET investigation of the attribution of intentions with a nonverbal task. Neuroimage, 11, 157–166, 2000]. We hypothesize that the resolution of incongruity might occur through a process of intention attribution. We also asked subjects to rate the funniness of each cartoon pair. A parametric analysis showed that the left amygdala was activated in relation to subjective amusement. We hypothesize that the amygdala plays a key role in giving humor an emotional dimension.


2010 ◽  
Vol 104 (2) ◽  
pp. 784-798 ◽  
Author(s):  
Noopur Amin ◽  
Patrick Gill ◽  
Frédéric E. Theunissen

We estimated the spectrotemporal receptive fields of neurons in the songbird auditory thalamus, nucleus ovoidalis, and compared the neural representation of complex sounds in the auditory thalamus to those found in the upstream auditory midbrain nucleus, mesencephalicus lateralis dorsalis (MLd), and the downstream auditory pallial region, field L. Our data refute the idea that the primary sensory thalamus acts as a simple relay nucleus: we find that the auditory thalamic receptive fields obtained in response to song are more complex than the ones found in the midbrain. Moreover, we find that linear tuning diversity and complexity in ovoidalis (Ov) are closer to those found in field L than in MLd. We also find prevalent tuning to intermediate spectral and temporal modulations, a feature that is unique to Ov. Thus even a feed-forward model of the sensory processing chain, in which neural responses in the sensory thalamus reveal intermediate response properties between those in the sensory periphery and those in the primary sensory cortex, is inadequate to describe the tuning found in Ov. Based on these results, we believe that the auditory thalamic circuitry plays an important role in generating novel complex representations for specific features found in natural sounds.


2019 ◽  
Vol 31 (4) ◽  
pp. 560-573 ◽  
Author(s):  
Kenny Skagerlund ◽  
Taylor Bolt ◽  
Jason S. Nomi ◽  
Mikael Skagenholt ◽  
Daniel Västfjäll ◽  
...  

What are the underlying neurocognitive mechanisms that give rise to mathematical competence? This study investigated the relationship between tests of mathematical ability completed outside the scanner and resting-state functional connectivity (FC) of cytoarchitectonically defined subdivisions of the parietal cortex in adults. These parietal areas are also involved in executive functions (EFs), so it remains unclear whether there are unique networks for mathematical processing. We investigated the neural networks for mathematical cognition and three measures of EF using resting-state fMRI data collected from 51 healthy adults. Using 10 ROIs as seeds in seed-to-whole-brain voxel-wise analyses, the results showed that arithmetical ability was correlated with FC between the right anterior intraparietal sulcus (hIP1) and the left supramarginal gyrus, and between the right posterior intraparietal sulcus (hIP3) and both the left middle frontal gyrus and the right premotor cortex. The connection between the posterior portion of the left angular gyrus and the left inferior frontal gyrus was also correlated with mathematical ability. Adding EF measures as covariates eliminated the connectivity patterns with nodes in the inferior frontal gyrus, angular gyrus, and middle frontal gyrus, suggesting neural overlap. Controlling for EF, we found unique connections correlated with mathematical ability between the right hIP1 and the left supramarginal gyrus and between bilateral hIP3 and bilateral premotor cortex. This is partly in line with the “mapping hypothesis” of numerical cognition, in which the right intraparietal sulcus subserves nonsymbolic number processing and connects to the left parietal cortex, responsible for calculation procedures. We show that FC within this circuitry is a significant predictor of math ability in adulthood.
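Seed-to-whole-brain analysis of the kind described is, at its core, a correlation of the mean seed-ROI time series with every other voxel, followed by a Fisher z-transform. A minimal NumPy sketch on synthetic data (voxel counts, noise levels, and the seed mask are invented, and real pipelines add nuisance regression and spatial statistics):

```python
import numpy as np

def seed_fc_map(voxel_ts, seed_mask):
    """Seed-based functional connectivity map.

    voxel_ts  : (n_timepoints, n_voxels) BOLD time series
    seed_mask : boolean (n_voxels,) selecting the seed ROI
    Returns Fisher z-transformed Pearson correlations, one per voxel.
    """
    seed = voxel_ts[:, seed_mask].mean(axis=1)        # mean seed signal
    vz = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    sz = (seed - seed.mean()) / seed.std()
    r = vz.T @ sz / len(sz)                           # Pearson r per voxel
    r = np.clip(r, -0.999999, 0.999999)               # keep arctanh finite
    return np.arctanh(r)                              # Fisher z

# Demo: 10 voxels share the seed signal, 20 are pure noise.
rng = np.random.default_rng(2)
seed_sig = rng.standard_normal(200)
connected = seed_sig[:, None] + 0.5 * rng.standard_normal((200, 10))
noise = rng.standard_normal((200, 20))
voxel_ts = np.hstack([seed_sig[:, None], connected, noise])
mask = np.zeros(31, dtype=bool)
mask[0] = True                                        # voxel 0 is the seed
z = seed_fc_map(voxel_ts, mask)
```

The z map can then be entered into group-level tests against a behavioral covariate (here, the math scores), which is how seed-based FC-behavior correlations like those reported above are computed.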


2018 ◽  
Author(s):  
Arafat Angulo-Perkins ◽  
Luis Concha

ABSTRACT Musicality refers to specific biological traits that allow us to perceive, generate and enjoy music. These abilities can be studied at different organizational levels (e.g., behavioural, physiological, evolutionary), and all of them reflect that music and speech processing are two different cognitive domains. Previous research has shown evidence of this functional divergence in auditory cortical regions in the superior temporal gyrus (such as the planum polare), showing increased activity upon listening to music, as compared to other complex acoustic signals. Here, we examine brain activity underlying vocal music and speech perception, comparing musicians and non-musicians. We designed a stimulation paradigm using the same voice to produce spoken sentences, hummed melodies, and sung sentences; the same sentences were used in the speech and song categories, and the same melodies were used in the musical categories (song and hum). Participants listened to this paradigm while we acquired functional magnetic resonance imaging (fMRI) data. Different analyses demonstrated greater involvement of specific auditory and motor regions during music perception, as compared to speech vocalizations. This music-sensitive network includes bilateral activation of the planum polare and planum temporale, as well as a group of regions lateralized to the right hemisphere that includes the supplementary motor area, premotor cortex and the inferior frontal gyrus. Our results show that the simple act of listening to music generates stronger activation of motor regions, possibly preparing us to move following the beat. Vocal music listening, with and without lyrics, is also accompanied by greater modulation of specific secondary auditory cortices such as the planum polare, confirming its crucial role in music processing independently of previous musical training. This study provides further evidence that music perception enhances audio-sensorimotor activity, which is crucial for clinical approaches exploring music-based therapies to improve communicative and motor skills.


2021 ◽  
Author(s):  
Dongmei Gao ◽  
Mingzhou Gao ◽  
Li An ◽  
Yanhong Yu ◽  
Jieqiong Wang ◽  
...  

Abstract
Background: Most studies on the mechanism behind premenstrual syndrome (PMS) have focused on fluctuating hormones, but little evidence exists regarding functional abnormalities in the affected brain regions of college students. Thus, the aim of this study was to localize the abnormal brain regions associated with PMS in college students using BOLD-fMRI.
Methods: Thirteen PMS patients and fifteen healthy control (HC) subjects underwent a BOLD-fMRI scan during the luteal phase while emotion was induced by depressive emotion pictures. The BOLD-fMRI data were processed with SPM8 and REST software on the MATLAB platform. Clusters larger than 389 contiguous voxels with a single-voxel threshold of P < 0.05 (after correction) were defined as areas of significant difference. An emotion report form and an instruction implementation checklist were used to evaluate the emotion induced by the pictures.
Results: Compared to the HC group, PMS patients showed enhanced activation in the right inferior occipital gyrus, right middle occipital gyrus, right lingual gyrus, right fusiform gyrus, right inferior temporal gyrus, cerebelum_crus1_R, cerebelum_6_R, the culmen, the cerebellum anterior lobe, the tuber, and the cerebellar tonsil. Decreased activation was found in sub-lobar, sub-gyral, and extra-nuclear regions, the right orbital part of the superior frontal gyrus, the right middle temporal gyrus, the right orbital part of the inferior frontal gyrus, the limbic lobe, the right insula, the bilateral anterior and adjacent cingulate gyrus, the bilateral caudate and caudate head, the bilateral putamen, and the left globus pallidus.
Conclusion: Our findings may improve our understanding of the neural mechanisms involved in PMS.


2011 ◽  
Vol 106 (2) ◽  
pp. 500-514 ◽  
Author(s):  
Joseph W. Schumacher ◽  
David M. Schneider ◽  
Sarah M. N. Woolley

The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals.


Neurology ◽  
2017 ◽  
Vol 89 (17) ◽  
pp. 1804-1810 ◽  
Author(s):  
Inbal Maidan ◽  
Keren Rosenberg-Katz ◽  
Yael Jacob ◽  
Nir Giladi ◽  
Jeffrey M. Hausdorff ◽  
...  

Objective: To compare the effects of 2 forms of exercise, i.e., a 6-week trial of treadmill training with virtual reality (TT + VR) that targets motor and cognitive aspects of safe ambulation and a 6-week trial of treadmill training alone (TT), on brain activation in patients with Parkinson disease (PD).
Methods: As part of a randomized controlled trial, patients were randomly assigned to 6 weeks of TT (n = 17, mean age 71.5 ± 1.5 years, disease duration 11.6 ± 1.6 years; 70% men) or TT + VR (n = 17, mean age 71.2 ± 1.7 years, disease duration 7.9 ± 1.4 years; 65% men). A previously validated fMRI imagery paradigm assessed changes in neural activation pre-training and post-training. Participants imagined themselves walking in 2 virtual scenes projected in the fMRI scanner: (1) a clear path and (2) a path with virtual obstacles. Whole-brain and region-of-interest analyses were performed.
Results: Brain activation patterns were similar between training arms before the interventions. After training, participants in the TT + VR arm had lower activation than the TT arm in Brodmann area 10 and the inferior frontal gyrus (cluster-level familywise error–corrected [FWEcorr] p < 0.012), while the TT arm had lower activation than TT + VR in the cerebellum and middle temporal gyrus (cluster-level FWEcorr p < 0.001). Changes in fall frequency and brain activation were correlated in the TT + VR arm.
Conclusions: Exercise modifies brain activation patterns in patients with PD in a mode-specific manner. Motor-cognitive training decreased the reliance on frontal regions, which apparently resulted in improved function, perhaps reflecting increased brain efficiency.

