Discerning the functional networks behind processing of music and speech through human vocalizations

2018. Author(s): Arafat Angulo-Perkins, Luis Concha

Musicality refers to the specific biological traits that allow us to perceive, generate, and enjoy music. These abilities can be studied at different organizational levels (e.g., behavioural, physiological, evolutionary), and all of them indicate that music and speech processing are two distinct cognitive domains. Previous research has shown evidence of this functional divergence in auditory cortical regions of the superior temporal gyrus (such as the planum polare), which show increased activity upon listening to music as compared to other complex acoustic signals. Here, we examine the brain activity underlying vocal music and speech perception while comparing musicians and non-musicians. We designed a stimulation paradigm using the same voice to produce spoken sentences, hummed melodies, and sung sentences; the same sentences were used in the speech and song categories, and the same melodies were used in the two musical categories (song and hum). Participants listened to these stimuli while we acquired functional magnetic resonance images (fMRI). Several analyses demonstrated greater involvement of specific auditory and motor regions during music perception than during speech perception. This music-sensitive network includes bilateral activation of the planum polare and planum temporale, as well as a group of right-lateralized regions comprising the supplementary motor area, premotor cortex, and inferior frontal gyrus. Our results show that the simple act of listening to music generates stronger activation of motor regions, possibly preparing us to move with the beat. Vocal music listening, with and without lyrics, is also accompanied by stronger modulation of specific secondary auditory cortices such as the planum polare, confirming its crucial role in music processing independently of previous musical training. This study provides further evidence that music perception enhances audio-sensorimotor activity, which is crucial for clinical approaches exploring music-based therapies to improve communicative and motor skills.
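
As a rough illustration of the kind of analysis such a paradigm implies (not the authors' actual pipeline), the sketch below fits a first-level GLM with nilearn and contrasts the musical vocalizations (hum, song) against speech. All file names, onsets, and the TR are placeholders.

```python
# A minimal sketch of a first-level fMRI contrast between vocal music and
# speech using nilearn. File names, onsets, and TR are hypothetical; the
# published analysis pipeline may differ.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical event table: each block is a spoken, hummed, or sung stimulus.
events = pd.DataFrame({
    "onset":      [0, 12, 24, 36, 48, 60],   # seconds
    "duration":   [8] * 6,
    "trial_type": ["speech", "hum", "song"] * 2,
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6)
model = model.fit("sub-01_task-listen_bold.nii.gz", events=events)

# Musical vocalizations (hum + song) minus speech, analogous to the
# music-versus-speech comparison described above.
zmap = model.compute_contrast("hum + song - 2 * speech", output_type="z_score")
```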

2005, Vol 93(2), pp. 1020-1034. Author(s): Eiichi Naito, Per E. Roland, Christian Grefkes, H. J. Choi, Simon Eickhoff, et al.

We have previously shown that motor areas are engaged when subjects experience illusory limb movements elicited by tendon vibration. Traditionally, however, cytoarchitectonic area 2 is held responsible for kinesthesia. Here we use functional magnetic resonance imaging and cytoarchitectural mapping to examine whether area 2 is engaged in kinesthesia; whether it is engaged bilaterally, given that area 2 in non-human primates has strong callosal connections; which other areas are active members of the network for kinesthesia; and whether the right hemisphere is dominant for kinesthesia, as has been suggested. Ten right-handed, blindfolded, healthy subjects participated. The tendon of the extensor carpi ulnaris muscle of the right or left hand was vibrated at 80 Hz, which elicited illusory palmar flexion in an immobile hand (illusion). As a control, we applied identical stimuli to the skin over the processus styloideus ulnae, which did not elicit any illusions (vibration). We found robust activations in cortical motor areas [areas 4a, 4p, and 6; dorsal premotor cortex (PMD) and bilateral supplementary motor area (SMA)] and the ipsilateral cerebellum during kinesthetic illusions (illusion-vibration). The illusions also activated contralateral area 2, and right area 2 was active regardless of whether the illusion involved the right or left hand. Right areas 44 and 45, the anterior intraparietal region (IP1), the caudo-lateral parietal opercular region (OP1), cortex rostral to the PMD, the anterior insula, and the superior temporal gyrus were likewise activated during illusions of either hand. These right-sided areas were significantly more activated than the corresponding areas in the left hemisphere. The present data, together with our previous results, suggest that human kinesthesia is associated with a network of active brain areas consisting of motor areas, the cerebellum, and right fronto-parietal areas, including high-order somatosensory areas. Furthermore, our results provide evidence for a right-hemisphere dominance in the perception of limb movement.
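
One standard way to formalize the "active in common" analysis described above is a minimum-statistic conjunction of the two illusion-vibration contrasts; whether the authors used exactly this procedure is an assumption. A sketch with nilearn and hypothetical z-map files:

```python
# Minimum-statistic conjunction: a voxel counts as "active in common" only
# if it survives thresholding in BOTH hands' illusion-vibration z-maps.
# The input file names are hypothetical.
from nilearn.image import math_img

zmap_right = "illusion_vs_vibration_righthand_z.nii.gz"
zmap_left = "illusion_vs_vibration_lefthand_z.nii.gz"

conjunction = math_img("np.minimum(z1, z2)", z1=zmap_right, z2=zmap_left)
# Keep voxels whose weaker contrast still exceeds z = 3.1 (roughly p < .001).
common = math_img("img * (img > 3.1)", img=conjunction)
```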


2015, Vol 122(2), pp. 250-261. Author(s): Edward F. Chang, Kunal P. Raygor, Mitchel S. Berger

Classic models of language organization posited that separate motor and sensory language foci existed in the inferior frontal gyrus (Broca's area) and superior temporal gyrus (Wernicke's area), respectively, and that connections between these sites (arcuate fasciculus) allowed for auditory-motor interaction. These theories have predominated for more than a century, but advances in neuroimaging and stimulation mapping have provided a more detailed description of the functional neuroanatomy of language. New insights have shaped modern network-based models of speech processing composed of parallel and interconnected streams involving both cortical and subcortical areas. Recent models emphasize processing in “dorsal” and “ventral” pathways, mediating phonological and semantic processing, respectively. Phonological processing occurs along a dorsal pathway, from the posterosuperior temporal to the inferior frontal cortices. On the other hand, semantic information is carried in a ventral pathway that runs from the temporal pole to the basal occipitotemporal cortex, with anterior connections. Functional MRI has poor positive predictive value in determining critical language sites and should only be used as an adjunct for preoperative planning. Cortical and subcortical mapping should be used to define functional resection boundaries in eloquent areas and remains the clinical gold standard. In tracing the historical advancements in our understanding of speech processing, the authors hope to not only provide practicing neurosurgeons with additional information that will aid in surgical planning and prevent postoperative morbidity, but also underscore the fact that neurosurgeons are in a unique position to further advance our understanding of the anatomy and functional organization of language.


2008, Vol 20(3), pp. 541-552. Author(s): Eveline Geiser, Tino Zaehle, Lutz Jäncke, Martin Meyer

The present study investigates the neural correlates of rhythm processing in speech perception. German pseudosentences spoken with an exaggerated (isochronous) or a conversational (nonisochronous) rhythm were compared in an auditory functional magnetic resonance imaging experiment. The subjects had to perform either a rhythm task (explicit rhythm processing) or a prosody task (implicit rhythm processing). The study revealed bilateral activation in the supplementary motor area (SMA), extending into the cingulate gyrus, and in the insulae, extending into the right basal ganglia (neostriatum), as well as activity in the right inferior frontal gyrus (IFG), related to performance of the rhythm task. A direct contrast between isochronous and nonisochronous sentences revealed differences in the lateralization of activation for isochronous processing as a function of the explicit and implicit tasks. Explicit processing revealed activation in the right posterior superior temporal gyrus (pSTG), the right supramarginal gyrus, and the right parietal operculum; implicit processing showed activation in the left supramarginal gyrus, the left pSTG, and the left parietal operculum. The present results indicate a function of the SMA and the insula beyond motor timing and point to a role for these brain areas in the perception of acoustic temporal intervals. Second, the data indicate a specific task-related function of the right IFG in the processing of accent patterns. Finally, the data support the assumption that the right secondary auditory cortex is involved in the explicit perception of auditory suprasegmental cues and, moreover, that its activity can be modulated by top-down processing mechanisms.


2013, Vol 25(12), pp. 2179-2188. Author(s): Katya Krieger-Redwood, M. Gareth Gaskell, Shane Lindsay, Elizabeth Jefferies

Several accounts of speech perception propose that the areas involved in producing language are also involved in perceiving it. In line with this view, neuroimaging studies show activation of premotor cortex (PMC) during phoneme judgment tasks; however, there is debate about whether speech perception necessarily involves motor processes across all task contexts, or whether the contribution of PMC is restricted to tasks requiring explicit phoneme awareness. Some aspects of speech processing, such as mapping sounds onto meaning, may proceed without the involvement of motor speech areas if PMC specifically contributes to the manipulation and categorical perception of phonemes. We applied TMS to three sites (PMC, posterior superior temporal gyrus [pSTG], and occipital pole) and, for the first time in the TMS literature, directly contrasted two speech perception tasks that required explicit phoneme decisions and mapping of speech sounds onto semantic categories, respectively. TMS to PMC disrupted explicit phonological judgments but not access to meaning for the same speech stimuli. TMS to the two further sites confirmed that this pattern was site-specific and did not reflect a generic difference in the susceptibility of our experimental tasks to TMS: stimulation of pSTG, a site involved in auditory processing, disrupted performance in both language tasks, whereas stimulation of the occipital pole had no effect on performance in either task. These findings demonstrate that, although PMC is important for explicit phonological judgments, it is, crucially, not necessary for mapping speech onto meaning.
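
The logic of the design is a site-by-task interaction: PMC stimulation should impair phoneme judgments but leave semantic judgments intact. A minimal sketch of that test, assuming a long-format table of reaction times and a mixed model with subject as a random effect (the file and column names are hypothetical):

```python
# Sketch of the site-by-task interaction test implied by the design.
# "tms_rts.csv" and its columns (subject, site, task, rt) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tms_rts.csv")  # site in {PMC, pSTG, occipital}; task in {phoneme, semantic}
model = smf.mixedlm("rt ~ C(site) * C(task)", df, groups=df["subject"]).fit()
print(model.summary())  # the site:task interaction terms carry the key result
```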


2020, Vol 32(5), pp. 877-888. Author(s): Maxime Niesen, Marc Vander Ghinst, Mathieu Bourguignon, Vincent Wens, Julie Bertels, et al.

Discrimination of words from nonspeech sounds is an essential step in communication, yet how selective attention influences this early stage of speech processing remains elusive. To answer this question, brain activity was recorded with magnetoencephalography in 12 healthy adults while they listened to two sequences of auditory stimuli presented at 2.17 Hz, each consisting of successions of one randomized word (tagging frequency = 0.54 Hz) and three acoustically matched nonverbal stimuli. Participants were instructed to focus their attention on the occurrence of a predefined word in the verbal attention condition and on a nonverbal stimulus in the nonverbal attention condition. Steady-state neuromagnetic responses were identified with spectral analysis at the sensor and source levels. Significant sensor responses peaked at 0.54 and 2.17 Hz in both conditions. Sources at 0.54 Hz were reconstructed in the supratemporal auditory cortex, left superior temporal gyrus (STG), left middle temporal gyrus, and left inferior frontal gyrus; sources at 2.17 Hz were reconstructed in the supratemporal auditory cortex and STG. Crucially, source strength in the left STG at 0.54 Hz was significantly higher in the verbal attention condition than in the nonverbal attention condition. This study demonstrates speech-sensitive responses in primary auditory and speech-related neocortical areas. Critically, it highlights that, during word discrimination, top-down attention modulates activity within the left STG, an area that therefore appears to play a crucial role in selective verbal attention during this early step of speech processing.
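
Frequency tagging makes the analysis concrete: because words recur at 0.54 Hz and stimuli at 2.17 Hz, the response of interest reduces to the spectral amplitude at those bins relative to their neighbours. A toy sketch with a simulated signal (the sampling rate and SNR definition are assumptions, not the study's parameters):

```python
# Toy frequency-tagging analysis on a simulated signal: the tagged
# responses show up as peaks at 0.54 Hz (words) and 2.17 Hz (all stimuli).
import numpy as np

fs = 600.0                      # sampling rate (Hz), hypothetical
t = np.arange(0, 300, 1 / fs)   # 5 minutes of signal
sig = (np.sin(2 * np.pi * 2.17 * t) + 0.5 * np.sin(2 * np.pi * 0.54 * t)
       + np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for f_tag in (0.54, 2.17):
    idx = np.argmin(np.abs(freqs - f_tag))
    # SNR: amplitude at the tagged bin relative to surrounding bins.
    neighbours = np.r_[spectrum[idx - 11:idx - 1], spectrum[idx + 2:idx + 12]]
    print(f"{f_tag} Hz: SNR = {spectrum[idx] / neighbours.mean():.2f}")
```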


2021. Author(s): Michiko Kawai, Yuichi Abe, Masato Yumoto, Masaya Kubota

Landau–Kleffner syndrome (LKS) is a rare neurological disorder characterized by acquired aphasia. LKS presents with distinctive electroencephalography (EEG) findings, including diffuse continuous spike-and-wave complexes (CSW), particularly during sleep. There has been little research on the mechanisms of the aphasia, its origin within the brain, and how it recovers. We diagnosed LKS in a 4-year-old girl with an epileptogenic zone located primarily in the right (nondominant) superior temporal gyrus (STG). In the course of her illness, she showed early signs of recovery from motor aphasia but was slow to regain language comprehension and to recover from hearing loss. We suggest that the findings from our patient's brain imaging, and the disparity between her recovery from expressive and receptive aphasia, are consistent with the dual-stream model of speech processing, in which the nondominant hemisphere also plays a significant role in language comprehension. Unlike in adult aphasia, right-hemisphere disorders in early childhood have been reported to cause delays in language comprehension and gesture. During language acquisition, a child must work out what words mean by integrating visual, auditory, and contextual information, and the right hemisphere is thought to play the dominant role in this integration.


2019. Author(s): Matthew Heard, Yune S. Lee

A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that the two may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimation (ALE) to localize the shared neural structures engaged by a representative set of musical rhythm operations (rhythm, beat, and meter) and linguistic syntax operations (merge, movement, and reanalysis). Rhythm engaged a bilateral sensorimotor network consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, intraparietal lobule, and putamen. By contrast, syntax mostly recruited a left sensorimotor network including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersecting the rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula, neural substrates involved in temporal hierarchy processing and predictive coding. Together, this is the first neuroimaging meta-analysis to provide a detailed anatomical account of the sensorimotor regions recruited in common by musical rhythm and linguistic syntax.
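
For readers unfamiliar with coordinate-based meta-analysis, the sketch below shows what an ALE analysis looks like in NiMARE; the choice of tool is an assumption (the abstract does not name its software), and the Sleuth-format foci files are hypothetical.

```python
# Sketch of an ALE meta-analysis with NiMARE. Input files are hypothetical
# Sleuth-format text files listing the reported foci for each domain.
from nimare.io import convert_sleuth_to_dataset
from nimare.meta.cbma import ALE

rhythm_dset = convert_sleuth_to_dataset("rhythm_foci.txt")  # hypothetical file
ale = ALE()
rhythm_result = ale.fit(rhythm_dset)
z_rhythm = rhythm_result.get_map("z")   # unthresholded ALE z-map
# Repeating this for the syntax foci and intersecting the two thresholded
# maps yields the rhythm-syntax overlap regions described above.
```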


2013, Vol 25(7), pp. 1062-1077. Author(s): Carol A. Seger, Brian J. Spiering, Anastasia G. Sares, Sarah I. Quraini, Catherine Alpeter, et al.

This study investigates the functional neuroanatomy of harmonic music perception with fMRI. We presented short pieces of Western classical music to nonmusicians. The ending of each piece was systematically manipulated in four ways: a Standard Cadence (expected resolution), a Deceptive Cadence (moderate deviation from expectation), a Modulated Cadence (strong deviation from expectation, but remaining within the harmonic structure of Western tonal music), and an Atonal Cadence (strongest deviation from expectation, leaving the harmonic structure of Western tonal music). Music compared with baseline broadly recruited regions of the bilateral superior temporal gyrus (STG) and the right inferior frontal gyrus (IFG). Parametric regressors scaled to the degree of deviation from harmonic expectancy identified regions sensitive to expectancy violation. Areas within the basal ganglia (BG) were significantly modulated by expectancy violation, indicating a previously unappreciated role in harmonic processing. Expectancy violation also recruited bilateral cortical regions in the IFG and anterior STG, previously associated with syntactic processing in other domains; the posterior STG was not significantly modulated by expectancy. Granger causality mapping revealed functional connectivity among the IFG, anterior STG, posterior STG, and BG during music perception. Our results imply that the IFG, anterior STG, and BG are recruited for higher-order harmonic processing, whereas the posterior STG is recruited for basic pitch and melodic processing.
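
The parametric-regressor approach can be sketched directly: each cadence event carries a weight encoding its degree of expectancy violation, which nilearn's optional modulation column uses to scale the regressor. The 0-3 coding, onsets, and file name below are assumptions, not the study's actual parameters.

```python
# Sketch of a parametric expectancy-violation regressor in nilearn.
# The 0-3 violation coding, onsets, and file name are hypothetical.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

violation = {"standard": 0, "deceptive": 1, "modulated": 2, "atonal": 3}
cadences = ["standard", "deceptive", "modulated", "atonal"]
events = pd.DataFrame({
    "onset":      [10, 40, 70, 100],   # seconds
    "duration":   [4] * 4,
    "trial_type": ["cadence"] * 4,
    # nilearn scales each event's regressor by this column.
    "modulation": [violation[c] for c in cadences],
})

model = FirstLevelModel(t_r=2.0).fit("sub-01_bold.nii.gz", events=events)
zmap = model.compute_contrast("cadence")  # voxels tracking degree of violation
```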


2001, Vol 13(7), pp. 994-1005. Author(s): Athena Vouloumanos, Kent A. Kiehl, Janet F. Werker, Peter F. Liddle

The detection of speech in an auditory stream is a requisite first step in processing spoken language. In this study, we used event-related fMRI to investigate the neural substrates mediating the detection of speech compared with nonspeech auditory stimuli. Unlike previous studies addressing this issue, we contrasted speech with nonspeech analogues that were matched along key temporal and spectral dimensions. In an oddball detection task, listeners heard nonsense speech sounds, matched sine-wave analogues (complex nonspeech), or single tones (simple nonspeech). Speech stimuli elicited significantly greater activation than both complex and simple nonspeech stimuli in classic receptive language areas, namely the middle temporal gyri bilaterally and a locus lateralized to the left posterior superior temporal gyrus. In addition, speech activated a small cluster in the right inferior frontal gyrus. The activation of these areas in a simple detection task, which requires neither identification nor linguistic analysis, suggests that they play a fundamental role in speech processing.


2007, Vol 10(2), pp. 189-199. Author(s): David W. Green, Jenny Crinion, Cathy J. Price

Given that there are neural markers for the acquisition of non-verbal skills, we review evidence of neural markers for the acquisition of vocabulary. Acquiring vocabulary is critical to learning one's native language and to learning other languages, and it requires the ability to link an object concept (meaning) to sound. Is there a region sensitive to vocabulary knowledge? For monolingual English speakers, increased vocabulary knowledge correlates with increased grey matter density in a region of the parietal cortex that is well located to mediate an association between meaning and sound: the posterior supramarginal gyrus. Further, this region also shows sensitivity to acquiring a second language: relative to monolingual English speakers, Italian-English bilinguals show increased grey matter density in the same region. Differences as well as commonalities might exist in the neural markers for vocabulary where lexical distinctions are also signalled by tone. Relative to monolingual English speakers, Chinese multilingual speakers, like European multilinguals, show increased grey matter density in the parietal region observed previously. However, irrespective of ethnicity, Chinese speakers (both Asian and European) also show highly significant increases in grey matter density in two right-hemisphere regions (the superior temporal gyrus and the inferior frontal gyrus) and in two left-hemisphere regions (the middle temporal and superior temporal gyri). Such increases may reflect additional resources required to process tonal distinctions for lexical purposes, or to store tonal differences in order to distinguish lexical items. We conclude with a discussion of future lines of enquiry.
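
The grey-matter-density findings rest on voxel-based-morphometry regression: vocabulary knowledge is entered as a covariate in a voxelwise model over subjects' grey matter maps. A minimal sketch with nilearn; the subject maps and scores are simulated placeholders, not the reviewed data.

```python
# Sketch of a VBM-style regression of grey matter density on vocabulary
# knowledge across subjects. All inputs are hypothetical placeholders.
import numpy as np
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

gm_maps = [f"sub-{i:02d}_graymatter.nii.gz" for i in range(1, 21)]
design = pd.DataFrame({
    "vocabulary": np.random.uniform(20, 80, size=20),  # stand-in scores
    "intercept":  np.ones(20),
})

model = SecondLevelModel(smoothing_fwhm=8).fit(gm_maps, design_matrix=design)
# Positive effect of vocabulary: where denser grey matter tracks larger
# vocabularies (the posterior supramarginal gyrus in the work reviewed above).
zmap = model.compute_contrast("vocabulary", output_type="z_score")
```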

