Mu rhythm dynamics suggest automatic activation of motor and premotor brain regions during speech processing

2021 ◽  
Vol 60 ◽  
pp. 101006
Author(s):  
Daniela Santos Oliveira ◽  
Tim Saltuklaroglu ◽  
David Thornton ◽  
David Jenson ◽  
Ashley W. Harkrider ◽  
...  
2021 ◽  
Author(s):  
Galit Agmon ◽  
Paz Har-Shai Yahav ◽  
Michal Ben-Shachar ◽  
Elana Zion Golumbic

Abstract Daily life is full of situations where many people converse at the same time. Under these noisy circumstances, individuals can employ different listening strategies to deal with the abundance of sounds around them. In this fMRI study we investigated how applying two different listening strategies – Selective vs. Distributed attention – affects the pattern of neural activity. Specifically, in a simulated ‘cocktail party’ paradigm, we compared brain activation patterns when listeners attend selectively to only one speaker and ignore all others, versus when they distribute their attention and attempt to follow two or four speakers at the same time. Results indicate that the two attention types activate a highly overlapping, bilateral fronto-temporal-parietal network of functionally connected regions. This network includes auditory association cortex (bilateral STG/STS) and higher-level regions related to speech processing and attention (bilateral IFG/insula, right MFG, left IPS). Within this network, responses in specific areas were modulated by the type of attention required. Specifically, auditory and speech-processing regions exhibited higher activity during Distributed attention, whereas fronto-parietal regions were activated more strongly during Selective attention. This pattern suggests that a common perceptual-attentional network is engaged when dealing with competing speech inputs, regardless of the specific task at hand. At the same time, local activity within nodes of this network varies when implementing different listening strategies, reflecting the different cognitive demands they impose. These results demonstrate the system’s flexibility to adapt its internal computations to accommodate different task requirements and listener goals.

Significance Statement: Hearing many people talk simultaneously poses substantial challenges for the human perceptual and cognitive systems. We compared neural activity when listeners applied two different listening strategies to deal with these competing inputs: attending selectively to one speaker vs. distributing attention among all speakers. A network of functionally connected brain regions, involved in auditory processing, language processing and attentional control, was activated under both attention types. However, activity within this network was modulated by the type of attention required and the number of competing speakers. These results suggest a common ‘attention to speech’ network, providing the computational infrastructure to deal effectively with multi-speaker input, but with sufficient flexibility to implement different prioritization strategies and to adapt to different listener goals.


2021 ◽  
Author(s):  
Almudena Capilla ◽  
Lydia Arana ◽  
Marta Garcia-Huescar ◽  
Maria Melcon ◽  
Joachim Gross ◽  
...  

Brain oscillations are considered to play a pivotal role in neural communication. However, detailed information regarding the typical oscillatory patterns of individual brain regions is surprisingly scarce. In this study we applied a multivariate data-driven approach to create an atlas of the natural frequencies of the resting human brain on a voxel-by-voxel basis. We analysed resting-state magnetoencephalography (MEG) data from 128 healthy adult volunteers obtained from the Open MEG Archive (OMEGA). Spectral power was computed in source space in 500 ms steps for 82 frequency bins logarithmically spaced from 1.7 to 99.5 Hz. We then applied k-means clustering to detect characteristic spectral profiles and to eventually identify the natural frequency of each voxel. Our results revealed a region-specific organisation of intrinsic oscillatory activity, following both a medial-to-lateral and a posterior-to-anterior gradient of increasing frequency. In particular, medial fronto-temporal regions were characterised by slow rhythms (delta/theta). Posterior regions presented natural frequencies in the alpha band, although with differentiated generators in the precuneus and in sensory-specific cortices (i.e., visual and auditory). Somatomotor regions were distinguished by the mu rhythm, while the lateral prefrontal cortex was characterised by oscillations in the high beta range (>20 Hz). Importantly, the brain map of natural frequencies was highly replicable in two independent subsamples of individuals. To the best of our knowledge, this is the most comprehensive atlas of ongoing oscillatory activity performed to date. Furthermore, the identification of natural frequencies is a fundamental step towards a better understanding of the functional architecture of the human brain.
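The clustering step described above can be sketched in a few lines. The following is a minimal illustration with synthetic data, not the authors' pipeline: the voxel count, the spectrum normalisation, the number of clusters, and the peak-picking rule for the natural frequency are all assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for source-space MEG spectral power:
# n_voxels voxels x 82 frequency bins, log-spaced from 1.7 to 99.5 Hz
# (bin layout taken from the abstract; the data here are random).
rng = np.random.default_rng(0)
n_voxels, n_bins = 1000, 82
freqs = np.logspace(np.log10(1.7), np.log10(99.5), n_bins)
power = rng.random((n_voxels, n_bins))

# Normalise each voxel's spectrum so clustering reflects spectral shape
# rather than overall power (one plausible choice, not necessarily theirs).
profiles = power / power.sum(axis=1, keepdims=True)

# k-means groups voxels with similar spectral profiles.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)

# Assign each voxel a "natural frequency": the peak of its cluster's
# mean spectral profile.
natural_freq = np.array([
    freqs[np.argmax(km.cluster_centers_[label])]
    for label in km.labels_
])
assert natural_freq.shape == (n_voxels,)
```

With real data, the resulting per-voxel frequencies could then be projected back onto the source grid to build the kind of voxel-wise natural-frequency atlas the study reports.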


2021 ◽  
Author(s):  
Anna Uta Rysop ◽  
Lea-Maria Schmitt ◽  
Jonas Obleser ◽  
Gesa Hartwigsen

Abstract Speech comprehension is often challenged by increased background noise, but can be facilitated via the semantic context of a sentence. This predictability gain relies on an interplay of language-specific semantic and domain-general brain regions. However, age-related differences in the interactions within and between semantic and domain-general networks remain poorly understood. Here we investigated commonalities and differences in degraded speech processing in healthy young and older participants. Participants performed a sentence repetition task while listening to sentences with high- and low-predictable endings and varying intelligibility. Stimulus intelligibility was adjusted to individual hearing abilities. Older adults showed an undiminished behavioural predictability gain. Likewise, both groups recruited a similar set of semantic and cingulo-opercular brain regions. However, we observed age-related differences in effective connectivity for highly predictable speech of increasing intelligibility. Young adults exhibited stronger coupling within the cingulo-opercular network and between a cingulo-opercular and a posterior temporal semantic node. Moreover, these interactions were excitatory in young adults but inhibitory in older adults. Finally, the degree of the inhibitory influence between cingulo-opercular regions was predictive of the behavioural sensitivity towards changes in intelligibility for highly predictable sentences in older adults only. Our results demonstrate that the predictability gain is relatively preserved in older adults when stimulus intelligibility is individually adjusted. While young and older participants recruit similar brain regions, differences manifest in network dynamics. Together, these results suggest that ageing affects the network configuration rather than regional activity during successful speech comprehension under challenging listening conditions.


2016 ◽  
Vol 113 (52) ◽  
pp. 15108-15113 ◽  
Author(s):  
Julius Fridriksson ◽  
Grigori Yourganov ◽  
Leonardo Bonilha ◽  
Alexandra Basilakos ◽  
Dirk-Bart Den Ouden ◽  
...  

Several dual route models of human speech processing have been proposed, suggesting a large-scale anatomical division between cortical regions that support motor–phonological aspects vs. lexical–semantic aspects of speech processing. However, to date, there is no complete agreement on which areas subserve each route or on the nature of the interactions across routes that enable human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we took a data-driven approach, using principal components analysis of lesion-symptom mapping to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal–frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization of the division in the data reflects both the overlap between tasks supported by the two streams and the observation that phonological production tasks are preferentially supported by the dorsal stream and lexical–semantic comprehension tasks by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish the two routes.
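The two analysis steps named above (components from behaviour, then lesion-symptom mapping against component scores) can be sketched as follows. Everything here is a synthetic illustration: the patient, task, and voxel counts are invented, and a simple Pearson correlation against binary lesion status stands in for the mass-univariate statistics such studies actually use.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: behavioural scores (patients x tasks) and binary
# lesion maps (patients x voxels). Random stand-ins for illustration.
rng = np.random.default_rng(1)
n_patients, n_tasks, n_vox = 120, 6, 500
scores = rng.normal(size=(n_patients, n_tasks))
lesions = rng.random((n_patients, n_vox)) < 0.2  # True = voxel lesioned

# Step 1: PCA groups tasks into components without any a priori
# separation into task types.
pca = PCA(n_components=2).fit(scores)
comp_scores = pca.transform(scores)  # each patient's score on each component

# Step 2: lesion-symptom mapping - correlate lesion status at each voxel
# with each component score (point-biserial correlation, computed here as
# a Pearson correlation with the binary lesion variable coded 0/1).
r_map = np.array([
    [np.corrcoef(lesions[:, v].astype(float), comp_scores[:, c])[0, 1]
     for v in range(n_vox)]
    for c in range(2)
])
assert r_map.shape == (2, n_vox)
```

Voxels whose correlation with one component is strong while the other is weak would, in this toy setup, correspond to the component-specific anatomical territories the study reports for the dorsal and ventral streams.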


2005 ◽  
Vol 17 (6) ◽  
pp. 939-953 ◽  
Author(s):  
Deborah A. Hall ◽  
Clayton Fussell ◽  
A. Quentin Summerfield

Listeners are able to extract important linguistic information by viewing the talker's face—a process known as “speechreading.” Previous studies of speechreading present small closed sets of simple words and their results indicate that visual speech processing engages a wide network of brain regions in the temporal, frontal, and parietal lobes that are likely to underlie multiple stages of the receptive language system. The present study further explored this network in a large group of subjects by presenting naturally spoken sentences which tap the richer complexities of visual speech processing. Four different baselines (blank screen, static face, nonlinguistic facial gurning, and auditory speech) enabled us to determine the hierarchy of neural processing involved in speechreading and to test the claim that visual input reliably accesses sound-based representations in the auditory cortex. In contrast to passively viewing a blank screen, the static-face condition evoked activation bilaterally across the border of the fusiform gyrus and cerebellum, and in the medial superior frontal gyrus and left precentral gyrus (p < .05, whole brain corrected). With the static face as baseline, the gurning face evoked bilateral activation in the motion-sensitive region of the occipital cortex, whereas visual speech additionally engaged the middle temporal gyrus, inferior and middle frontal gyri, and the inferior parietal lobe, particularly in the left hemisphere. These latter regions are implicated in lexical stages of spoken language processing. Although auditory speech generated extensive bilateral activation across both superior and middle temporal gyri, the group-averaged pattern of speechreading activation failed to include any auditory regions along the superior temporal gyrus, suggesting that fluent visual speech does not always involve sound-based coding of the visual input. 
An important finding from the individual subject analyses was that activation in the superior temporal gyrus did reach significance (p < .001, small-volume corrected) for a subset of the group. Moreover, the extent of the left-sided superior temporal gyrus activity was strongly correlated with speechreading performance. Skilled speechreading was also associated with activations and deactivations in other brain regions, suggesting that individual differences reflect the efficiency of a circuit linking sensory, perceptual, memory, cognitive, and linguistic processes rather than the operation of a single component process.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
David Goyer ◽  
Marina A Silveira ◽  
Alexander P George ◽  
Nichole L Beebe ◽  
Ryan M Edelbrock ◽  
...  

Located in the midbrain, the inferior colliculus (IC) is the hub of the central auditory system. Although the IC plays important roles in speech processing, sound localization, and other auditory computations, the organization of the IC microcircuitry remains largely unknown. Using a multifaceted approach in mice, we have identified vasoactive intestinal peptide (VIP) neurons as a novel class of IC principal neurons. VIP neurons are glutamatergic stellate cells with sustained firing patterns. Their extensive axons project to long-range targets including the auditory thalamus, auditory brainstem, superior colliculus, and periaqueductal gray. Using optogenetic circuit mapping, we found that VIP neurons integrate input from the contralateral IC and the dorsal cochlear nucleus. The dorsal cochlear nucleus also drove feedforward inhibition to VIP neurons, indicating that inhibitory circuits within the IC shape the temporal integration of ascending inputs. Thus, VIP neurons are well-positioned to influence auditory computations in a number of brain regions.


Author(s):  
Kristiina Kompus ◽  
Kenneth Hugdahl

Auditory verbal hallucinations (AVH) consist of hearing voices that are not physically present. This perceptual phenomenon is of increasing interest for many research fields. In psychiatric patients, negative and distressing AVH reduce the quality of life, and understanding the mechanisms causing AVH is relevant for advancing clinical interventions. The cognitive and neural mechanisms of AVH are also of interest for research into neural underpinnings of speech processing. This chapter reviews theories of AVH and gives an overview of the main findings from behavioural and neuroimaging studies on individuals with AVH. Individuals who experience AVH show structural changes to auditory perception regions in the superior temporal lobes, as well as altered structural and functional connectivity patterns between auditory perception areas and other brain regions. The active ‘state’ of AVH is associated with increased neural activity in the primary and secondary auditory areas. Taken together, AVH are a perceptual phenomenon that represents a dysfunctional bottom-up activity in the temporal lobe auditory perception region, and altered connectivity with the areas responsible for implementing top-down control in the frontal cortex.


2015 ◽  
Vol 58 (5) ◽  
pp. 1452-1463 ◽  
Author(s):  
Kelene Fercho ◽  
Lee A. Baugh ◽  
Elizabeth K. Hanson

Purpose The purpose of this article was to examine the neural mechanisms associated with increases in speech intelligibility brought about through alphabet supplementation. Method Neurotypical participants listened to dysarthric speech while watching an accompanying video of a hand pointing to the first letter of each word as it was spoken, on either an alphabet display (treatment condition) or a scrambled display (control condition). Their hemodynamic response was measured with functional magnetic resonance imaging, using a sparse sampling event-related paradigm. Speech intelligibility was assessed via a forced-choice auditory identification task throughout the scanning session. Results Alphabet supplementation was associated with significant increases in speech intelligibility. Further, alphabet supplementation increased activation in brain regions known to be involved in both auditory speech and visual letter perception above that seen with the scrambled display. Significant increases in functional activity were observed within the posterior-to-mid superior temporal sulcus/superior temporal gyrus during alphabet supplementation, regions known to be involved in speech processing and audiovisual integration. Conclusion Alphabet supplementation is an effective tool for increasing the intelligibility of degraded speech and is associated with changes in activity within audiovisual integration sites. Changes in activity within the superior temporal sulcus/superior temporal gyrus may be related to the behavioral increases in intelligibility brought about by this augmented communication method.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Maïté Castro ◽  
Fanny L’héritier ◽  
Jane Plailly ◽  
Anne-Lise Saive ◽  
Alexandra Corneyllie ◽  
...  

Abstract Despite the obvious personal relevance of some musical pieces, the cerebral mechanisms associated with listening to personally familiar music and its effects on subsequent brain functioning have not been specifically evaluated yet. We measured cerebral correlates with functional magnetic resonance imaging (fMRI) while composers listened to three types of musical excerpts varying in personal familiarity and self (familiar own/composition, familiar other/favorite or unfamiliar other/unknown music) followed by sequences of names of individuals also varying in personal familiarity and self (familiar own/own name, familiar other/close friend and unfamiliar other/unknown name). Listening to music with autobiographical contents (familiar own and/or other) recruited a fronto-parietal network including mainly the dorsolateral prefrontal cortex, the supramarginal/angular gyri and the precuneus. Additionally, while listening to familiar other music (favorite) was associated with the activation of reward and emotion networks (e.g. the striatum), familiar own music (compositions) engaged brain regions underpinning self-reference (e.g. the medial prefrontal cortex) and visuo-motor imagery. The present findings further suggested that familiar music with self-related reference (compositions) leads to an enhanced activation of the autobiographical network during subsequent familiar name processing (as compared to music without self-related reference); among these structures, the precuneus seems to play a central role in the processing of personally familiar information.

