Auditory Neuroscience: Sounding Out the Brain Basis of Speech Perception

2019 ◽ Vol 29 (12) ◽ pp. R582-R584 ◽ Author(s): Ediz Sohoglu
2019 ◽ pp. 105-112 ◽ Author(s): Risto Näätänen ◽ Teija Kujala ◽ Gregory Light

This chapter shows that the MMN and its magnetoencephalographic (MEG) equivalent, the MMNm, are sensitive indices of aging-related perceptual and cognitive decline. Importantly, these age-related neural changes are associated with a decrease in general brain plasticity, i.e. in the brain's ability to form and maintain sensory-memory traces, a necessary basis for veridical perception and appropriate cognitive brain function. The MMN/MMNm to changes in stimulus duration is particularly affected by aging, suggesting that temporal processing is especially vulnerable to brain aging and accounting, for instance, for a large part of the speech-perception difficulties of older adults beyond age-related peripheral hearing loss.


2020 ◽ Vol 6 (30) ◽ pp. eaba7830 ◽ Author(s): Laurianne Cabrera ◽ Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
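
The three listening conditions turn on filtering the temporal modulations of speech, much as a vocoder does when it discards frequency modulation and, in the most degraded condition, keeps only amplitude modulation below 8 Hz. The Python sketch below shows one way such a slow-AM envelope could be extracted and re-imposed on a noise carrier; it is a single-band illustration under assumed parameters (filter order, carrier choice, the function name slow_am_envelope), not the vocoder actually used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def slow_am_envelope(waveform, fs, cutoff_hz=8.0, order=4):
    """Extract the slow amplitude-modulation (AM) envelope of a signal.

    The Hilbert transform gives the analytic signal, whose magnitude is the
    broadband envelope; low-pass filtering that envelope below `cutoff_hz`
    keeps only the slowest AM cues (<8 Hz in the most degraded condition).
    """
    envelope = np.abs(hilbert(waveform))                        # broadband AM envelope
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, envelope)                           # zero-phase low-pass

# Illustrative use: re-impose the slow envelope on a noise carrier,
# a crude single-band analogue of removing FM and fast AM cues.
fs = 16000
t = np.arange(fs) / fs
syllable = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
slow_env = slow_am_envelope(syllable, fs)
degraded = slow_env * np.random.randn(len(syllable))            # slow AM only
```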


2019 ◽ Vol 10 (1) ◽ Author(s): Emily B. J. Coffey ◽ Trent Nicol ◽ Travis White-Schwoch ◽ Bharath Chandrasekaran ◽ Jennifer Krizman ◽ ...

The auditory frequency-following response (FFR) is a non-invasive index of the fidelity of sound encoding in the brain, and is used to study the integrity, plasticity, and behavioral relevance of the neural encoding of sound. In this Perspective, we review recent evidence suggesting that, in humans, the FFR arises from multiple cortical and subcortical sources, not just subcortically as previously believed, and we illustrate how the FFR to complex sounds can enhance the wider field of auditory neuroscience. Far from being of use only to study basic auditory processes, the FFR is an uncommonly multifaceted response yielding a wealth of information, with much yet to be tapped.
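
As a concrete handle on what "fidelity of sound encoding" means in practice, the sketch below shows one common way an FFR is quantified: averaging many EEG epochs time-locked to the same sound and reading out spectral energy at the stimulus fundamental. It is a generic illustration with simulated data and assumed parameters (sampling rate, trial count, the function name ffr_spectrum), not the authors' analysis pipeline.

```python
import numpy as np

def ffr_spectrum(epochs, fs):
    """Estimate an FFR spectrum from repeated presentations of the same sound.

    `epochs` is an (n_trials, n_samples) array of EEG segments time-locked to
    stimulus onset. Averaging across trials retains phase-locked activity
    (the FFR) and suppresses non-phase-locked noise; the FFT of the average
    then shows energy at the stimulus F0 and its harmonics.
    """
    avg = epochs.mean(axis=0)
    freqs = np.fft.rfftfreq(avg.size, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(avg)) / avg.size
    return freqs, amp

# Illustrative use with simulated data: 2000 sweeps of a weak 100 Hz response in noise.
fs, dur, f0 = 8000, 0.2, 100.0
t = np.arange(int(fs * dur)) / fs
epochs = 0.1 * np.sin(2 * np.pi * f0 * t) + np.random.randn(2000, t.size)
freqs, amp = ffr_spectrum(epochs, fs)
print(f"peak near {freqs[np.argmax(amp[1:]) + 1]:.1f} Hz")   # should land near 100 Hz
```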


2014 ◽ Vol 369 (1651) ◽ pp. 20130297 ◽ Author(s): Jeremy I. Skipper

What do we hear when someone speaks, and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active, and sometimes deactivated, when participants listen to meaningful speech compared with less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism in which speech production (SP) regions are neurally re-used to predict auditory objects associated with the available context. On this model, more AC activity for less meaningful sounds occurs because predictions from context are less successful, requiring that further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words, compared with the same words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than from our ears, and that the function of AC is to confirm or deny internal predictions about the identity of sounds.
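
The hypothesis-and-test account can be made concrete with a toy prediction-error readout: when context supports a good prediction of the incoming sound, little is left for AC to process; when it does not, the unexplained residual, and hence the "activity", is larger. The snippet below only illustrates that logic on invented feature vectors; it is not the model tested in the paper.

```python
import numpy as np

def residual_activity(sound_features, predicted_features):
    """Toy prediction-error readout: 'activity' is whatever the prediction
    fails to explain, so better context-driven predictions leave less to process."""
    return np.linalg.norm(sound_features - predicted_features)

rng = np.random.default_rng(0)
sound = rng.normal(size=50)                                   # features of the heard word
good_prediction = sound + rng.normal(scale=0.2, size=50)      # rich context (e.g. a gesture)
poor_prediction = rng.normal(size=50)                         # little usable context
print(residual_activity(sound, good_prediction))              # small residual -> less AC activity
print(residual_activity(sound, poor_prediction))              # large residual -> more AC activity
```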


2000 ◽ Vol 23 (3) ◽ pp. 332-333 ◽ Author(s): Stephen Grossberg

The brain contains ubiquitous reciprocal bottom-up and top-down intercortical and thalamocortical pathways. These resonating feedback pathways may be essential for stable learning of speech and language codes and for context-sensitive selection and completion of noisy speech sounds and word groupings. Context-sensitive speech data, notably interword backward effects in time, have been quantitatively modeled using these concepts but not with purely feedforward models.


2020 ◽ Vol 28 (8) ◽ pp. 1273 ◽ Author(s): Yu CHEN ◽ Licheng MO ◽ Rong BI ◽ Dandan ZHANG

2021 ◽ pp. 1-62 ◽ Author(s): Orsolya B Kolozsvári ◽ Weiyong Xu ◽ Georgia Gerike ◽ Tiina Parviainen ◽ Lea Nieminen ◽ ...

Speech perception is dynamic and shows changes across development. In parallel, functional differences in brain development over time have been well documented, and these differences may interact with changes in speech perception during infancy and childhood. Further, there is evidence that the two hemispheres contribute unequally to speech segmentation at the sentence and phonemic levels. To disentangle those contributions, we studied the cortical tracking of variously sized units of speech that are crucial for spoken language processing in children (4.7-9.3 years old, N=34) and adults (N=19). We measured participants' magnetoencephalography (MEG) responses to syllables, words and sentences, calculated the coherence between the speech signal and the MEG responses at the level of words and sentences, and further examined auditory evoked responses to syllables. Age-related differences were found for coherence values in the delta and theta frequency bands. Both frequency bands showed an effect of stimulus type, although this was attributed to the length of the stimulus rather than to linguistic unit size. There was no difference between hemispheres at the source level, either in coherence values for word or sentence processing or in evoked responses to syllables. The results highlight the importance of the lower frequencies for speech tracking in the brain across different lexical units. Further, stimulus length affects speech-brain associations, suggesting that methodological approaches should be selected carefully when studying speech envelope processing at the neural level. Speech tracking in the brain seems decoupled from the more general maturation of the auditory cortex.
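
Speech-brain coherence of the kind reported here is, in essence, a frequency-resolved correlation between the speech envelope and the neural signal, averaged within bands such as delta and theta. A minimal sketch follows, using scipy's magnitude-squared coherence on simulated signals; the band edges (0.5-4 Hz and 4-8 Hz), window length, and function name band_coherence are assumptions for illustration, not the study's exact analysis, which was carried out on source-localized MEG.

```python
import numpy as np
from scipy.signal import coherence, hilbert

def band_coherence(speech, neural, fs, band):
    """Magnitude-squared coherence between a speech envelope and a neural signal,
    averaged within a frequency band (e.g. delta or theta)."""
    env = np.abs(hilbert(speech))                                  # speech amplitude envelope
    f, coh = coherence(env, neural, fs=fs, nperseg=int(4 * fs))    # 4 s analysis windows
    lo, hi = band
    return coh[(f >= lo) & (f <= hi)].mean()

# Illustrative use with simulated signals; band edges are assumptions.
fs, n = 200, 200 * 60                                              # 1 minute at 200 Hz
speech = np.random.randn(n)
neural = 0.3 * np.abs(hilbert(speech)) + np.random.randn(n)        # envelope-tracking signal + noise
delta = band_coherence(speech, neural, fs, (0.5, 4.0))
theta = band_coherence(speech, neural, fs, (4.0, 8.0))
print(f"delta coherence {delta:.2f}, theta coherence {theta:.2f}")
```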

