Amplified induced neural oscillatory activity predicts musicians’ benefits in categorical speech perception

Neuroscience ◽  
2017 ◽  
Vol 348 ◽  
pp. 107-113 ◽  
Author(s):  
Gavin M. Bidelman

2020 ◽  
Vol 32 (2) ◽  
pp. 226-240 ◽  
Author(s):  
Benedikt Zoefel ◽  
Isobella Allard ◽  
Megha Anil ◽  
Matthew H. Davis

Several recent studies have used transcranial alternating current stimulation (tACS) to demonstrate a causal role of neural oscillatory activity in speech processing. In particular, it has been shown that the ability to understand speech in a multi-speaker scenario or background noise depends on the timing of speech presentation relative to simultaneously applied tACS. However, it is possible that tACS did not change actual speech perception but rather auditory stream segregation. In this study, we tested whether the phase relation between tACS and the rhythm of degraded words, presented in silence, modulates word report accuracy. We found strong evidence for a tACS-induced modulation of speech perception, but only if the stimulation was applied bilaterally using ring electrodes (not for unilateral left hemisphere stimulation with square electrodes). These results were only obtained when data were analyzed using a statistical approach that was identified as optimal in a previous simulation study. The effect was driven by a phasic disruption of word report scores. Our results suggest a causal role of neural entrainment for speech perception and emphasize the importance of optimizing stimulation protocols and statistical approaches for brain stimulation research.
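
A common way to test for the kind of phasic modulation reported here is to bin word report accuracy by tACS phase and fit a cosine, taking the fitted amplitude as the modulation depth. The sketch below is a generic illustration with hypothetical accuracy values; the study's actual statistical approach, identified as optimal in a previous simulation study, may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: mean word report accuracy in 8 tACS phase bins
phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # bin centres (radians)
accuracy = np.array([0.52, 0.48, 0.44, 0.41, 0.43, 0.47, 0.53, 0.55])

def cosine_model(phase, amplitude, preferred_phase, offset):
    """Word report accuracy modelled as a cosine of tACS phase."""
    return amplitude * np.cos(phase - preferred_phase) + offset

params, _ = curve_fit(cosine_model, phases, accuracy, p0=[0.05, 0.0, 0.5])
amplitude, preferred_phase, offset = params
print(f"modulation depth (peak-to-trough): {2 * abs(amplitude):.3f}")
print(f"preferred tACS phase: {preferred_phase:.2f} rad")
# Significance is typically assessed by permuting phase labels and
# re-fitting, building a null distribution of fitted amplitudes.
```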


2020 ◽  
Author(s):  
Emmanuel Biau ◽  
Benjamin G. Schultz ◽  
Thomas C. Gunter ◽  
Sonja A. Kotz

During multimodal speech perception, slow delta oscillations (~1-3 Hz) in the listener's brain synchronize with the speech signal, likely reflecting signal decomposition in the service of comprehension. In particular, fluctuations imposed onto the speech amplitude envelope by a speaker's prosody seem to temporally align with articulatory and body gestures, thus providing two complementary cues to the speech signal's temporal structure. Further, endogenous delta oscillations in the left motor cortex align with speech and music beat, suggesting a role in the temporal integration of (quasi-)rhythmic stimulation. We propose that delta activity facilitates the temporal alignment of a listener's oscillatory activity with the prosodic fluctuations in a speaker's speech during multimodal speech perception. We recorded EEG responses in an audiovisual synchrony detection task while participants watched videos of a speaker. To test the temporal alignment of visual and auditory prosodic features, we filtered the speech signal to remove verbal content. Results showed (i) that participants accurately detected audiovisual synchrony; (ii) greater delta power in left frontal motor regions in response to audiovisual asynchrony, an effect that correlated with behavioural performance; and (iii) decreased delta-beta coupling in the left frontal motor regions when listeners could not accurately integrate visual and auditory prosodies. Together, these findings suggest that endogenous delta oscillations align fluctuating prosodic information conveyed by distinct sensory modalities onto a common temporal organisation in multimodal speech perception.
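
Delta-beta coupling of the kind reported here is commonly quantified as phase-amplitude coupling, for instance via the mean vector length of Canolty et al. (2006): beta amplitude weighted by the delta phase at which it occurs. The sketch below assumes a 1-3 Hz delta band and a 15-25 Hz beta band and substitutes simulated noise for real EEG; the authors' exact pipeline may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500  # sampling rate in Hz (assumed)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Stand-in for a left frontal EEG channel (60 s of simulated noise)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)

delta_phase = np.angle(hilbert(bandpass(eeg, 1, 3, fs)))  # delta phase
beta_amp = np.abs(hilbert(bandpass(eeg, 15, 25, fs)))     # beta amplitude

# Mean vector length: larger values = stronger delta-beta coupling
mvl = np.abs(np.mean(beta_amp * np.exp(1j * delta_phase)))
print(f"delta-beta coupling (MVL): {mvl:.4f}")
```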


PLoS Biology ◽  
2021 ◽  
Vol 19 (2) ◽  
pp. e3001142
Author(s):  
Sander van Bree ◽  
Ediz Sohoglu ◽  
Matthew H. Davis ◽  
Benedikt Zoefel

Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or “entrained”) to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, we provide fundamental results for several lines of research—including neural entrainment and tACS—and reveal endogenous neural oscillations as a key underlying principle for speech perception.
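
One simple way to probe the sustained effects described here is to compare spectral power at the stimulus rate in a window after stimulus offset against power at neighbouring frequencies: a peak that survives stimulus offset is more consistent with an endogenous oscillation than with a series of evoked responses. The sketch below uses an assumed ~3 Hz presentation rate and simulated data; the paper's MEG analysis is considerably more elaborate.

```python
import numpy as np

fs = 1000        # sampling rate in Hz (assumed)
stim_rate = 3.0  # stimulus presentation rate in Hz (assumed)

# Simulated post-offset sensor signal: weak 3 Hz rhythm buried in noise
rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1 / fs)
post_stim = 0.3 * np.sin(2 * np.pi * stim_rate * t) + rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(post_stim.size, 1 / fs)
power = np.abs(np.fft.rfft(post_stim)) ** 2

target = power[np.argmin(np.abs(freqs - stim_rate))]
neighbours = power[(np.abs(freqs - stim_rate) > 0.5)
                   & (np.abs(freqs - stim_rate) < 2.0)]
# A ratio well above 1 in the post-offset window hints at a true oscillation
print(f"power ratio at {stim_rate} Hz: {target / neighbours.mean():.2f}")
```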


2018 ◽  
Vol 129 ◽  
pp. e145 ◽  
Author(s):  
Mario E. Archila-Meléndez ◽  
Vivianne H. Kranen-Mastenbroek ◽  
Giancarlo Valente ◽  
João Correia ◽  
Erik D. Gommer ◽  
...  

2020 ◽  
Author(s):  
Sander van Bree ◽  
Ediz Sohoglu ◽  
Matthew H Davis ◽  
Benedikt Zoefel

Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes which continue after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, our results lay the foundation for a new account of speech perception which includes endogenous neural oscillations as a key underlying principle.


2019 ◽  
Author(s):  
Benedikt Zoefel ◽  
Isobella Allard ◽  
Megha Anil ◽  
Matthew H Davis

Several recent studies have used transcranial alternating current stimulation (tACS) to demonstrate a causal role of neural oscillatory activity in speech processing. In particular, it has been shown that the ability to understand speech in a multi-speaker scenario or background noise depends on the timing of speech presentation relative to simultaneously applied tACS. However, it is possible that tACS did not change actual speech perception but rather auditory stream segregation. In this study, we tested whether the phase relation between tACS and the rhythm of degraded words, presented in silence, modulates word report accuracy. We found strong evidence for a tACS-induced modulation of speech perception, but only if the stimulation was applied bilaterally using ring electrodes (not for unilateral left hemisphere stimulation with square electrodes). These results were only obtained when data were analyzed using a statistical approach that was identified as optimal in a previous simulation study. The effect was driven by a phasic disruption of word report scores. Our results suggest a causal role of neural entrainment for speech perception and emphasize the importance of optimizing stimulation protocols and statistical approaches for brain stimulation research.


2020 ◽  
Vol 63 (4) ◽  
pp. 1270-1281
Author(s):  
Leah Fostick ◽  
Riki Taitelbaum-Swead ◽  
Shulamith Kreitler ◽  
Shelly Zokraut ◽  
Miriam Billig

Purpose Difficulty in understanding spoken speech is a common complaint among aging adults, even when hearing impairment is absent. Correlational studies point to a relationship between age, auditory temporal processing (ATP), and speech perception but, unlike training studies, cannot demonstrate causality. In the current study, we test (a) the causal relationship between a spatial–temporal ATP task (temporal order judgment [TOJ]) and speech perception among aging adults using a training design and (b) whether improvement in aging adults' speech perception is accompanied by improved self-efficacy. Method Eighty-two participants aged 60–83 years were randomly assigned to a group receiving (a) ATP training (TOJ) over 14 days, (b) non-ATP training (intensity discrimination) over 14 days, or (c) no training. Results The data showed that TOJ training elicited improvement in all speech perception tests, which was accompanied by increased self-efficacy. Neither improvement in speech perception nor self-efficacy was evident following non-ATP training or no training. Conclusions Improvement resulting from TOJ training did not generalize to intensity discrimination, nor did improvement resulting from intensity discrimination training generalize to speech perception. These findings imply that the effect of TOJ training on speech perception is specific and that such improvement is not simply the product of generally improved auditory perception. They support the idea that the temporal properties of speech are indeed crucial for speech perception. Clinically, the findings suggest that aging adults can be trained to improve their speech perception, specifically through computer-based auditory training, and that this may improve perceived self-efficacy.
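
For readers unfamiliar with TOJ tasks: performance is typically summarised by fitting a psychometric function to order judgments across stimulus onset asynchronies (SOAs) and reading off a threshold. The sketch below is a generic illustration with hypothetical response proportions, not the training protocol used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical TOJ data: proportion of "left first" responses per SOA (ms);
# negative SOA = right stimulus led, positive = left stimulus led
soa = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
p_left_first = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])

def logistic(x, pse, slope):
    """Psychometric function with point of subjective equality and slope."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

(pse, slope), _ = curve_fit(logistic, soa, p_left_first, p0=[0.0, 30.0])

# For this logistic, the 25%-75% interval spans 2 * slope * ln(3),
# so a common just-noticeable-difference estimate is half that span
jnd = slope * np.log(3)
print(f"PSE: {pse:.1f} ms, JND: {jnd:.1f} ms")
```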


2020 ◽  
Vol 29 (2) ◽  
pp. 259-264 ◽  
Author(s):  
Hasan K. Saleh ◽  
Paula Folkeard ◽  
Ewan Macpherson ◽  
Susan Scollie

Purpose The original Connected Speech Test (CST; Cox et al., 1987) is a well-regarded and often utilized speech perception test. The aim of this study was to develop a new version of the CST using a neutral North American accent and to assess the use of this updated CST on participants with normal hearing. Method A female English speaker was recruited to read the original CST passages, which were recorded as the new CST stimuli. A study was designed to assess the equivalence of the newly recorded CST passages and to conduct normalization. The study included 19 Western University students (11 females and 8 males) with normal hearing and with English as a first language. Results Raw scores for the 48 tested passages were converted to rationalized arcsine units (RAUs), and passages whose average scores fell more than 1 RAU standard deviation from the mean were excluded. The internal reliability of the 32 remaining passages was assessed; the two-way random-effects intraclass correlation was .944. Conclusion The aim of our study was to create new CST stimuli with a more general North American accent in order to minimize accent effects on speech perception scores. The study resulted in 32 passages of equivalent difficulty for listeners with normal hearing.
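
The rationalized arcsine transform used above is Studebaker's (1985): a score of x correct out of n items is mapped to theta = arcsin(sqrt(x/(n+1))) + arcsin(sqrt((x+1)/(n+1))) and then to RAU = (146/pi) * theta - 23, yielding a roughly linear, homoscedastic scale running from about -23 to +123. A minimal sketch:

```python
import numpy as np

def rau(correct, n_items):
    """Rationalized arcsine transform (Studebaker, 1985)."""
    theta = (np.arcsin(np.sqrt(correct / (n_items + 1)))
             + np.arcsin(np.sqrt((correct + 1) / (n_items + 1))))
    return (146.0 / np.pi) * theta - 23.0

# Example: 20 of 25 key words reported correctly (raw proportion 0.80)
print(f"{rau(20, 25):.1f} RAU")  # ~78.6 on the rationalized scale
```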


2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that they would adopt different perception strategies due to different sensory experiences at an early age, limitations of the physical device, the developmental gap of language, and other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They were asked to perform judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur) while their eye movements were recorded by eye-tracking technology. Results Task had a slight influence on the distribution of selective attention, whereas subject group and language had significant influences. Specifically, the normal-hearing and the deaf participants mainly gazed at the speaker's eyes and mouth, respectively. Moreover, while the normal-hearing participants stared longer at the speaker's mouth when confronted with the unfamiliar language (Modern Uygur), the deaf participants did not change their attention allocation pattern when perceiving the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can also modulate the speech perception strategy.
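
The attention-allocation measure described here is commonly computed as the proportion of gaze samples falling inside areas of interest (AOIs) around the speaker's eyes and mouth. The sketch below uses hypothetical AOI boxes and simulated gaze positions; the study's actual eye-tracking pipeline is not specified in the abstract.

```python
import numpy as np

# Simulated fixation samples: (x, y) gaze positions in screen pixels
rng = np.random.default_rng(2)
gaze = rng.uniform([300, 200], [700, 800], size=(5000, 2))

# Assumed AOIs as (x_min, y_min, x_max, y_max) boxes
aois = {"eyes": (400, 300, 600, 380), "mouth": (430, 560, 570, 640)}

def dwell_proportion(gaze, box):
    """Fraction of gaze samples falling inside an AOI box."""
    x0, y0, x1, y1 = box
    inside = ((gaze[:, 0] >= x0) & (gaze[:, 0] <= x1)
              & (gaze[:, 1] >= y0) & (gaze[:, 1] <= y1))
    return inside.mean()

for name, box in aois.items():
    print(f"{name}: {dwell_proportion(gaze, box):.1%} of samples")
```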


2020 ◽  
Vol 63 (2) ◽  
pp. 487-498
Author(s):  
Puisan Wong ◽  
Man Wai Cheng

Purpose Theoretical models and substantial research have proposed that general auditory sensitivity is a developmental foundation for speech perception and language acquisition. Nonetheless, controversies exist about the effectiveness of general auditory training in improving speech and language skills. This research investigated the relationships among general auditory sensitivity, phonemic speech perception, and word-level speech perception via the examination of pitch and lexical tone perception in children. Method Forty-eight typically developing 4- to 6-year-old Cantonese-speaking children were tested on the discrimination of the pitch patterns of lexical tones in synthetic stimuli, discrimination of naturally produced lexical tones, and identification of lexical tone in familiar words. Results The findings revealed that accurate lexical tone discrimination and identification did not necessarily entail the accurate discrimination of nonlinguistic stimuli that followed the pitch levels and pitch shapes of lexical tones. Although pitch discrimination and tone discrimination abilities were strongly correlated, accuracy in pitch discrimination was lower than that in tone discrimination, and nonspeech pitch discrimination ability did not precede linguistic tone discrimination in the developmental trajectory. Conclusions Contradicting the theoretical models, the findings of this study suggest that general auditory sensitivity and speech perception may not be causally or hierarchically related. The finding that accuracy in pitch discrimination is lower than that in tone discrimination suggests that comparable nonlinguistic auditory perceptual ability may not be necessary for accurate speech perception and language learning. The results cast doubt on the use of nonlinguistic auditory perceptual training to improve children's speech, language, and literacy abilities.

