Tuning Neural Phase Entrainment to Speech

2017 ◽  
Vol 29 (8) ◽  
pp. 1378-1389 ◽  
Author(s):  
Simone Falk ◽  
Cosima Lanzilotti ◽  
Daniele Schön

Musical rhythm positively impacts subsequent speech processing. However, the neural mechanisms underlying this phenomenon remain unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened to and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that, compared with the irregular condition, the presence of a regular cue modulates the neural response during speech processing at critical frequencies, as estimated by EEG power spectral density, intertrial coherence, and source analyses. Importantly, intertrial coherence for regular cues was indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
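As a rough illustration of the intertrial coherence (ITC) measure named above, here is a minimal Python sketch assuming epoched EEG stored as a trials × samples array; the sampling rate, filter bandwidth, and 4 Hz target frequency are hypothetical placeholders, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def intertrial_coherence(epochs, fs, freq, bw=1.0):
    """Intertrial coherence (phase-locking across trials) at one frequency.

    epochs : ndarray (n_trials, n_samples) of EEG epochs
    fs     : sampling rate in Hz
    freq   : center frequency of interest (e.g. a syllable rate)
    bw     : half-bandwidth of the band-pass filter in Hz
    Returns one coherence value per time point, in [0, 1].
    """
    # Band-pass around the frequency of interest.
    sos = butter(4, [freq - bw, freq + bw], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=1)
    # Instantaneous phase from the analytic signal.
    phase = np.angle(hilbert(filtered, axis=1))
    # Length of the mean unit phase vector across trials.
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Hypothetical usage: 60 trials of 2-s epochs at 250 Hz, probing a 4 Hz syllable rate.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 500))  # placeholder noise, not real EEG
itc = intertrial_coherence(epochs, fs=250, freq=4.0)
print(itc.shape)  # (500,): one ITC value per sample
```

An ITC near 1 means the phase at that frequency is consistently aligned across trials; an ITC near 0 means the phase is random from trial to trial.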

2002 ◽  
Vol 87 (6) ◽  
pp. 2817-2822 ◽  
Author(s):  
Christopher L. Douglas ◽  
Helen A. Baghdoyan ◽  
Ralph Lydic

Recent evidence suggests that muscarinic cholinergic receptors of the M2 subtype serve as autoreceptors modulating acetylcholine (ACh) release in prefrontal cortex. The potential contribution of M2 autoreceptors to excitability control of prefrontal cortex has not been investigated. The present study tested the hypothesis that M2 autoreceptors contribute to activation of the cortical electroencephalogram (EEG) in the C57BL/6J (B6) mouse. This hypothesis was evaluated using microdialysis delivery of the muscarinic antagonist AF-DX 116 (3 nM) while simultaneously quantifying ACh release in prefrontal cortex, the number of 7- to 14-Hz EEG spindles, and EEG power spectral density. Mean ACh release in prefrontal cortex was significantly increased (P < 0.0002) by AF-DX 116. The number of 7- to 14-Hz EEG spindles caused by halothane anesthesia was significantly decreased (P < 0.0001) by dialysis delivery of AF-DX 116 to prefrontal cortex. The cholinergically induced cortical activation was characterized by a significant (P < 0.05) decrease in slow-wave EEG power. Together, these neurochemical and EEG data support the conclusion that blocking M2 autoreceptors enhances ACh release in prefrontal cortex, activating the EEG in the contralateral prefrontal cortex of the B6 mouse. EEG slow-wave activity varies across mouse strains, and these results encourage comparative phenotyping of cortical ACh release and EEG in additional mouse models.
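For context on the spectral measure reported (slow-wave EEG power from a power spectral density), the following is a minimal Python sketch, assuming a 1-D EEG trace; the sampling rate, 0.5-4 Hz band edges, and random data are placeholders, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Integrated Welch PSD of a 1-D EEG trace within a frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-s windows
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df  # approximate integral over the band

# Hypothetical usage: compare slow-wave (0.5-4 Hz) power before vs. after drug delivery.
fs = 256
rng = np.random.default_rng(1)
pre = rng.standard_normal(60 * fs)   # 60 s of 'pre' EEG (placeholder noise)
post = rng.standard_normal(60 * fs)  # 60 s of 'post' EEG (placeholder noise)
print(band_power(pre, fs, (0.5, 4.0)), band_power(post, fs, (0.5, 4.0)))
```

A decrease in this slow-wave band power after drug delivery would correspond to the EEG activation described in the abstract.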


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Raphaël Thézé ◽  
Mehdi Ali Gadiri ◽  
Louis Albert ◽  
Antoine Provost ◽  
Anne-Lise Giraud ◽  
...  

Abstract Natural speech is processed in the brain as a mixture of auditory and visual features. An example of the importance of visual speech is the McGurk effect and related perceptual illusions that result from mismatching auditory and visual syllables. Although the McGurk effect has been widely applied to the exploration of audio-visual speech processing, it relies on isolated syllables, which severely limits the conclusions that can be drawn from the paradigm. In addition, the extreme variability and inconsistent quality of the stimuli usually employed prevent comparability across studies. To overcome these limitations, we present an innovative methodology using 3D virtual characters with realistic lip movements synchronized with computer-synthesized speech. We used commercially accessible and affordable tools to facilitate reproducibility and comparability, and the set-up was validated on 24 participants performing a perception task. Within complete and meaningful French sentences, we paired a labiodental fricative viseme (i.e. /v/) with a bilabial occlusive phoneme (i.e. /b/). This audiovisual mismatch is known to induce the illusion of hearing /v/ in a proportion of trials. We tested the rate of the illusion while varying the magnitude of background noise and the audiovisual lag. Overall, the effect was observed in 40% of trials. The proportion rose to about 50% with added background noise and up to 66% when controlling for phonetic features. Our results conclusively demonstrate that computer-generated speech stimuli are a judicious choice: they can supplement natural speech while affording greater control over stimulus timing and content.
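The reported percentages reduce to per-condition proportions of /v/ responses. The sketch below shows that computation in Python with pandas on a hypothetical trial log; the column names and condition levels are invented for illustration, not the study's design.

```python
import pandas as pd

# Hypothetical trial log: one row per trial, 'percept' is the reported syllable.
trials = pd.DataFrame({
    "noise_db": [0, 0, 0, 10, 10, 10, 20, 20, 20],
    "lag_ms":   [0, 100, 200, 0, 100, 200, 0, 100, 200],
    "percept":  ["b", "v", "v", "v", "b", "v", "v", "v", "b"],
})

# Illusion rate: proportion of trials heard as /v/ despite the auditory /b/.
rate = (trials.assign(illusion=trials["percept"].eq("v"))
              .groupby(["noise_db", "lag_ms"])["illusion"]
              .mean())
print(rate)
```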


Displays ◽  
2014 ◽  
Vol 35 (5) ◽  
pp. 266-272 ◽  
Author(s):  
Chunxiao Chen ◽  
Jing Wang ◽  
Kun Li ◽  
Qiuyi Wu ◽  
Haowen Wang ◽  
...  

2019 ◽  
Vol 130 (8) ◽  
pp. 1311-1319 ◽  
Author(s):  
Cyril Touchard ◽  
Jérôme Cartailler ◽  
Charlotte Levé ◽  
Pierre Parutto ◽  
Cédric Buxin ◽  
...  

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Maya Inbar ◽  
Eitan Grossman ◽  
Ayelet N. Landau

Abstract Studies of speech processing investigate the relationship between temporal structure in speech stimuli and neural activity. Despite clear evidence that the brain tracks speech at low frequencies (~1 Hz), it is not well understood what linguistic information gives rise to this rhythm. In this study, we harness linguistic theory to draw attention to Intonation Units (IUs), a fundamental prosodic unit of human language, and characterize their temporal structure as captured in the speech envelope, an acoustic representation relevant to the neural processing of speech. IUs are defined by a specific pattern of syllable delivery, together with resets in pitch and articulatory force. Linguistic studies of spontaneous speech indicate that this prosodic segmentation paces new information in language use across diverse languages. IUs therefore provide a universal structural cue for the cognitive dynamics of speech production and comprehension. We study the relation between IUs and periodicities in the speech envelope, applying methods from investigations of neural synchronization. Our sample includes recordings of everyday speech from over 100 speakers in six languages. We find that sequences of IUs form a consistent low-frequency rhythm and constitute a significant periodic cue within the speech envelope. Our findings lead to the prediction that IUs are used by the neural system when tracking speech. The methods we introduce here facilitate testing this prediction in the future with physiological data.
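To make the envelope analysis concrete, here is a minimal Python sketch of one common way to extract a speech envelope and inspect its low-frequency spectrum (Hilbert envelope, low-pass filter, Welch PSD); the cutoff, downsampling rate, and audio parameters are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, welch

def envelope_spectrum(audio, fs, lp_cutoff=10.0, fs_env=100):
    """Low-frequency power spectrum of a speech amplitude envelope.

    audio : 1-D waveform; fs : its sampling rate in Hz.
    Returns (freqs, psd) for the low-pass-filtered, downsampled envelope.
    """
    env = np.abs(hilbert(audio))                     # broadband amplitude envelope
    sos = butter(4, lp_cutoff, fs=fs, output="sos")  # keep only slow modulations
    env = sosfiltfilt(sos, env)
    env = env[:: fs // fs_env]                       # decimate to ~100 Hz
    # 8-s Welch windows give 0.125 Hz resolution around the ~1 Hz IU rate.
    return welch(env, fs=fs_env, nperseg=fs_env * 8)

# Hypothetical usage on 30 s of (placeholder) audio sampled at 16 kHz:
rng = np.random.default_rng(2)
freqs, psd = envelope_spectrum(rng.standard_normal(30 * 16000), fs=16000)
print(freqs[np.argmax(psd)])  # frequency of the dominant envelope rhythm
```

On real recordings, a peak in the returned spectrum near 1 Hz would correspond to the IU-scale rhythm described above.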


2005 ◽  
Vol 116 (10) ◽  
pp. 2429-2440 ◽  
Author(s):  
Raffaele Ferri ◽  
Oliviero Bruni ◽  
Silvia Miano ◽  
Giuseppe Plazzi ◽  
Mario G. Terzano
