Envelope reconstruction of speech and music highlights unique tracking of speech at low frequencies

2021
Author(s):
Nathaniel J Zuk
Jeremy W Murphy
Richard B Reilly
Edmund C Lalor

Abstract The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the processing of higher-order features and one’s cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, speech envelope tracking at low frequencies, below 1 Hz, was uniquely associated with increased weighting over parietal channels. Our results highlight the importance of low-frequency speech tracking and its origin from speech-specific processing in the brain.

2021
Vol 17 (9)
pp. e1009358
Author(s):
Nathaniel J. Zuk
Jeremy W. Murphy
Richard B. Reilly
Edmund C. Lalor

The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one’s cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, models trained on all stimulus types performed as well or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
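The backward-modelling approach this abstract builds on can be illustrated with a small sketch. This is not the authors' frequency-constrained method; it is a generic ridge-regularized linear decoder run on synthetic data, with the sample rate, lag range, and noise level all chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 64                        # EEG sample rate in Hz (illustrative)
n_samples, n_channels = fs * 60, 8
lags = np.arange(16)           # 0-250 ms of EEG lags at 64 Hz

# Synthetic stand-in for a recording: the stimulus envelope leaks
# linearly into every EEG channel, buried in noise.
envelope = rng.standard_normal(n_samples)
mixing = rng.standard_normal(n_channels)
eeg = envelope[:, None] * mixing + 2.0 * rng.standard_normal((n_samples, n_channels))

def lag_matrix(data, lags):
    """Stack time-lagged copies of every channel into a design matrix."""
    n, c = data.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * c:(i + 1) * c] = data[:n - lag]
    return X

# Backward model: ridge-regularized least squares from lagged EEG to envelope.
X = lag_matrix(eeg, lags)
lam = 100.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

reconstruction = X @ w
accuracy = np.corrcoef(reconstruction, envelope)[0, 1]   # reconstruction r
```

In practice the decoder is trained and evaluated on separate data, and the envelope is band-limited before training, which is where a frequency constraint like the one described in the abstract would enter.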


2020
Vol 10 (1)
Author(s):
Maya Inbar
Eitan Grossman
Ayelet N. Landau

Abstract Studies of speech processing investigate the relationship between temporal structure in speech stimuli and neural activity. Despite clear evidence that the brain tracks speech at low frequencies (~1 Hz), it is not well understood what linguistic information gives rise to this rhythm. In this study, we harness linguistic theory to draw attention to Intonation Units (IUs), a fundamental prosodic unit of human language, and characterize their temporal structure as captured in the speech envelope, an acoustic representation relevant to the neural processing of speech. IUs are defined by a specific pattern of syllable delivery, together with resets in pitch and articulatory force. Linguistic studies of spontaneous speech indicate that this prosodic segmentation paces new information in language use across diverse languages. Therefore, IUs provide a universal structural cue for the cognitive dynamics of speech production and comprehension. We study the relation between IUs and periodicities in the speech envelope, applying methods from investigations of neural synchronization. Our sample includes recordings from everyday speech contexts of over 100 speakers and six languages. We find that sequences of IUs form a consistent low-frequency rhythm and constitute a significant periodic cue within the speech envelope. Our findings allow us to predict that IUs are utilized by the neural system when tracking speech. The methods we introduce here facilitate testing this prediction in the future (i.e., with physiological data).
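The analysis pipeline the abstract alludes to, extracting the speech envelope and looking for a low-frequency periodicity in it, can be sketched on synthetic data. This is a generic Hilbert-envelope-plus-Welch approach, not the authors' exact method; the 0.8 Hz modulation rate, sample rate, and filter cutoffs are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

rng = np.random.default_rng(1)
fs = 1000                                  # audio sample rate (illustrative)
t = np.arange(0, 30, 1 / fs)

# Synthetic "speech": a broadband carrier amplitude-modulated at 0.8 Hz,
# standing in for the slow pacing of Intonation Units.
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 0.8 * t)
signal = modulation * rng.standard_normal(t.size)

# Speech envelope: magnitude of the analytic signal, low-passed at 10 Hz.
env = np.abs(hilbert(signal))
b, a = butter(4, 10 / (fs / 2))
env = filtfilt(b, a, env)

# Periodicity: Welch spectrum of the envelope; look for a low-frequency peak.
f, pxx = welch(env - env.mean(), fs=fs, nperseg=8 * fs)
low = (f > 0.2) & (f < 2.0)
peak_freq = f[low][np.argmax(pxx[low])]    # recovers the modulation rate
```

On real recordings the low-frequency peak would reflect the rate at which IUs follow one another rather than a planted sinusoid.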


2016
Author(s):
K. Kessler
R. A. Seymour
G. Rippon

Abstract Although atypical social behaviour remains a key characterisation of ASD, the presence of sensory and perceptual abnormalities has been given a more central role in recent classification changes. An understanding of the origins of such aberrations could thus prove a fruitful focus for ASD research. Early neurocognitive models of ASD suggested that the study of high frequency activity in the brain as a measure of cortical connectivity might provide the key to understanding the neural correlates of sensory and perceptual deviations in ASD. As our review shows, the findings from subsequent research have been inconsistent, with a lack of agreement about the nature of any high frequency disturbances in ASD brains. Based on the application of new techniques using more sophisticated measures of brain synchronisation, direction of information flow, and invoking the coupling between high and low frequency bands, we propose a framework which could reconcile apparently conflicting findings in this area and would be consistent both with emerging neurocognitive models of autism and with the heterogeneity of the condition.
Highlights:
Sensory and perceptual aberrations are becoming a core feature of the ASD symptom profile.
Brain oscillations and functional connectivity are consistently affected in ASD.
Relationships (coupling) between high and low frequencies are also deficient.
A novel framework proposes that the ASD brain is marked by local dysregulation and reduced top-down connectivity.
The ASD brain’s ability to predict stimuli and events in the environment may be affected.
This may underlie perceptual sensitivities and cascade into social processing deficits in ASD.


2021
pp. 1-62
Author(s):
Orsolya B Kolozsvári
Weiyong Xu
Georgia Gerike
Tiina Parviainen
Lea Nieminen
...

Speech perception is dynamic and shows changes across development. In parallel, functional differences in brain development over time have been well documented, and these differences may interact with changes in speech perception during infancy and childhood. Further, there is evidence that the two hemispheres contribute unequally to speech segmentation at the sentence and phonemic levels. To disentangle those contributions, we studied the cortical tracking of various-sized units of speech that are crucial for spoken language processing in children (4.7-9.3-year-olds, N=34) and adults (N=19). We measured participants’ magnetoencephalogram (MEG) responses to syllables, words, and sentences, calculated the coherence between the speech signal and MEG responses at the level of words and sentences, and further examined auditory evoked responses to syllables. Age-related differences were found in coherence values at the delta and theta frequency bands. Both frequency bands showed an effect of stimulus type, although this was attributed to the length of the stimulus rather than to linguistic unit size. There was no difference between hemispheres at the source level, either in coherence values for word or sentence processing or in evoked responses to syllables. The results highlight the importance of the lower frequencies for speech tracking in the brain across different lexical units. Further, stimulus length affects the speech-brain associations, suggesting that methodological approaches should be selected carefully when studying speech envelope processing at the neural level. Speech tracking in the brain seems decoupled from the more general maturation of the auditory cortex.
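The speech-brain coherence measure used in the abstract can be sketched with `scipy.signal.coherence`. The data here are synthetic stand-ins (a white-noise "envelope" and one channel that tracks it with a 100 ms delay plus noise); the sample rate, delay, and band edges are illustrative, and the delta/theta definitions below are conventional rather than necessarily the study's exact ones.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 200                                   # sample rate in Hz (illustrative)
n = fs * 120

# Stand-ins for a speech envelope and one MEG channel that tracks it
# with a 100 ms delay plus independent noise.
envelope = rng.standard_normal(n)
meg = np.roll(envelope, fs // 10) + 2.0 * rng.standard_normal(n)

# Magnitude-squared coherence between stimulus and neural signal.
f, coh = coherence(envelope, meg, fs=fs, nperseg=4 * fs)

# Average coherence in the delta and theta bands examined in the study.
delta_coh = coh[(f >= 0.5) & (f < 4.0)].mean()
theta_coh = coh[(f >= 4.0) & (f < 8.0)].mean()
```

Because coherence is magnitude-based, the fixed tracking delay does not reduce it; only the independent noise does, which is what makes it a convenient tracking measure across stimulus types.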


2019
Vol 8 (4)
pp. 10576-10579

Epilepsy is a chronic neurological disorder of the brain which, according to the World Health Organization, affects around 59 million people worldwide. Epilepsy is characterized by recurrent seizures. Although treatment is successful in about 70% of cases, it may be possible to raise that effectiveness toward 100% by utilizing the advances in medical imaging and 3D image processing techniques presented in this paper. Functional magnetic resonance imaging (fMRI) is used to analyze the metabolic activity of the brain; combined with suitable image processing, the seizure origin can be located and removed through appropriate surgery without compromising other regions of the brain. Low-frequency fluctuations present in the brain signals are measured, analyzed, and found to be in accordance with the functioning of the brain. After common artifacts are discarded, the images are processed further and their correlation with the epileptic seizure is studied. The investigation revealed the exact location of the epileptic seizure, which was corroborated by the functional connectivity results.
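The low-frequency fMRI analysis the abstract gestures at can be sketched as seed-based functional connectivity: band-pass the BOLD time series to the slow range and correlate every voxel with a seed. This is a generic resting-state-style sketch on synthetic data, not the paper's pipeline; the TR, band edges, and the 0.03 Hz shared fluctuation are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(3)
tr = 2.0                                   # repetition time in s (illustrative)
fs = 1.0 / tr
n_vols, n_voxels = 300, 50
t = np.arange(n_vols) * tr

# Synthetic BOLD data: a 0.03 Hz fluctuation shared by the seed (voxel 0)
# and the first 20 voxels, on top of independent noise.
slow = np.sin(2 * np.pi * 0.03 * t)
data = rng.standard_normal((n_vols, n_voxels))
data[:, :20] += 1.5 * slow[:, None]

# Band-pass to the low-frequency range typical of resting-state fMRI.
b, a = butter(2, [0.01 / (fs / 2), 0.08 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, data, axis=0)

# Seed-based connectivity: correlate every voxel with the seed voxel.
seed = filtered[:, 0]
conn = np.array([np.corrcoef(seed, filtered[:, v])[0, 1]
                 for v in range(n_voxels)])
```

Voxels sharing the slow fluctuation show high correlation with the seed, while the rest hover near zero, which is the contrast a connectivity map of a seizure focus would exploit.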


2019
Author(s):
Maya Inbar
Eitan Grossman
Ayelet N. Landau

Abstract Studies of speech processing investigate the relationship between temporal structure in speech stimuli and neural activity. Despite clear evidence that the brain tracks speech at low frequencies (~1 Hz), it is not well understood what linguistic information gives rise to this rhythm. Here, we harness linguistic theory to draw attention to Intonation Units (IUs), a fundamental prosodic unit of human language, and characterize their temporal structure as captured in the speech envelope, an acoustic representation relevant to the neural processing of speech.

IUs are defined by a specific pattern of syllable delivery, together with resets in pitch and articulatory force. Linguistic studies of spontaneous speech indicate that this prosodic segmentation paces new information in language use across diverse languages. Therefore, IUs provide a universal structural cue for the cognitive dynamics of speech production and comprehension.

We study the relation between IUs and periodicities in the speech envelope, applying methods from investigations of neural synchronization. Our sample includes recordings from everyday speech contexts of over 100 speakers and six languages. We find that sequences of IUs form a consistent low-frequency rhythm and constitute a significant periodic cue within the speech envelope. Our findings allow us to predict that IUs are utilized by the neural system when tracking speech, and the methods we introduce facilitate testing this prediction given physiological data.


2018
Author(s):
Ben Somers
Eline Verschueren
Tom Francart

Abstract
Objective: When listening to speech, the brain tracks the speech envelope, and it is possible to reconstruct this envelope from EEG recordings. However, in people who hear through a cochlear implant (CI), the artifacts caused by electrical stimulation of the auditory nerve contaminate the EEG. This causes the decoder to produce an artifact-dominated reconstruction, which does not reflect the neural signal processing. The objective of this study is to develop and validate a method for assessing neural tracking of the speech envelope in CI users.
Approach: To obtain EEG recordings free of stimulus artifacts, the electrical stimulation is periodically interrupted. During these stimulation gaps, artifact-free EEG can be sampled and used to train a linear envelope decoder. Different recording conditions were used to characterize the artifacts and their influence on the envelope reconstruction.
Main results: The present study demonstrates for the first time that neural tracking of the speech envelope can be measured in response to ongoing electrical stimulation. The responses were validated to be truly neural and not affected by stimulus artifact.
Significance: Besides applications in audiology and neuroscience, the characterization and elimination of stimulus artifacts will enable future EEG studies involving continuous speech in CI users. Measures of neural tracking of the speech envelope reflect interesting properties of the listener’s perception of speech, such as speech intelligibility or attentional state. Successful decoding of neural envelope tracking will open new possibilities for investigating the neural mechanisms of speech perception with a CI.
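The gap-sampling idea in the Approach section can be sketched in a few lines: build a boolean mask from the interruption schedule and keep only samples that fall inside gaps. The 50 ms period, 5 ms gap, sample rate, and artifact amplitude below are hypothetical round numbers, not the study's actual stimulation parameters.

```python
import numpy as np

fs = 500                                   # EEG sample rate in Hz (illustrative)
t = np.arange(fs * 10) / fs

# Hypothetical interruption schedule: stimulation pauses for 5 ms
# out of every 50 ms; only those gaps yield artifact-free EEG.
period, gap = 0.050, 0.005
in_gap = (t % period) >= (period - gap)

neural = np.random.default_rng(4).standard_normal(t.size)
artifact = np.where(in_gap, 0.0, 50.0)     # large artifact while stimulating
recorded = neural + artifact

# Keep only the samples that fall inside stimulation gaps.
clean = recorded[in_gap]
```

A decoder trained on `clean` (paired with the envelope samples at the same instants) sees only neural activity, which is what lets the reconstruction reflect tracking rather than artifact.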


GeroPsych
2012
Vol 25 (4)
pp. 235-245
Author(s):
Katja Franke
Christian Gaser

We recently proposed a novel method that aggregates the multidimensional aging pattern across the brain to a single value. This method proved to provide stable and reliable estimates of brain aging – even across different scanners. While investigating longitudinal changes in BrainAGE in about 400 elderly subjects, we discovered that patients with Alzheimer’s disease and subjects who had converted to AD within 3 years showed accelerated brain atrophy by +6 years at baseline. An additional increase in BrainAGE accumulated to a score of about +9 years during follow-up. Accelerated brain aging was related to prospective cognitive decline and disease severity. In conclusion, the BrainAGE framework indicates discrepancies in brain aging and could thus serve as an indicator for cognitive functioning in the future.
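The core of the BrainAGE idea, predicting age from brain features and reading the prediction error as accelerated or decelerated aging, can be sketched on synthetic data. The published framework uses relevance vector regression on voxel-based morphometry; the ridge regression and fabricated features below are a simpler stand-in for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_features = 200, 30

# Synthetic morphometric features that vary linearly with age plus noise.
age = rng.uniform(55.0, 85.0, n_subjects)
loadings = rng.standard_normal(n_features)
features = age[:, None] * loadings + 5.0 * rng.standard_normal((n_subjects, n_features))

# Ridge regression from brain features to chronological age.
X = np.c_[np.ones(n_subjects), features]
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ age)

predicted = X @ w
brain_age_gap = predicted - age            # positive gap: brain looks "older"
```

In the study above, the gap averaged about +6 years at baseline for patients who had or would convert to AD, growing to about +9 years at follow-up.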


Author(s):
Yuliya S. Dzhos
Irina A. Men’shikova
This article presents the results of a study of spectral electroencephalogram (EEG) characteristics in 7–10-year-old children (8 girls and 22 boys) with difficulties in voluntary regulation of activity, after 10 and after 20 neurofeedback sessions of beta-activating training. Brain bioelectric activity was recorded in 16 standard leads using the Neuron-Spectrum-4/VPM complex. The dynamics were assessed in the EEG beta and theta bands during neurofeedback. An increase in the total power of beta-band oscillations was established both after 10 and after 20 sessions of EEG biofeedback in the frontal (p ≤ 0.001), left parietal (p ≤ 0.036), and temporal (p ≤ 0.003) areas of the brain. A decrease in the spectral characteristics of theta-band oscillations was detected: after 10 neurofeedback sessions, in the frontal (p ≤ 0.008) and temporal (p ≤ 0.006) areas of both hemispheres, as well as in the parietal area of the left hemisphere (p ≤ 0.005); after 20 sessions, in the central (p ≤ 0.004), frontal (p ≤ 0.001), and temporal (p ≤ 0.001) areas of both hemispheres, as well as in the occipital (p ≤ 0.047) and parietal (p ≤ 0.001) areas of the left hemisphere. This study of the dynamics of bioelectric activity during EEG biofeedback in 7–10-year-old children with impaired voluntary regulation of higher mental functions demonstrated the advisability of a 20-session course, as the increase in high-frequency activity and the decrease in low-frequency activity do not stop at the 10th session. Changes in these parameters after 10 EEG biofeedback sessions appear mainly in the frontotemporal areas of both hemispheres, whereas after a course of 20 sessions they appear in both the frontotemporal and the central-parietal areas of the brain.
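The band-power quantities tracked across sessions can be sketched as integration of a Welch power spectral density over the theta (4-8 Hz) and beta bands. The synthetic signal, sample rate, and band edges below are illustrative; the study's Neuron-Spectrum hardware and exact band definitions are not modeled.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
fs = 250                                   # EEG sample rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)

# Synthetic EEG: a strong 6 Hz theta rhythm, a weaker 20 Hz beta rhythm,
# and broadband noise.
eeg = (2.0 * np.sin(2 * np.pi * 6 * t)
       + 0.5 * np.sin(2 * np.pi * 20 * t)
       + rng.standard_normal(t.size))

f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(f, pxx, lo, hi):
    """Total spectral power in [lo, hi) Hz."""
    m = (f >= lo) & (f < hi)
    return pxx[m].sum() * (f[1] - f[0])

theta_power = band_power(f, pxx, 4.0, 8.0)
beta_power = band_power(f, pxx, 13.0, 30.0)
```

Beta-activating training aims to move the ratio of these two numbers, raising beta power while lowering theta power, which is exactly the contrast reported per scalp area above.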


2021
Vol 7 (1)
Author(s):
Jing Guang
Halen Baker
Orilia Ben-Yishay Nizri
Shimon Firman
Uri Werner-Reiss
...

Abstract Deep brain stimulation (DBS) is currently a standard procedure for advanced Parkinson’s disease. Many centers employ awake physiological navigation and stimulation assessment to optimize DBS localization and outcome. To enable DBS under sedation ("asleep DBS"), we characterized the cortico-basal ganglia neuronal network of two nonhuman primates under propofol, ketamine, and interleaved propofol-ketamine (IPK) sedation. Further, we compared these sedation states in the healthy and Parkinsonian condition to those of healthy sleep. Ketamine increases high-frequency power and synchronization, while propofol increases low-frequency power and synchronization, in polysomnography and neuronal activity recordings. Thus, ketamine does not mask the low-frequency oscillations used for physiological navigation toward the basal ganglia DBS targets. The brain spectral state under ketamine and propofol mimicked rapid eye movement (REM) and non-REM (NREM) sleep activity, respectively, and the IPK protocol resembles the NREM-REM sleep cycle. These promising results are a meaningful step toward asleep DBS with nondistorted physiological navigation.

