Distinct contributions of low- and high-frequency neural oscillations to speech comprehension

2016 ◽ Vol 32 (5) ◽ pp. 536-544 ◽ Author(s): Anne Kösem ◽ Virginie van Wassenhove

2016 ◽ Vol 116 (6) ◽ pp. 2497-2512 ◽ Author(s): Anne Kösem ◽ Anahita Basirat ◽ Leila Azizi ◽ Virginie van Wassenhove

During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically informed on an individual's conscious speech percept.
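
To make the two quantities contrasted in this study concrete, the sketch below band-limits a single simulated channel and extracts the low-frequency phase and the high-frequency amplitude envelope with the Hilbert transform. It is a minimal illustration with assumed band edges and synthetic data, not the authors' MEG analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)                 # 10 s of simulated "sensor" data
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 3 * t)               # slow, syllable-rate component
     + 0.3 * np.sin(2 * np.pi * 40 * t)      # gamma-range component
     + 0.5 * rng.standard_normal(t.size))    # noise

def band(sig, lo, hi, fs, order=4):
    """Zero-phase band-pass filter using second-order sections."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

# Low-frequency (delta/theta) phase: the quantity that tracked prelexical
# segmentation cues in the stimulus.
low_phase = np.angle(hilbert(band(x, 1, 8, fs)))

# High-frequency (beta/gamma) amplitude envelope: the latency of this
# envelope relative to stimulus events is what varied with the percept.
high_env = np.abs(hilbert(band(x, 25, 80, fs)))

print(low_phase[:3], high_env[:3])
```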


2013 ◽ Vol 15 (3) ◽ pp. 301-313

Neural oscillations at low- and high-frequency ranges are a fundamental feature of large-scale networks. Recent evidence indicates that schizophrenia is associated with abnormal amplitude and synchrony of oscillatory activity, in particular at high (beta/gamma) frequencies. These abnormalities are observed during both task-related and spontaneous neuronal activity and may be important for understanding the pathophysiology of the syndrome. In this paper, we shall review the current evidence for impaired beta/gamma-band oscillations and their involvement in cognitive functions and certain symptoms of the disorder. In the first part, we will provide an update on neural oscillations during normal brain functions and discuss underlying mechanisms. We will then review studies that have examined high-frequency oscillatory activity in schizophrenia and discuss evidence that relates abnormalities of oscillatory activity to disturbed excitatory/inhibitory (E/I) balance. Finally, we shall identify critical issues for future research in this area.
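
As a rough operational illustration of the two properties discussed here, the sketch below computes gamma-band amplitude and inter-trial phase coherence (a standard synchrony measure) on simulated trials. Band edges, trial structure, and data are assumptions for illustration only; the reviewed studies use a variety of task designs and estimators.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500.0
n_trials, n_samples = 50, int(1.0 * fs)
rng = np.random.default_rng(4)
t = np.arange(n_samples) / fs
# Trials containing a partially phase-locked 40 Hz component plus noise.
trials = (np.sin(2 * np.pi * 40 * t + 0.3 * rng.standard_normal((n_trials, 1)))
          + rng.standard_normal((n_trials, n_samples)))

sos = butter(4, [30, 80], btype="band", fs=fs, output="sos")   # gamma band
analytic = hilbert(sosfiltfilt(sos, trials, axis=-1), axis=-1)

amplitude = np.abs(analytic).mean()                         # mean gamma amplitude
itc = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=0))  # ITC per time point
print(f"mean gamma amplitude: {amplitude:.2f}, peak ITC: {itc.max():.2f}")
```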


Author(s): Javad Fakhri ◽ Nematollah Rouhbakhsh ◽ Reza Hoseinabadi ◽ Farzaneh Fatahi ◽ Mahsa Sepehernejad ◽ ...

Introduction: Due to technological limitations, cochlear implants cause problems in speech comprehension in the presence of noise. This study aimed to evaluate speech-in-noise (SIN) comprehension, with emphasis on high-frequency components, among adult bimodal users of devices from different manufacturers. Materials and Methods: The study was conducted on 33 adult participants with a mean age of 36 years who used a bimodal configuration (a cochlear implant in one ear and a hearing aid in the other ear: CI/HA) with devices from different companies. A Quick SIN test with emphasis on high-frequency components was administered to the participants using an audiometer, an amplifier, and one speaker. Results: Comparison of the average percentage of correct answers on the word recognition test in noise showed that the Cochlear brand provided a better signal-to-noise ratio (SNR) than the other brands, while bimodal users in the Advanced Bionics and Med-El groups showed better speech recognition performance than users of other brands. Conclusion: Bimodal users of Advanced Bionics and Med-El devices had better SNR loss than users of other brands. Further studies across different age groups would be helpful for making the right decision in this regard.
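
For context, the conventional QuickSIN score is an SNR loss of 25.5 dB minus the number of key words repeated correctly across one list of six sentences (five key words each, presented from +25 to 0 dB SNR in 5 dB steps). The sketch below implements that generic convention; the high-frequency-emphasis variant used in this study may be scored differently, so treat it as an illustration rather than the authors' exact method.

```python
# Generic QuickSIN scoring convention, not necessarily the study's variant.
def quicksin_snr_loss(words_correct_per_sentence):
    """Return SNR loss (dB) for one QuickSIN list of six sentences."""
    if len(words_correct_per_sentence) != 6:
        raise ValueError("A QuickSIN list has exactly six sentences.")
    total_correct = sum(words_correct_per_sentence)   # max 30 key words
    return 25.5 - total_correct

# Example: a listener repeating fewer key words as the SNR decreases.
print(quicksin_snr_loss([5, 5, 4, 3, 2, 1]))  # -> 5.5 dB SNR loss
```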


2018 ◽ Vol 30 (5) ◽ pp. 770-784 ◽ Author(s): Torben Ott ◽ Stephanie Westendorff ◽ Andreas Nieder

Neural oscillations in distinct frequency bands in the prefrontal cortex (pFC) are associated with specialized roles during cognitive control. How dopamine modulates oscillations to structure pFC functions remains unknown. We trained macaques to switch between two numerical rules and recorded local field potentials from pFC while applying dopamine receptor targeting drugs using microiontophoresis. We show that the D1 and D2 family receptors (D1Rs and D2Rs, respectively) specifically altered internally generated prefrontal oscillations, whereas sensory-evoked potentials remained unchanged. Blocking D1Rs or stimulating D2Rs increased low-frequency theta and alpha oscillations known to be involved in learning and memory. In contrast, only D1R inhibition enhanced high-frequency beta oscillations, whereas only D2R stimulation increased gamma oscillations linked to top–down and bottom–up attentional processing. These findings suggest that dopamine alters neural oscillations relevant for executive functioning through dissociable actions at the receptor level.
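
A hedged sketch of the kind of band-wise comparison the abstract describes: Welch power spectra of a local field potential under two conditions (e.g., control vs. a drug application), summarized per frequency band. The band edges and simulated signals are assumptions; this is not the authors' analysis code.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0
rng = np.random.default_rng(1)
lfp_control = rng.standard_normal(int(30 * fs))        # 30 s of simulated "LFP"
lfp_drug = lfp_control + 0.5 * np.sin(
    2 * np.pi * 20 * np.arange(int(30 * fs)) / fs)     # added beta-band power

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (15, 30), "gamma": (30, 90)}

def band_power(x, fs, band):
    """Mean power spectral density within a frequency band (Welch estimate)."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    lo, hi = band
    return pxx[(f >= lo) & (f < hi)].mean()

for name, band in BANDS.items():
    ratio = band_power(lfp_drug, fs, band) / band_power(lfp_control, fs, band)
    print(f"{name:6s} power ratio (drug / control): {ratio:.2f}")
```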


2018 ◽ Author(s): Sevada Hovsepyan ◽ Itsaso Olasagasti ◽ Anne-Lise Giraud

Abstract: Speech comprehension requires segmenting continuous speech to connect it on-line with discrete linguistic neural representations. This process relies on theta-gamma oscillation coupling, which tracks syllables and encodes them in decipherable neural activity. Speech comprehension also strongly depends on contextual cues predicting speech structure and content. To explore the effects of theta-gamma coupling on bottom-up/top-down dynamics during on-line speech perception, we designed a generative model that can recognize syllable sequences in continuous speech. The model uses theta oscillations to detect syllable onsets and align both gamma-rate encoding activity with syllable boundaries and predictions with speech input. We observed that the model performed best when theta oscillations were used to align gamma units with input syllables, i.e. when bidirectional information flows were coordinated and internal timing knowledge was exploited. This work demonstrates that notions of predictive coding and neural oscillations can usefully be brought together to account for dynamic on-line sensory processing.
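
The full model is a generative, predictive-coding architecture; the toy sketch below illustrates only the theta-based onset detection idea, by band-passing a synthetic amplitude envelope in the theta range and treating rising zero-crossings as candidate syllable onsets. All parameters and the synthetic envelope are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0
t = np.arange(0, 3, 1 / fs)
# Synthetic "speech envelope": roughly 5 syllables per second plus noise.
envelope = (0.5 * (1 + np.sin(2 * np.pi * 5 * t))
            + 0.1 * np.random.default_rng(2).standard_normal(t.size))

sos = butter(4, [3, 8], btype="band", fs=fs, output="sos")   # theta band
theta = sosfiltfilt(sos, envelope)

# Rising zero-crossings of the theta-filtered envelope as candidate onsets.
onsets = np.flatnonzero((theta[:-1] < 0) & (theta[1:] >= 0)) / fs
print(np.round(onsets, 3))   # expect onsets roughly every 0.2 s
```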


Author(s): Greta Kaufeld ◽ Hans Rutger Bosker ◽ Phillip M. Alday ◽ Antje S. Meyer ◽ Andrea E. Martin

Abstract: Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Zoefel & VanRullen, 2015). But it is unclear what information (e.g., timing, rhythm, or content) the brain utilizes to generate linguistic structure and meaning beyond the information that is present in the physical stimulus. We used electroencephalography (EEG) to investigate whether oscillations are modulated by linguistic content over and above the speech stimulus' rhythmicity and temporal distribution. We manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, and controlled for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers of all genders listened to naturally spoken Dutch sentences, jabberwocky controls with a sentence-like prosodic rhythm and morphemes, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: phase MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical (1.9-2.8 Hz) timescales, suggesting that the delta band is modulated by lexically driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes the phase of neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) in which oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.

Significance Statement: Biological systems like the brain encode their environment not only by reacting in a series of stimulus-driven responses, but by combining stimulus-driven information with endogenous, internally generated, inferential knowledge and meaning. Understanding language from speech is the human benchmark for this. Much research focuses on the purely stimulus-driven response, but here we focus on the goal of language behavior: conveying structure and meaning. To that end, we use naturalistic stimuli that contrast acoustic-prosodic and lexical-semantic information to show that, during spoken language comprehension, oscillatory modulations reflect computations related to inferring structure and meaning from the acoustic signal. Our experiment provides the first evidence to date that compositional structure and meaning organize the oscillatory response, above and beyond acoustic and lexical controls.
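
As a hedged illustration of the phase mutual-information measure used here, the sketch below filters a simulated stimulus feature and a simulated EEG channel at the phrasal timescale (0.8-1.1 Hz), extracts their phases, and computes MI with a simple binned estimator. The estimator, band edges, and signals are illustrative assumptions and simpler than the pipeline used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_phase(x, lo, hi, fs, order=2):
    """Instantaneous phase of x within a narrow band."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def binned_mi(x, y, bins=8):
    """Histogram-based mutual information (nats) between two signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
stimulus = np.sin(2 * np.pi * 1.0 * t) + 0.2 * rng.standard_normal(t.size)
eeg = np.sin(2 * np.pi * 1.0 * t + 0.6) + 1.0 * rng.standard_normal(t.size)

mi = binned_mi(band_phase(stimulus, 0.8, 1.1, fs),
               band_phase(eeg, 0.8, 1.1, fs))
print(f"phrasal-timescale phase MI: {mi:.3f} nats")
```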

