Neural tracking of the speech envelope in cochlear implant users

2018
Author(s): Ben Somers, Eline Verschueren, Tom Francart

Abstract
Objective: When listening to speech, the brain tracks the speech envelope. It is possible to reconstruct this envelope from EEG recordings. However, in people who hear using a cochlear implant (CI), the artifacts caused by electrical stimulation of the auditory nerve contaminate the EEG. This causes the decoder to produce an artifact-dominated reconstruction, which does not reflect the neural signal processing. The objective of this study is to develop and validate a method for assessing neural tracking of the speech envelope in CI users.
Approach: To obtain EEG recordings free of stimulus artifacts, the electrical stimulation is periodically interrupted. During these stimulation gaps, artifact-free EEG can be sampled and used to train a linear envelope decoder. Different recording conditions were used to characterize the artifacts and their influence on the envelope reconstruction.
Main results: The present study demonstrates for the first time that neural tracking of the speech envelope can be measured in response to ongoing electrical stimulation. The responses were validated to be truly neural and not affected by stimulus artifact.
Significance: Besides applications in audiology and neuroscience, the characterization and elimination of stimulus artifacts will enable future EEG studies involving continuous speech in CI users. Measures of neural tracking of the speech envelope reflect interesting properties of the listener's perception of speech, such as speech intelligibility or attentional state. Successful decoding of neural envelope tracking will open new possibilities to investigate the neural mechanisms of speech perception with a CI.
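The linear envelope decoder mentioned above is typically a backward model that maps time-lagged EEG to the speech envelope. The following is a minimal, generic ridge-regression sketch of such a decoder; the lag range, regularization strength, and random toy data are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def lagged_eeg(eeg, lags):
    """Design matrix of time-lagged EEG: column block i pairs eeg[t + lags[i], :]
    with the envelope at time t."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, -lag, axis=0)
        if lag > 0:
            shifted[-lag:] = 0      # zero out samples that wrapped around
        elif lag < 0:
            shifted[:-lag] = 0
        X[:, i * n_channels:(i + 1) * n_channels] = shifted
    return X

def train_decoder(eeg, envelope, lags, alpha=1e2):
    """Ridge-regression decoder mapping lagged EEG to the speech envelope."""
    X = lagged_eeg(eeg, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)

def reconstruct_envelope(eeg, weights, lags):
    return lagged_eeg(eeg, lags) @ weights

# Toy example: 64-channel EEG at 64 Hz, decoder using 0-250 ms lags.
fs = 64
rng = np.random.default_rng(0)
eeg = rng.standard_normal((fs * 60, 64))
envelope = rng.standard_normal(fs * 60)
lags = range(0, int(0.25 * fs) + 1)
w = train_decoder(eeg, envelope, lags)
env_hat = reconstruct_envelope(eeg, w, lags)
print(f"reconstruction correlation: {np.corrcoef(env_hat, envelope)[0, 1]:.3f}")
```

In practice such a decoder would be trained on artifact-free segments (here, the stimulation gaps) and evaluated by the correlation between the reconstructed and true envelopes on held-out data.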

2021
Author(s): Nathaniel J Zuk, Jeremy W Murphy, Richard B Reilly, Edmund C Lalor

Abstract
The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the processing of higher-order features and one's cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, speech envelope tracking at low frequencies, below 1 Hz, was uniquely associated with increased weighting over parietal channels. Our results highlight the importance of low-frequency speech tracking and its origin from speech-specific processing in the brain.
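Frequency-constrained reconstruction requires restricting the analysis to narrow bands of the envelope spectrum. The sketch below shows one generic way to band-limit a broadband Hilbert envelope before decoding; the band edges, filter order, sampling rate, and toy signal are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def broadband_envelope(audio):
    """Amplitude envelope via the Hilbert transform."""
    return np.abs(hilbert(audio))

def bandpass(x, low, high, fs, order=3):
    """Zero-phase band-pass filter between `low` and `high` Hz."""
    b, a = butter(order, [low, high], btype='bandpass', fs=fs)
    return filtfilt(b, a, x)

fs = 128                                   # envelope sampling rate (assumed)
rng = np.random.default_rng(1)
audio = rng.standard_normal(fs * 60)       # stand-in for a downsampled stimulus
env = broadband_envelope(audio)

# Octave-spaced bands below 8 Hz; each band-limited envelope would then be
# reconstructed from the EEG and scored separately (e.g., by correlation).
bands = [(0.25, 0.5), (0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]
band_envelopes = {f"{lo}-{hi} Hz": bandpass(env, lo, hi, fs) for lo, hi in bands}
```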


2021
Author(s): Waldo Nogueira, Hanna Dolhopiatenko

Objectives: Electroencephalography (EEG) can be used to decode selective attention in cochlear implant (CI) users. This work investigates whether selective attention to an attended speech source in the presence of a concurrent speech source can predict speech understanding in CI users.
Approach: CI users were instructed to attend to one of two speech streams while EEG was recorded. Both speech streams were presented to the same ear and at different signal-to-interference ratios (SIRs). Speech envelope reconstruction of the to-be-attended speech from the EEG was obtained by training decoders using regularized least squares. The correlation coefficient between the reconstructed and the attended speech stream (ρ_A(SIR)) and between the reconstructed and the unattended speech stream (ρ_U(SIR)) was computed at each SIR.
Main results: Selective attention decoding in CI users is possible even if both speech streams are presented monaurally. A significant effect of SIR was observed on the correlation coefficient to the attended signal ρ_A(SIR), as well as on the difference correlation coefficients ρ_A(SIR) - ρ_U(SIR) and ρ_A(SIR) - ρ_U(-SIR), but not on the unattended correlation coefficient ρ_U(SIR). Finally, the results show a significant correlation across subjects between speech understanding performance and the correlation coefficients ρ_A(SIR) - ρ_U(SIR) or -ρ_U(SIR). Moreover, the difference correlation coefficient ρ_A(SIR) - ρ_U(-SIR), which is less affected by the CI electrical artifact, showed a correlation trend with speech understanding performance.
Significance: Selective attention decoding in CI users is possible; however, care needs to be taken with the CI artifact and the speech material used to train the decoders. Even if only a small correlation trend between selective attention decoding and speech understanding was observed, these results are important for the future development of objective speech understanding measures for CI users.
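Given a reconstructed envelope, the attention-decoding step reduces to comparing its correlation with the two competing streams. A minimal sketch of that scoring step is shown below, assuming the reconstruction comes from a decoder such as the one sketched earlier; the toy data stand in for real envelopes.

```python
import numpy as np

def attention_scores(reconstructed, attended_env, unattended_env):
    """Correlations of the reconstructed envelope with both streams; a
    positive difference indicates the attended stream was decoded."""
    rho_a = np.corrcoef(reconstructed, attended_env)[0, 1]
    rho_u = np.corrcoef(reconstructed, unattended_env)[0, 1]
    return rho_a, rho_u, rho_a - rho_u

rng = np.random.default_rng(2)
attended = rng.standard_normal(1000)
unattended = rng.standard_normal(1000)
reconstructed = attended + 0.5 * rng.standard_normal(1000)  # toy reconstruction
rho_a, rho_u, diff = attention_scores(reconstructed, attended, unattended)
print(f"rho_A = {rho_a:.2f}, rho_U = {rho_u:.2f}, difference = {diff:.2f}")
```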


2020
Author(s): Tom Gajęcki, Waldo Nogueira

Normal-hearing listeners are able to exploit the audio input perceived by each ear to extract target information in challenging listening scenarios. Bilateral cochlear implant (BiCI) users, however, do not benefit as much from bilateral input as normal-hearing listeners do. In this study, we investigate the effect that bilaterally linked band selection, bilaterally synchronized electrical stimulation, and ideal binary masks (IdBMs) have on the ability of 10 BiCI users to understand speech in background noise. Performance was assessed with a sentence-based speech intelligibility test in a scenario where the speech signal was presented from the front and the interfering noise from one side. The linked band selection relies on the ear with the more favorable signal-to-noise ratio (SNR), which selects the bands to be stimulated for both CIs. Results show that adding a second CI to the side with the more favorable SNR provided no benefit for any of the tested bilateral conditions. However, when using both devices, speech perception results show that performing linked band selection, in addition to delivering bilaterally synchronized electrical stimulation, leads to an improvement compared to standard clinical setups. Moreover, the outcomes of this work show that by applying IdBMs, subjects achieve speech intelligibility scores similar to those obtained without background noise.
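An ideal binary mask keeps only the time-frequency units where the local speech-to-noise ratio exceeds a threshold, using oracle knowledge of the separate signals. The sketch below is a generic broadband version of that idea; the STFT settings and the 0 dB local criterion are illustrative assumptions, and how the masking is integrated with the CI processing chain is not specified here.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, lc_db=0.0, nperseg=512):
    """Mask the mixture speech + noise, keeping units where the local
    speech-to-noise ratio exceeds `lc_db` decibels."""
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    local_snr_db = 20 * np.log10(np.abs(S) + 1e-12) - 20 * np.log10(np.abs(N) + 1e-12)
    mask = local_snr_db > lc_db
    _, masked = istft((S + N) * mask, fs=fs, nperseg=nperseg)
    return masked, mask

fs = 16000
rng = np.random.default_rng(3)
speech = rng.standard_normal(fs)        # stand-in for clean speech
noise = 0.5 * rng.standard_normal(fs)   # stand-in for the interferer
enhanced, mask = ideal_binary_mask(speech, noise, fs)
```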


2018
Author(s): Eline Verschueren, Jonas Vanthornhout, Tom Francart

Abstract
Objectives: Recently, an objective measure of speech intelligibility based on brain responses derived from the electroencephalogram (EEG) has been developed using isolated Matrix sentences as a stimulus. We investigated whether this objective measure of speech intelligibility can also be used with natural speech as a stimulus, as this would be beneficial for clinical applications.
Design: We recorded the EEG in 19 normal-hearing participants while they listened to two types of stimuli: Matrix sentences and a natural story. Each stimulus was presented at different levels of speech intelligibility by adding speech-weighted noise. Speech intelligibility was assessed in two ways for both stimuli: (1) behaviorally and (2) objectively, by reconstructing the speech envelope from the EEG using a linear decoder and correlating it with the acoustic envelope. We also calculated temporal response functions (TRFs) to investigate the temporal characteristics of the brain responses in the EEG channels covering different brain areas.
Results: For both stimulus types, the correlation between the speech envelope and the reconstructed envelope increased with increasing speech intelligibility. In addition, correlations were higher for the natural story than for the Matrix sentences. Similar to the linear decoder analysis, TRF amplitudes increased with increasing speech intelligibility for both stimuli. Remarkably, although speech intelligibility remained unchanged between the no-noise and +2.5 dB SNR conditions, neural speech processing was affected by the addition of this small amount of noise: TRF amplitudes across the entire scalp decreased between 0 and 150 ms, while amplitudes between 150 and 200 ms increased in the presence of noise. TRF latency changes as a function of speech intelligibility appeared to be stimulus specific: the latency of the prominent negative peak in the early responses (50-300 ms) increased with increasing speech intelligibility for the Matrix sentences, but remained unchanged for the natural story.
Conclusions: These results show (1) the feasibility of natural speech as a stimulus for the objective measure of speech intelligibility, (2) that neural tracking of speech is enhanced using a natural story compared to Matrix sentences, and (3) that noise and the stimulus type can change the temporal characteristics of the brain responses. These results might reflect the integration of incoming acoustic features and top-down information, suggesting that the choice of stimulus has to be considered based on the intended purpose of the measurement.
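The temporal response functions mentioned above are forward models that map the time-lagged stimulus envelope to each EEG channel, typically estimated with regularized least squares. A minimal generic sketch follows, with the lag range, regularization strength, and toy data as illustrative assumptions.

```python
import numpy as np

def lagged_stimulus(env, lags):
    """Design matrix with one column per lag; column i holds env delayed by lags[i] samples."""
    X = np.zeros((len(env), len(lags)))
    for i, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, i] = env[:len(env) - lag]
        else:
            X[:lag, i] = env[-lag:]
    return X

def estimate_trf(env, eeg, lags, alpha=1e2):
    """Ridge solution of shape (n_lags, n_channels): one TRF per channel."""
    X = lagged_stimulus(env, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)

fs = 64
rng = np.random.default_rng(4)
env = rng.standard_normal(fs * 120)               # toy stimulus envelope
eeg = rng.standard_normal((fs * 120, 32))         # toy 32-channel EEG
lags = range(int(-0.1 * fs), int(0.4 * fs) + 1)   # -100 ms to 400 ms
trf = estimate_trf(env, eeg, lags)                # per-channel lag profiles (amplitudes, latencies)
```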


2016, Vol 127 (2), pp. 1752-1754
Author(s): Marion Vincent, Olivier Rossel, Bénédicte Poulin-Charronnat, Guillaume Herbet, Mitsuhiro Hayashibe, ...

2019, Vol 29 (02), pp. 1850038
Author(s): Arturo Martínez-Rodrigo, Beatriz García-Martínez, Raúl Alcaraz, Pascual González, Antonio Fernández-Caballero

Automatic identification of negative stress is an unresolved challenge that has received great attention in the last few years. Many studies have analyzed electroencephalographic (EEG) recordings to gain new insights into how the brain reacts to both short- and long-term stressful stimuli. Although most of them have considered only linear methods, the heterogeneity and complexity of the brain have recently motivated an increasing use of nonlinear metrics. Nonetheless, brain dynamics reflected in EEG recordings often exhibit a multiscale nature, and no study has yet addressed this aspect. Hence, in this work two nonlinear indices quantifying the regularity and predictability of time series across several time scales are studied for the first time to discern between visually elicited emotional states of calmness and negative stress. The results revealed a maximum discriminant ability of 86.35% at the second time scale, suggesting that brain dynamics triggered by negative stress can be assessed more clearly after removal of some fast temporal oscillations. Moreover, the two metrics also provided complementary information for some brain areas.
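A common way to capture the multiscale nature mentioned above is to coarse-grain the signal at increasing time scales and compute an entropy estimate at each scale. The sketch below follows that generic recipe with sample entropy; it is not necessarily the exact pair of indices used in the study, and the embedding dimension, tolerance, and toy data are assumptions.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: negative log of the conditional probability that
    sequences matching for m points also match for m + 1 points."""
    r = r_factor * np.std(x)

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(dist <= r) - len(templates)   # exclude self-matches

    b = match_count(m)
    a = match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(5)
eeg_channel = rng.standard_normal(1000)   # stand-in for one EEG channel
for scale in (1, 2, 3):
    print(f"scale {scale}: sample entropy = {sample_entropy(coarse_grain(eeg_channel, scale)):.2f}")
```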


e-Neuroforum, 2015, Vol 21 (1)
Author(s): A. Kral, Thomas Lenarz

Abstract
For the first time in the history of neuroscience, hearing makes it possible to systematically investigate human brain development with and without sensory experience. This is due to the clinical success of the cochlear implant, a neuroprosthesis that can replace the non-functional inner ear. In recent years, auditory neuroscience has investigated the neuronal mechanisms of learning, sensitive developmental periods, and cross-modal reorganization in parallel in humans and animal models, with highly consistent outcomes. We have learned that the brain undergoes a complex adaptation to deafness, both within and outside the auditory system. These adaptations reorganize the brain to cope optimally with deafness, but they interfere negatively with later prosthetic therapy of hearing and eventually close the sensitive developmental periods. The critical nature of sensitive periods is not only a consequence of developmentally reduced synaptic plasticity but also of changes in central integrative functions and cognitive adaptations to deafness.


2018, Vol 11 (3), pp. 306-316
Author(s): Fernando Del Mando Lucchesi, Ana Claudia Moreira Almeida-Verdu, Deisy das Graças de Souza
