Characterizing Cochlear Implant Artefact Removal from EEG Recordings Using a Real Human Model

MethodsX ◽  
2021 ◽  
pp. 101369
Author(s):  
Jaime A. Undurraga ◽  
Lindsey Van Yper ◽  
Manohar Bance ◽  
David McAlpine ◽  
Deborah Vickers
1993 ◽  
Vol 72 (7) ◽  
pp. 452-459 ◽  
Author(s):  
Charles M. Luetje ◽  
Sarah Jo Mediavilla ◽  
Lisa L. Geier

The precise electrophysiologic mechanism for sudden sensorineural auditory-vestibular loss has yet to be defined, and no human model exists for this idiopathic phenomenon. A 67-year-old cochlear implant (CI) patient experienced what could be termed a “typical” acute sudden auditory-vestibular loss. Vestibular and CI electrical psychophysical changes were monitored over a 22-month period. Once the acute vestibular problems diminished, CI electrical parameters returned to near pre-episode levels. Some improvement occurred in rotational chair phase lag and asymmetry. While improving, platform posturography continued to show difficulty performing sensory organization tests V and VI. These clinical findings may imply that the ganglion cell and neuronal populations are responsible for the auditory findings in sudden auditory-vestibular loss. Furthermore, a CI patient may serve as an ideal human model for further study of this phenomenon, should it occur.


2019 ◽  
Vol 9 (12) ◽  
pp. 347 ◽  
Author(s):  
Mihaly Benda ◽  
Ivan Volosyak

Brain–computer interfaces (BCIs) measure brain activity and translate it to control computer programs or external devices. However, the activity generated during BCI use makes measurements for objective fatigue evaluation very difficult, and the situation is further complicated by movement artefacts. BCI performance could be increased if an online method existed to measure fatigue objectively and accurately. A novel automatic online artefact removal technique is used to filter out movement artefacts while BCI users are moving. This paper investigates the effects of this filter on BCI performance and, in particular, on peak frequency detection during BCI use. A successful peak alpha frequency measurement can lead to a more accurate determination of objective user fatigue. Fifteen subjects performed various imaginary and actual movements in separate tasks, while fourteen electroencephalography (EEG) electrodes were used. Afterwards, a steady-state visual evoked potential (SSVEP)-based BCI speller was used, and the users were instructed to perform various movements. An offline curve-fitting method was used for alpha peak detection to assess the effect of the artefact filtering. The filter improved peak detection, finding 10.91% and 9.68% more alpha peaks during simple EEG recordings and BCI use, respectively. As expected, BCI performance deteriorated with movements, and also with artefact removal. Average information transfer rates (ITRs) were 20.27 bit/min, 16.96 bit/min, and 14.14 bit/min for (1) the movement-free, (2) the moving and unfiltered, and (3) the moving and filtered scenarios, respectively.
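ITR figures like those above are typically computed with the standard Wolpaw formula, which combines the number of selectable targets, the classification accuracy, and the selection rate. The abstract does not state which definition the authors used, so the sketch below assumes the Wolpaw variant:

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Information transfer rate (bit/min) via the Wolpaw formula.

    bits per selection:
        log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))
    where N = number of targets and P = classification accuracy.
    """
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance level: no information transferred
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min
```

For example, a hypothetical 32-target speller at 90% accuracy and 5 selections per minute yields roughly 20 bit/min, in the same range as the movement-free scenario reported above.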


2018 ◽  
Author(s):  
Ben Somers ◽  
Eline Verschueren ◽  
Tom Francart

Objective When listening to speech, the brain tracks the speech envelope. It is possible to reconstruct this envelope from EEG recordings. However, in people who hear using a cochlear implant (CI), the artifacts caused by electrical stimulation of the auditory nerve contaminate the EEG. This causes the decoder to produce an artifact-dominated reconstruction, which does not reflect the neural signal processing. The objective of this study is to develop and validate a method for assessing the neural tracking of the speech envelope in CI users. Approach To obtain EEG recordings free of stimulus artifacts, the electrical stimulation is periodically interrupted. During these stimulation gaps, artifact-free EEG can be sampled and used to train a linear envelope decoder. Different recording conditions were used to characterize the artifacts and their influence on the envelope reconstruction. Main results The present study demonstrates for the first time that neural tracking of the speech envelope can be measured in response to ongoing electrical stimulation. The responses were validated to be truly neural and not affected by stimulus artifact. Significance Besides applications in audiology and neuroscience, the characterization and elimination of stimulus artifacts will enable future EEG studies involving continuous speech in CI users. Measures of neural tracking of the speech envelope reflect interesting properties of the listener’s perception of speech, such as speech intelligibility or attentional state. Successful decoding of neural envelope tracking will open new possibilities to investigate the neural mechanisms of speech perception with a CI.
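A linear envelope decoder of the kind described is commonly a backward (stimulus-reconstruction) model: a regularized regression from time-lagged EEG samples to the speech envelope. The abstract does not give the study's exact decoder or parameters, so the following is a minimal sketch assuming ridge regression, with illustrative lag and regularization values:

```python
import numpy as np

def lagged_design(eeg, n_lags):
    """Stack n_lags consecutive EEG samples per row (all channels).

    eeg: (n_samples, n_channels). Row i of the result covers samples
    i .. i+n_lags-1, so the envelope at time i is predicted from EEG at
    the same time and shortly after (the neural response lags the stimulus).
    """
    n_samples, n_channels = eeg.shape
    n_rows = n_samples - n_lags + 1
    X = np.zeros((n_rows, n_lags * n_channels))
    for lag in range(n_lags):
        X[:, lag * n_channels:(lag + 1) * n_channels] = eeg[lag:lag + n_rows]
    return X

def train_envelope_decoder(eeg, envelope, n_lags=16, ridge=1.0):
    """Ridge-regularized backward model mapping lagged EEG -> envelope."""
    X = lagged_design(eeg, n_lags)
    y = envelope[:X.shape[0]]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    return w

def reconstruct_envelope(eeg, w, n_lags=16):
    """Apply trained decoder weights to (new) EEG."""
    return lagged_design(eeg, n_lags) @ w
```

In the setup described above, only the artifact-free samples from the stimulation gaps would enter `eeg`; reconstruction quality is then typically scored as the correlation between the reconstructed and actual envelopes.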


2021 ◽  
Vol 11 (4) ◽  
pp. 691-705
Author(s):  
Andy J. Beynon ◽  
Bart M. Luijten ◽  
Emmanuel A. M. Mylanus

Electrically evoked auditory potentials have been used to predict auditory thresholds in patients with a cochlear implant (CI). However, with the exception of electrically evoked compound action potentials (eCAP), conventional extracorporeal EEG recording devices are still needed. Until now, built-in (intracorporeal) back-telemetry options have been limited to eCAPs; intracorporeal recording of auditory responses beyond the cochlea is still lacking. This study describes the feasibility of obtaining longer latency cortical responses by concatenating interleaved short recording time windows used for eCAP recordings. Extracochlear reference electrodes were dedicated to record cortical responses, while intracochlear electrodes were used for stimulation, enabling intracorporeal telemetry (i.e., without an EEG device) to assess higher cortical processing in CI recipients. Simultaneous extra- and intra-corporeal recordings showed that it is feasible to obtain intracorporeal slow vertex potentials with a CI similar to those obtained by conventional extracorporeal EEG recordings. Our data demonstrate a proof of concept of closed-loop intracorporeal auditory cortical response telemetry (ICT) with a cochlear implant device. This research breaks new ground for next-generation CI devices to assess higher cortical neural processing based on acute or continuous EEG telemetry, enabling individualized automatic and/or adaptive CI fitting with only a CI.
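Concatenating interleaved short eCAP-style recording windows into long epochs, then averaging across epochs to pull out the slow cortical response, could be sketched as follows. The buffer sizes and grouping are illustrative assumptions, not the implanted device's actual telemetry format:

```python
import numpy as np

def concatenate_and_average(buffers, windows_per_epoch):
    """Stitch short telemetry buffers into long epochs and average them.

    buffers: (n_buffers, window_len) array of consecutive short recording
             windows (e.g. eCAP-style buffers acquired back-to-back).
             Groups of `windows_per_epoch` consecutive buffers are
             concatenated into one long epoch; averaging across epochs
             then pulls the slow cortical response out of the noise.
    """
    n_buffers, window_len = buffers.shape
    n_epochs = n_buffers // windows_per_epoch
    epochs = buffers[:n_epochs * windows_per_epoch].reshape(
        n_epochs, windows_per_epoch * window_len)
    return epochs.mean(axis=0)
```

With, say, 50 ms buffers and 20 buffers per epoch, each averaged epoch spans one second, which is long enough to capture slow vertex potentials that a single short eCAP window cannot.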


2021 ◽  
Vol 15 ◽  
Author(s):  
Kelli McGuire ◽  
Gabrielle M. Firestone ◽  
Nanhua Zhang ◽  
Fawen Zhang

One of the biggest challenges facing cochlear implant (CI) users is the highly variable hearing outcome of implantation across patients. Since speech perception requires the detection of various dynamic changes in acoustic features (e.g., frequency, intensity, timing) in speech sounds, it is critical to examine the ability to detect within-stimulus acoustic changes in CI users. The primary objective of this study was to examine the auditory event-related potential (ERP) evoked by within-stimulus frequency changes (F-changes), one type of the acoustic change complex (ACC), in adult CI users, and its correlation to speech outcomes. Twenty-one adult CI users (29 individual CI ears) were tested with psychoacoustic frequency change detection tasks, speech tests including the Consonant-Nucleus-Consonant (CNC) word recognition, Arizona Biomedical Sentence Recognition in quiet and noise (AzBio-Q and AzBio-N), and Digit-in-Noise (DIN) tests, and electroencephalographic (EEG) recordings. The stimuli for the psychoacoustic tests and EEG recordings were pure tones at three different base frequencies (0.25, 1, and 4 kHz) that contained an F-change at the midpoint of the tone. Results showed that the frequency change detection threshold (FCDT), ACC N1′ latency, and P2′ latency did not differ across frequencies (p > 0.05). ACC N1′-P2′ amplitude was significantly larger for 0.25 kHz than for the other base frequencies (p < 0.05). The mean N1′ latency across the three base frequencies was negatively correlated with CNC word recognition (r = −0.40, p < 0.05) and CNC phoneme scores (r = −0.40, p < 0.05), and positively correlated with mean FCDT (r = 0.46, p < 0.05). The P2′ latency was positively correlated with DIN (r = 0.47, p < 0.05) and mean FCDT (r = 0.47, p < 0.05). There was no statistically significant correlation between N1′-P2′ amplitude and speech outcomes (all ps > 0.05).
Results of this study indicated that variability in CI speech outcomes assessed with the CNC, AzBio-Q, and DIN tests can be partially explained (approximately 16–21%) by the variability of cortical sensory encoding of F-changes reflected by the ACC.
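The 16–21% figure follows from squaring the reported correlation coefficients of roughly 0.40–0.46 (the coefficient of determination, r², is the proportion of variance explained). A quick check:

```python
# Variance explained (r^2) for the reported ACC-speech correlations
for label, r in [("N1' latency vs. CNC words", 0.40),
                 ("N1' latency vs. mean FCDT", 0.46)]:
    print(f"{label}: r = {r:.2f} -> r^2 = {r * r:.0%}")
```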


2020 ◽  
Vol 63 (12) ◽  
pp. 4325-4326 ◽  
Author(s):  
Hartmut Meister ◽  
Katrin Fuersen ◽  
Barbara Streicher ◽  
Ruth Lang-Roth ◽  
Martin Walger

Purpose The purpose of this letter is to compare the results of Skuk et al. (2020) with those of Meister et al. (2016) and to point to a potential general influence of stimulus type. Conclusion Our conclusion is that presenting sentences may give cochlear implant recipients the opportunity to use timbre cues for voice perception. This might not be the case when presenting brief and sparse stimuli, such as consonant–vowel–consonant syllables or single words, which were applied in the majority of studies.


Author(s):  
Martin Chavant ◽  
Alexis Hervais-Adelman ◽  
Olivier Macherey

Purpose An increasing number of individuals with residual or even normal contralateral hearing are being considered for cochlear implantation. It remains unknown whether the presence of contralateral hearing is beneficial or detrimental to their perceptual learning of cochlear implant (CI)–processed speech. The aim of this experiment was to provide a first insight into this question using acoustic simulations of CI processing. Method Sixty normal-hearing listeners took part in an auditory perceptual learning experiment. Each subject was randomly assigned to one of three groups of 20 referred to as NORMAL, LOWPASS, and NOTHING. The experiment consisted of two test phases separated by a training phase. In the test phases, all subjects were tested on recognition of monosyllabic words passed through a six-channel “PSHC” vocoder presented to a single ear. In the training phase, which consisted of listening to a 25-min audio book, all subjects were also presented with the same vocoded speech in one ear but the signal they received in their other ear differed across groups. The NORMAL group was presented with the unprocessed speech signal, the LOWPASS group with a low-pass filtered version of the speech signal, and the NOTHING group with no sound at all. Results The improvement in speech scores following training was significantly smaller for the NORMAL than for the LOWPASS and NOTHING groups. Conclusions This study suggests that the presentation of normal speech in the contralateral ear reduces or slows down perceptual learning of vocoded speech but that an unintelligible low-pass filtered contralateral signal does not have this effect. Potential implications for the rehabilitation of CI patients with partial or full contralateral hearing are discussed.
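The six-channel "PSHC" (pulse-spreading harmonic complex) vocoder used in the study is a specific design whose details the abstract does not give. As a rough illustration of channel vocoding in general, here is a simplified noise-carrier vocoder, not the PSHC implementation: band-split the signal, extract each band's envelope, and re-impose it on a band-limited noise carrier. All parameters are illustrative:

```python
import numpy as np

def noise_vocoder(signal, fs, n_channels=6, fmin=100.0, fmax=8000.0):
    """Simplified noise-carrier channel vocoder (illustrative, not PSHC)."""
    # Logarithmically spaced analysis band edges
    edges = np.logspace(np.log10(fmin), np.log10(fmax), n_channels + 1)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        # Band-pass the input via FFT masking
        band = np.fft.irfft(spec * band_mask, n=signal.size)
        # Envelope: rectification + ~30 Hz moving-average smoothing
        env = np.abs(band)
        k = max(1, int(fs / 30))
        env = np.convolve(env, np.ones(k) / k, mode="same")
        # Noise carrier limited to the same band, modulated by the envelope
        carrier = rng.standard_normal(signal.size)
        carrier = np.fft.irfft(np.fft.rfft(carrier) * band_mask,
                               n=signal.size)
        out += env * carrier
    return out
```

The design choice that matters for this experiment is that the vocoder discards temporal fine structure and keeps only per-band envelopes, which is what makes the output degraded in a CI-like way while remaining partially intelligible after training.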

