CLoSES: A platform for closed-loop intracranial stimulation in humans

Author(s):
Rina Zelmann,
Angelique C. Paulk,
Ishita Basu,
Anish Sarma,
Ali Yousefi,
...

Abstract

Targeted interrogation of brain networks through invasive brain stimulation has become an increasingly important research tool as well as a therapeutic modality. Most work with this emerging capability has focused on open-loop approaches. Closed-loop techniques, however, could improve neuromodulatory therapies and research investigations by optimizing stimulation using neurally informed, personalized targets. In particular, closed-loop direct electrical stimulation tests in humans, performed during semi-chronic electrode implantation in patients with refractory epilepsy, could help answer basic research questions and deepen our understanding of the mechanisms and treatment of many neuropsychiatric diseases.

However, implementing closed-loop systems is challenging. During intracranial epilepsy monitoring, electrodes are implanted exclusively for clinical reasons, so detection and stimulation sites must be participant- and task-specific. In addition, the system must run in parallel with clinical systems, integrate seamlessly with existing setups, and include safety features. A robust yet flexible platform is required to perform different tests in a single participant and to comply with clinical settings.

To investigate closed-loop stimulation for research and therapeutic use, we developed a Closed-Loop System for Electrical Stimulation (CLoSES) that computes neural features which are then used in a decision algorithm to trigger stimulation in near real time. In brief, intracranial EEG signals are acquired and band-pass filtered, and local and network features are continuously computed. If target features are detected (e.g., a feature remains above a preset threshold for a certain duration), stimulation is triggered. CLoSES is also flexible: beyond triggering stimulation on real-time neural features, it incorporates a pipeline in which an encoder/decoder model estimates a hidden cognitive state from those features. Other capabilities include randomly timed stimulation, control over the percentage of biomarker detections that produce stimulation, and safety refractory periods.

CLoSES has been successfully used in twelve patients with implanted depth electrodes in the epilepsy monitoring unit, during cognitive tasks, spindle detection during sleep, and epileptic activity detection. CLoSES provides a flexible platform for implementing a variety of closed-loop experimental paradigms in humans. We anticipate that probing neural dynamics and the interaction between brain states and stimulation responses with CLoSES will lead to novel insights into the mechanisms of normal and pathological brain activity, the discovery and evaluation of potential electrographic biomarkers of neurological and psychiatric disorders, and the development and testing of patient-specific stimulation targets and control signals before implanting a therapeutic device.
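The detect-and-trigger logic described in the abstract (a feature exceeding a preset threshold for a certain duration, followed by a safety refractory period) can be sketched in a few lines. This is a hypothetical illustration, not the CLoSES implementation; the function name `closed_loop_triggers`, the sample-based durations, and the unit-free feature stream are all assumptions.

```python
# Hypothetical sketch of a closed-loop detection rule: stimulation is
# triggered only after the feature stays above threshold for `min_samples`
# consecutive samples, and a refractory period blocks re-triggering.

def closed_loop_triggers(feature, threshold, min_samples, refractory):
    """Return sample indices at which stimulation would be triggered."""
    triggers = []
    above = 0           # consecutive samples above threshold
    blocked_until = -1  # last sample of the refractory window
    for i, x in enumerate(feature):
        above = above + 1 if x > threshold else 0
        if above >= min_samples and i > blocked_until:
            triggers.append(i)
            blocked_until = i + refractory
            above = 0   # require a fresh sustained crossing
    return triggers

# A feature crossing threshold three times; the second crossing falls
# inside the refractory window and is suppressed.
feat = [0, 0, 2, 2, 2, 0, 2, 2, 2, 0, 0, 0, 2, 2, 2]
print(closed_loop_triggers(feat, threshold=1, min_samples=3, refractory=5))
# → [4, 14]
```

In a real system the feature stream would be the band-passed local or network feature computed from the acquired intracranial EEG, and the durations would be expressed in milliseconds at the acquisition rate.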

2019
Author(s):
Greta Tuckute,
Sofie Therese Hansen,
Troels Wesenberg Kjaer,
Lars Kai Hansen

Abstract

Neurofeedback based on real-time brain imaging allows for targeted training of brain activity with demonstrated clinical applications. Rapid technical development of electroencephalography (EEG)-based systems and increasing interest in cognitive training have led to a call for accessible and adaptable software frameworks. Here, we present and outline the core components of a novel open-source neurofeedback framework based on scalp EEG for real-time neuroimaging, state classification, and closed-loop feedback.

The software framework includes real-time signal preprocessing, adaptive artifact rejection, and cognitive state classification from scalp EEG. It is implemented entirely in Python to allow for diverse functionality, high modularity, and easy extension depending on the experimenter's needs.

As a proof of concept, we demonstrate the functionality of the framework by implementing an attention-training paradigm using a consumer-grade, dry-electrode EEG system. Twenty-two participants completed a single neurofeedback training session, with behavioral pre- and post-training sessions, within three consecutive days. We demonstrate a mean decoding error rate of 34.3% (chance = 50%) for subjective attentional states. Cognitive states were thus decoded in real time by continuously updating classification models on recently recorded EEG data, without the need for any EEG recordings prior to the neurofeedback session.

The proposed software framework allows a wide range of users to actively engage in the development of novel neurofeedback tools, accelerating improvements in neurofeedback as a translational and therapeutic tool.
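The idea of decoding states by continuously updating a classifier on recently recorded data, with no pre-session recordings, can be illustrated with a toy online decoder. This is a minimal sketch, not the framework's actual classifier; the nearest-centroid rule, the class labels, and the two-feature windows are assumptions for illustration.

```python
# Hypothetical sketch of an online decoder: class centroids are updated
# from each newly labeled EEG feature window, so classification starts
# with the session itself instead of requiring pre-recorded training data.

class OnlineCentroidDecoder:
    def __init__(self, n_features):
        self.sums = {}    # label -> per-feature running sums
        self.counts = {}  # label -> number of windows seen
        self.n = n_features

    def update(self, features, label):
        """Fold one labeled feature window into the running centroid."""
        s = self.sums.setdefault(label, [0.0] * self.n)
        for j, v in enumerate(features):
            s[j] += v
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, features):
        """Return the label of the nearest centroid (squared distance)."""
        best, best_d = None, float("inf")
        for label, s in self.sums.items():
            c = self.counts[label]
            d = sum((features[j] - s[j] / c) ** 2 for j in range(self.n))
            if d < best_d:
                best, best_d = label, d
        return best

dec = OnlineCentroidDecoder(n_features=2)
dec.update([1.0, 0.0], "attend")   # hypothetical labeled windows
dec.update([0.0, 1.0], "rest")
print(dec.predict([0.9, 0.1]))     # nearest centroid is "attend"
```

A real pipeline would retrain a proper classifier (e.g. regularized logistic regression) on a sliding buffer of artifact-rejected windows; the centroid rule above only conveys the continuously-updated aspect.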


2021
Vol 4 (1)
Author(s):
Miguel Angrick,
Maarten C. Ottenhoff,
Lorenz Diener,
Darius Ivucic,
Gabriel Ivucic,
...

Abstract

Speech neuroprosthetics aim to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals with severely limited means of communication. Recent advances in decoding approaches have led to high-quality reconstructions of acoustic speech from invasively measured neural activity. However, most prior research has used data collected during open-loop experiments with articulated speech, which might not translate directly to imagined speech processes. Here, we present an approach that synthesizes audible speech in real time for both imagined and whispered speech conditions. With a participant implanted with stereotactic depth electrodes, we were able to reliably generate audible speech in real time. The decoding models rely predominantly on frontal activity, suggesting that speech processes have similar representations when vocalized, whispered, or imagined. While the reconstructed audio is not yet intelligible, our real-time synthesis approach represents an essential step toward investigating how patients will learn to operate a closed-loop speech neuroprosthesis based on imagined speech.


2021
Vol 11 (1)
Author(s):
Michael L. Martini,
Aly A. Valliani,
Claire Sun,
Anthony B. Costa,
Shan Zhao,
...

Abstract

Real-time seizure detection is a resource-intensive process, as it requires continuous monitoring of patients on stereoelectroencephalography (SEEG). This study improves real-time seizure detection in patients with drug-resistant epilepsy (DRE) by developing patient-specific deep learning models that use a novel self-supervised dynamic thresholding approach. Deep neural networks were trained on over 2000 h of high-resolution, multichannel SEEG and video recordings from 14 DRE patients. Consensus labels from a panel of epileptologists were used to evaluate model efficacy. Self-supervised dynamic thresholding improved positive predictive value (PPV; difference: 39.0%; 95% CI 4.5–73.5%; Wilcoxon–Mann–Whitney test; N = 14; p = 0.03) with similar sensitivity (difference: 14.3%; 95% CI −21.7 to 50.3%; Wilcoxon–Mann–Whitney test; N = 14; p = 0.42) compared to static thresholds. In some models, training on as little as 10 min of SEEG data yielded robust detection. Cross-testing experiments reduced PPV (difference: 56.5%; 95% CI 25.8–87.3%; Wilcoxon–Mann–Whitney test; N = 14; p = 0.002), while multimodal detection significantly improved sensitivity (difference: 25.0%; 95% CI 0.2–49.9%; Wilcoxon–Mann–Whitney test; N = 14; p < 0.05). Self-supervised dynamic thresholding improved the efficacy of real-time seizure predictions, and multimodal models showed potential to further improve detection. These findings are promising for future deployment in epilepsy monitoring units, enabling real-time seizure detection without annotated data and with only minimal training time for individual patients.
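One plausible reading of a dynamic threshold, sketched below purely as an illustration (the paper's exact rule is not given here), is a detection cutoff that tracks a rolling baseline of recent model scores instead of a fixed value; the function `dynamic_threshold_detect`, the window length, and the mean-plus-k-standard-deviations rule are all assumptions.

```python
# Hypothetical sketch of dynamic thresholding: a score is flagged when it
# exceeds mean + k * std of the previous `window` scores, so the cutoff
# adapts to each patient's recent baseline without labeled data.
from collections import deque
from math import sqrt

def dynamic_threshold_detect(scores, window=5, k=2.0):
    """Return indices of scores that stand out from the rolling baseline."""
    recent = deque(maxlen=window)
    flagged = []
    for i, s in enumerate(scores):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((x - mean) ** 2 for x in recent) / window
            if s > mean + k * sqrt(var):
                flagged.append(i)
        recent.append(s)   # the flagged score also joins the baseline
    return flagged

scores = [1.0, 1.1, 0.9, 1.0, 1.0, 5.0, 1.0, 1.1]
print(dynamic_threshold_detect(scores))  # → [5]
```

A static threshold would apply one fixed cutoff to every patient; the adaptive cutoff above illustrates why patient-specific baselines can raise PPV without per-patient annotation.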


2005
Author(s):
Harry Funk,
Robert Goldman,
Christopher Miller,
John Meisner,
Peggy Wu

Sensors
2019
Vol 19 (23)
pp. 5209
Author(s):
Andrea Gonzalez-Rodriguez,
Jose L. Ramon,
Vicente Morell,
Gabriel J. Garcia,
Jorge Pomares,
...

The main goal of this study is to evaluate how to optimally select the best vibrotactile pattern for use in closed-loop control of upper-limb myoelectric prostheses as feedback on the exerted force. To that end, we assessed both the selection of actuation patterns and the effects of frequency and amplitude parameters on discriminating between different feedback levels. A single vibrotactile actuator was used to deliver the vibrations to the subjects participating in the experiments. The results show no difference between pattern shapes in terms of feedback perception. Similarly, changes in amplitude level did not yield significant improvement compared to changes in frequency. However, decreasing the number of feedback levels increased the accuracy of feedback perception, and subject-specific variation was high for some participants, showing that fine-tuning of the parameters is necessary for real-time application to upper-limb prosthetics. In future work, the effects of training, actuator location, and number of actuators will be assessed. The optimized selection will then be tested in real-time proportional myocontrol of a prosthetic hand.


Author(s):
Fei Zheng,
WenFeng Lu,
Yoke San Wong,
Kelvin Weng Chiong Foong

Dental bone drilling is an inexact and often blind art: the dentist risks damaging the invisible tooth roots, nerves, and critical dental structures such as the mandibular canal and maxillary sinus. This paper presents a haptics-based jawbone drilling simulator for novice surgeons. Through real-time training of tactile sensations based on patient-specific data, it can support improved outcomes and faster procedures. Previously developed drilling simulators usually adopt penalty-based contact force models and often consider only spherical drill bits for simplicity and computational efficiency. In contrast, our simulator is equipped with a more precise force model, adapted from the Voxmap-PointShell (VPS) method, to capture the essential features of the drilling procedure. In addition, the proposed force model can accommodate drill bits of various shapes. To achieve better anatomical accuracy, our oral model was reconstructed from cone-beam CT using a voxel-based method. To enhance real-time response, the parallel computing power of graphics processing units is exploited through careful data-structure design, algorithm parallelization, and graphics-memory utilization. Preliminary results show that the developed system produces appropriate force feedback at different tissue layers.
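The Voxmap-PointShell idea (a voxelized bone model tested against a shell of points on the tool, with each penetrating point contributing a penalty force along its inward normal) can be sketched very roughly as follows. This is a hypothetical, heavily simplified illustration, not the authors' force model; the function `vps_force`, the unit-penetration approximation, and the integer-voxel lookup are all assumptions.

```python
# Hypothetical VPS-style force step: every point of the tool's point
# shell that lies inside an occupied bone voxel adds a penalty force
# along that point's inward normal; the haptic device renders the sum.

def vps_force(shell_points, occupied, stiffness=1.0):
    """shell_points: iterable of ((x, y, z), (nx, ny, nz)) pairs;
    occupied: set of integer voxel coordinates marked as bone.
    Uses a crude unit-penetration depth for every contacting point."""
    fx = fy = fz = 0.0
    for (x, y, z), (nx, ny, nz) in shell_points:
        if (int(x), int(y), int(z)) in occupied:
            fx += stiffness * nx
            fy += stiffness * ny
            fz += stiffness * nz
    return (fx, fy, fz)

occupied = {(0, 0, 0)}                           # one bone voxel
points = [((0.5, 0.5, 0.5), (0.0, 0.0, 1.0)),    # inside the bone voxel
          ((2.5, 0.5, 0.5), (0.0, 0.0, 1.0))]    # in free space
print(vps_force(points, occupied))               # → (0.0, 0.0, 1.0)
```

A faithful VPS implementation would scale each contribution by the actual penetration depth along the normal and run this sum on the GPU for the full point shell at haptic rates (around 1 kHz), which is where the data-structure and parallelization effort described above matters.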

