Brain Machine Interfaces
Recently Published Documents


TOTAL DOCUMENTS

501
(FIVE YEARS 112)

H-INDEX

41
(FIVE YEARS 4)

2022 ◽  
Author(s):  
Arunabha Mohan Roy

Electroencephalogram (EEG) based motor imagery (MI) classification is an important aspect of brain-machine interfaces (BMIs), which bridge the neural system and computer devices by decoding brain signals into recognizable machine commands. However, the MI classification task is challenging due to the inherently complex properties, inter-subject variability, and low signal-to-noise ratio (SNR) of EEG signals. To overcome these issues, the current work proposes an efficient multi-scale convolutional neural network (MS-CNN) that extracts distinguishable features from several non-overlapping canonical frequency bands of the EEG signal at multiple scales for MI-BCI classification. Within this framework, discriminative user-specific features are extracted and integrated to improve the accuracy and performance of the CNN classifier. Additionally, different data augmentation methods are implemented to further improve the accuracy and robustness of the model. The model achieves an average classification accuracy of 93.74% and a Cohen's kappa coefficient of 0.92 on the BCI Competition IV-2b dataset, outperforming several baseline and current state-of-the-art EEG-based MI classification models. The proposed algorithm effectively addresses the shortcomings of existing CNN-based EEG-MI classification models and significantly improves classification accuracy. The framework can serve as a starting point for designing efficient and robust real-time human-robot interaction.
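A minimal sketch of the multi-scale idea described above, assuming a band-filtered, multi-branch CNN; this is not the authors' implementation, and the band edges, sampling rate, and layer sizes are illustrative guesses:

```python
# Sketch (not the paper's code): each branch sees one canonical frequency band
# of the same EEG trial; branch features are concatenated before classification.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, sosfiltfilt

BANDS = [(4, 8), (8, 13), (13, 30), (30, 45)]   # theta, mu/alpha, beta, low gamma (assumed)
FS = 250                                        # sampling rate in Hz (assumed)

def band_filter(eeg, low, high, fs=FS):
    """Zero-phase band-pass filter of one trial with shape (channels, samples)."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1).copy()

class BandBranch(nn.Module):
    """Small temporal-then-spatial convolution stack applied to one band."""
    def __init__(self, n_channels, n_features=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, n_features, kernel_size=(1, 25), padding=(0, 12)),   # temporal
            nn.Conv2d(n_features, n_features, kernel_size=(n_channels, 1)),   # spatial
            nn.BatchNorm2d(n_features), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)), nn.Flatten())
    def forward(self, x):                       # x: (batch, 1, channels, samples)
        return self.net(x)

class MultiScaleCNN(nn.Module):
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([BandBranch(n_channels) for _ in BANDS])
        self.classifier = nn.Linear(len(BANDS) * 16 * 8, n_classes)
    def forward(self, band_inputs):             # list of per-band tensors
        feats = [b(x) for b, x in zip(self.branches, band_inputs)]
        return self.classifier(torch.cat(feats, dim=1))

# Toy usage: one random 3-channel, 4-second trial.
trial = np.random.randn(3, 4 * FS)
inputs = [torch.tensor(band_filter(trial, lo, hi), dtype=torch.float32)[None, None]
          for lo, hi in BANDS]
print(MultiScaleCNN()(inputs).shape)            # torch.Size([1, 2])
```

Filtering each branch's input to a separate canonical band lets band-specific temporal patterns be learned independently before the classifier fuses them, which is the gist of the multi-scale feature extraction the abstract describes.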


2022 ◽  
Vol 73 (1) ◽  
pp. 131-158
Author(s):  
Richard A. Andersen ◽  
Tyson Aflalo ◽  
Luke Bashford ◽  
David Bjånes ◽  
Spencer Kellis

Traditional brain–machine interfaces decode cortical motor commands to control external devices. These commands are the product of higher-level cognitive processes, occurring across a network of brain areas, that integrate sensory information, plan upcoming motor actions, and monitor ongoing movements. We review cognitive signals recently discovered in the human posterior parietal cortex during neuroprosthetic clinical trials. These signals are consistent with small regions of cortex having a diverse role in cognitive aspects of movement control and body monitoring, including sensorimotor integration, planning, trajectory representation, somatosensation, action semantics, learning, and decision making. These variables are encoded within the same population of cells using structured representations that bind related sensory and motor variables, an architecture termed partially mixed selectivity. Diverse cognitive signals provide complementary information to traditional motor commands to enable more natural and intuitive control of external devices.
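As a toy illustration of partially mixed selectivity (not taken from the review), the simulation below gives every neuron a structured, nonlinear mixture of two hypothetical task variables, yet both variables remain linearly decodable from the same population:

```python
# Toy sketch: simulated neurons mix two task variables and their interaction,
# so tuning is "mixed" but structured; both variables can still be read out
# from the same cells. All numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 60
target = rng.integers(0, 2, n_trials)        # e.g. left vs right target (assumed)
effector = rng.integers(0, 2, n_trials)      # e.g. hand vs eye movement (assumed)

# Each neuron gets random weights on target, effector, and their interaction.
w_t, w_e, w_int = (rng.normal(0, 1, n_neurons) for _ in range(3))
rates = (np.outer(target, w_t) + np.outer(effector, w_e)
         + np.outer(target * effector, w_int)
         + rng.normal(0, 0.5, (n_trials, n_neurons)))

for name, y in [("target", target), ("effector", effector)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), rates, y, cv=5).mean()
    print(f"{name} decoded from the same population: {acc:.2f}")
```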


Author(s):  
Shaikh Faisal ◽  
Mojtaba Amjadipour ◽  
Kimi Izzo ◽  
James Singer ◽  
Avi Bendavid ◽  
...  

Abstract Brain-machine interfaces are key components for the development of hands-free, brain-controlled devices. Electroencephalogram (EEG) electrodes are particularly attractive for harvesting neural signals in a non-invasive fashion. Here, we explore the use of epitaxial graphene grown on silicon carbide on silicon for detecting EEG signals with high sensitivity. This dry, non-invasive approach exhibits markedly improved skin contact impedance when benchmarked against commercial dry electrodes, as well as superior robustness, allowing prolonged and repeated use, even in highly saline environments. In addition, we report a newly observed phenomenon: surface conditioning of the epitaxial graphene electrodes. Prolonged contact of the epitaxial graphene with the skin electrolytes functionalizes the grain boundaries of the graphene, leading to the formation of a thin surface film of water through physisorption and consequently reducing the contact impedance by more than 75%. This effect is enhanced in highly saline environments and could be further exploited as a pre-conditioning step to improve the performance and reliability of epitaxial graphene sensors.


2021 ◽  
Author(s):  
Sara Cadoni ◽  
Charlie Demene ◽  
Matthieu Provansal ◽  
Diep Nguyen ◽  
Dasha Nelidova ◽  
...  

Remote, precisely controlled activation of the brain is a fundamental challenge in the development of brain-machine interfaces that can provide feasible rehabilitation strategies for neurological disorders. Low-frequency ultrasound stimulation can be used to modulate neuronal activity deep in the brain, but this approach lacks spatial resolution and cellular selectivity and loads the brain with high levels of acoustic energy. Combining the expression of ultrasound-sensitive proteins with ultrasound stimulation (sonogenetic stimulation) can provide cellular selectivity and higher sensitivity, but such strategies have so far been severely limited in spatiotemporal resolution in vivo, precluding their use in real-life applications. We combined the expression of the large-conductance mechanosensitive ion channel (MscL) with millisecond-duration, high-frequency ultrasonic stimulation to activate neurons selectively at relatively high spatiotemporal resolution in the rat retina ex vivo and in the primary visual cortex of rodents in vivo. This spatiotemporal resolution was achieved at low energy levels associated with negligible tissue heating, far below those leading to complications in ultrasound neuromodulation. In an associative learning test, we showed that sonogenetic stimulation of the visual cortex generated light perception. Our findings demonstrate that sonogenetic stimulation is compatible with millisecond pattern presentation for visual restoration at the cortical level. They represent a step towards the precise transfer of information over large distances to cortical and subcortical regions of the brain via an approach that is less invasive than current brain-machine interfaces and has a wide range of applications in neurological disorders.


Author(s):  
Chris Willmott

Transhumanism looks to utilise science and technology to move humans beyond the limitations of their natural form. Recent scientific advances have, for the first time, presented plausible genetic interventions for the directed evolution of humans. In separate developments, electromechanical innovations, including miniaturisation of components and improvements in bio-compatible materials, have seen breakthroughs in brain-machine interfaces (BMIs) that potentiate a cybernetic dimension, in which mechanical devices would be under the direct control of the mind. This article offers insight into the most important of these recent advances, with particular emphasis on genome editing and therapeutic uses of BMIs in which the same technology might be employed for enhancement.


2021 ◽  
Author(s):  
Hung-Yun Lu ◽  
Anil Bollimunta ◽  
Ryan W. Eaton ◽  
John H. Morrison ◽  
Karen A. Moxon ◽  
...  

Author(s):  
Ariel Tankus ◽  
Lior Solomon ◽  
Yotam Aharony ◽  
Achinoam Faust-Socher ◽  
Ido Strauss

Abstract Objective. The goal of this study is to decode the electrical activity of single neurons in the human subthalamic nucleus (STN) to infer the speech features that a person articulated, heard or imagined. We also aim to evaluate the number of subthalamic neurons required for high-accuracy decoding suitable for real-life speech brain-machine interfaces. Approach. We intraoperatively recorded single-neuron activity in the STN of 21 neurosurgical patients with Parkinson's disease undergoing implantation of a deep brain stimulator (DBS) while the patients produced, perceived or imagined the five monophthongal vowel sounds. Our decoder is based on machine learning algorithms that dynamically learn specific features of the speech-related firing patterns. Main results. In an extensive comparison of algorithms, our sparse decoder ("SpaDe"), based on sparse decomposition of the high-dimensional neuronal feature space, outperformed the other algorithms in all three conditions: production, perception and imagery. For speech production, SpaDe predicted all vowels correctly (accuracy: 100%; chance level: 20%). For perception, accuracy was 96%, and for imagery, 88%. The accuracy of SpaDe grew linearly with the number of neurons for the perception data, and even faster for production and imagery. Significance. Our study demonstrates that the information encoded by single neurons in the STN about the production, perception and imagery of speech is suitable for high-accuracy decoding. It is therefore an important step towards brain-machine interfaces for the restoration of speech faculties, which bear enormous potential to alleviate the suffering of completely paralyzed ("locked-in") patients and allow them to communicate with their environment again. Moreover, our research indicates how many subthalamic neurons may be necessary to achieve each level of decoding accuracy, which is of great importance for a neurosurgeon planning the implantation of a speech brain-machine interface.
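The SpaDe algorithm itself is not reproduced here; the following toy stand-in only illustrates the general recipe of sparse decomposition of a high-dimensional neural feature space followed by a linear read-out, evaluated against the 20% chance level for five vowels. All data are simulated and all dimensions are illustrative.

```python
# Toy stand-in (not the authors' SpaDe decoder): learn a sparse code for
# simulated firing-rate features, then classify the five vowels from the codes.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_features, n_vowels = 200, 120, 5        # e.g. firing-rate bins per neuron (assumed)
labels = rng.integers(0, n_vowels, size=n_trials)

# Simulated features: each vowel corresponds to a noisy random template.
templates = rng.normal(0, 1, size=(n_vowels, n_features))
X = templates[labels] + rng.normal(0, 2.0, size=(n_trials, n_features))

decoder = make_pipeline(
    DictionaryLearning(n_components=20, alpha=1.0, random_state=0),  # sparse codes
    LogisticRegression(max_iter=1000),
)
acc = cross_val_score(decoder, X, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance: {1 / n_vowels:.2f})")
```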


2021 ◽  
pp. 545-549
Author(s):  
C. Gaillard ◽  
C. De Sousa ◽  
J. Amengual ◽  
S. Ben Hamed
