Classifier Performance in Primary Somatosensory Cortex Towards Implementation of a Reinforcement Learning Based Brain Machine Interface

Author(s):  
David McNiel ◽  
Mohammad Bataineh ◽  
John Choi ◽  
John Hessburg ◽  
Joseph Francis
2019 ◽  
Author(s):  
A. Abbasi ◽  
L. Estebanez ◽  
D. Goueytes ◽  
H. Lassagne ◽  
D. E. Shulz ◽  
...  

Summary: New and improved neuroprosthetics offer great hope for motor-impaired human patients to regain autonomy. One obstacle facing current technologies is that fine motor control requires near-instantaneous somatosensory feedback. The way forward is to artificially recreate the rich, distributed feedback generated by natural movements. Here, we hypothesize that incoming sensory feedback needs to follow biomimetic rules in order to be efficiently integrated by motor circuits. We have developed a rodent closed-loop brain-machine interface in which head-fixed mice were trained to control a virtual cursor by modulating the activity of motor cortex neurons. Artificial feedback, consisting of precise optogenetic stimulation patterns in the primary somatosensory cortex coupled to the motor cortical activity, was provided online to the animal. We found that learning occurred only when the feedback had a topographically biomimetic structure. Shuffling the spatiotemporal organization of the feedback prevented learning of the task. These results suggest that input patterns structured by the body map present in the primary somatosensory cortex of all mammals are essential for sensorimotor processing and constitute a backbone that needs to be considered when optimizing artificial sensory feedback for fine neuroprosthetic control.
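As a rough illustration of the feedback mapping contrasted in this summary, the sketch below turns a cursor position (driven by motor cortex activity) into a spatial stimulation pattern, either through a topography-preserving assignment or a shuffled one. The site count, the one-dimensional cursor, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a 1-D cursor and a fixed grid of stimulation sites.
import numpy as np

N_SITES = 24                                     # assumed number of S1 stimulation sites
topographic_order = np.arange(N_SITES)           # biomimetic: neighbouring positions drive
shuffled_order = np.random.permutation(N_SITES)  # neighbouring sites; shuffled breaks this

def feedback_pattern(cursor_pos, site_order):
    """Return a binary stimulation pattern for one feedback frame.

    cursor_pos: float in [0, 1), normalized cursor position.
    site_order: permutation deciding which cortical site each position drives.
    """
    pattern = np.zeros(N_SITES, dtype=bool)
    active_site = site_order[int(cursor_pos * N_SITES)]
    pattern[active_site] = True
    return pattern

# Same cursor state, two feedback conditions.
biomimetic = feedback_pattern(0.42, topographic_order)
shuffled = feedback_pattern(0.42, shuffled_order)
```

In this toy version, the shuffled condition keeps the overall stimulation statistics identical while destroying the topographic neighbourhood structure that the summary identifies as necessary for learning.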


2013 ◽  
Vol 461 ◽  
pp. 565-569 ◽  
Author(s):  
Fang Wang ◽  
Kai Xu ◽  
Qiao Sheng Zhang ◽  
Yi Wen Wang ◽  
Xiao Xiang Zheng

Brain-machine interfaces (BMIs) decode the cortical neural spikes of paralyzed patients to control external devices for movement restoration. Neuroplasticity induced by performing a relatively complex, multistep task helps improve the performance of a BMI system. Reinforcement learning (RL) allows the BMI system to interact with the environment and learn the task adaptively without a teacher signal, which is better suited to the situation of paralyzed patients. In this work, we proposed applying Q(λ)-learning to multistep goal-directed tasks using the user's neural activity. Neural data were recorded from M1 of a monkey manipulating a joystick in a center-out task. Compared with a supervised learning approach, significant BMI control was achieved, with correct directional decoding in 84.2% and 81% of the trials from naïve states. The results demonstrate that the BMI system was able to complete a task by interacting with the environment, indicating that RL-based methods have the potential to yield more natural BMI systems.
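For readers unfamiliar with Q(λ)-learning, the sketch below shows a generic tabular Watkins-style Q(λ) update with eligibility traces, the kind of RL rule the abstract refers to. The state discretization, action count (eight reach directions), and all hyperparameters are assumptions for illustration, not the decoder used in the study.

```python
# Minimal sketch of tabular Watkins Q(lambda) for a center-out style task,
# assuming states are some discretization of M1 activity and actions are
# candidate cursor directions.
import numpy as np

n_states, n_actions = 64, 8            # assumed discretization and 8 reach directions
alpha, gamma, lam, epsilon = 0.1, 0.9, 0.8, 0.1

Q = np.zeros((n_states, n_actions))    # action-value estimates
E = np.zeros_like(Q)                   # eligibility traces

def choose_action(s):
    """Epsilon-greedy selection over decoded movement directions."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[s]))

def q_lambda_update(s, a, r, s_next, a_next):
    """One Watkins Q(lambda) update after a decoding step with reward r."""
    a_star = int(np.argmax(Q[s_next]))                 # greedy action in next state
    delta = r + gamma * Q[s_next, a_star] - Q[s, a]    # TD error
    E[s, a] += 1.0                                     # accumulate trace for visited pair
    Q[:] += alpha * delta * E                          # propagate credit along the trace
    if a_next == a_star:                               # decay traces while behaving greedily,
        E[:] = gamma * lam * E                         # cut them after an exploratory action
    else:
        E[:] = 0.0
```

The eligibility traces are what make the method suitable for multistep goal-directed trials: reward obtained at the end of a trial is propagated back over the sequence of decoded steps that led to it, without requiring a teacher signal at each step.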


2015 ◽  
Vol 35 (19) ◽  
pp. 7374-7387 ◽  
Author(s):  
B. T. Marsh ◽  
V. S. A. Tarigoppula ◽  
C. Chen ◽  
J. T. Francis

Author(s):  
Jack DiGiovanna ◽  
Babak Mahmoudi ◽  
Jeremiah Mitzelfelt ◽  
Justin C. Sanchez ◽  
Jose C. Principe

2009 ◽  
Vol 56 (1) ◽  
pp. 54-64 ◽  
Author(s):  
J. DiGiovanna ◽  
B. Mahmoudi ◽  
J. Fortes ◽  
J.C. Principe ◽  
J.C. Sanchez

2017 ◽  
Vol 28 (4) ◽  
pp. 873-886 ◽  
Author(s):  
Fang Wang ◽  
Yiwen Wang ◽  
Kai Xu ◽  
Hongbao Li ◽  
Yuxi Liao ◽  
...  
