A Weight Transfer Mechanism for Kernel Reinforcement Learning Decoding in Brain-Machine Interfaces

Author(s): Xiang Zhang, Yiwen Wang
2017, Vol 14 (3), pp. 036016

Author(s): Noeline W Prins, Justin C Sanchez, Abhishek Prasad

Sensors, 2020, Vol 20 (19), pp. 5528
Author(s): Peng Zhang, Lianying Chao, Yuting Chen, Xuan Ma, Weihua Wang, ...

Background: Because of the nonstationarity of neural recordings in intracortical brain–machine interfaces, daily supervised retraining is typically required to maintain decoder performance. This burden can be reduced with a reinforcement learning (RL) based self-recalibrating decoder. However, quickly acquiring new knowledge while preserving good performance remains a challenge for RL-based decoders. Methods: To address this problem, we proposed an attention-gated RL-based algorithm that combines transfer learning, mini-batch training, and a weight updating scheme to accelerate weight updates and avoid over-fitting. The proposed algorithm was tested on intracortical neural data recorded from two monkeys to decode their reaching positions and grasping gestures. Results: The decoding results showed that our proposed algorithm achieved an approximate 20% increase in classification accuracy over the non-retrained classifier, and even exceeded the accuracy of a classifier retrained daily. Moreover, compared with a conventional RL method, our algorithm improved accuracy by approximately 10% and online weight updating speed by approximately 70 times. Conclusions: This paper proposed a self-recalibrating decoder that achieved good, robust decoding performance with fast weight updating, which may facilitate its application in wearable devices and clinical practice.
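
The Methods describe an attention-gated RL decoder that reuses a previous day's weights (transfer learning) and applies reward-gated weight updates in mini-batches. The sketch below is a minimal illustration of that idea, assuming an AGREL-style update in which a binary task reward gates a Hebbian-like update restricted to the selected output class; the class name AGRELDecoder, the network sizes, and the learning rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of an attention-gated RL (AGREL-style) classifier with
# weight transfer and mini-batch updates. All names and hyperparameters
# are illustrative assumptions, not the authors' implementation.

class AGRELDecoder:
    def __init__(self, n_neurons, n_hidden, n_classes, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_neurons))   # input -> hidden
        self.W_out = rng.normal(0, 0.1, (n_classes, n_hidden))  # hidden -> output
        self.lr = lr

    def transfer_from(self, other):
        """Initialize from a previous day's decoder (weight transfer)."""
        self.W_in = other.W_in.copy()
        self.W_out = other.W_out.copy()

    def forward(self, x):
        h = np.tanh(self.W_in @ x)          # hidden activity for one trial
        z = self.W_out @ h
        p = np.exp(z - z.max())
        p /= p.sum()                        # softmax action probabilities
        return h, p

    def update_batch(self, X, y_true, rng=None):
        """Accumulate reward-gated updates over a mini-batch, then apply once."""
        rng = rng or np.random.default_rng()
        dW_in = np.zeros_like(self.W_in)
        dW_out = np.zeros_like(self.W_out)
        n_rewarded = 0
        for x, y in zip(X, y_true):
            h, p = self.forward(x)
            a = rng.choice(len(p), p=p)     # stochastic action selection
            r = 1.0 if a == y else 0.0      # binary reward from the task
            n_rewarded += r
            # Expected-reward baseline: (r - p[a]) makes surprising rewards
            # drive larger changes (the attention-gated learning signal).
            delta = r - p[a]
            # Plasticity restricted to the selected (attended) output unit.
            dW_out[a] += self.lr * delta * h
            # Feedback from the selected unit gates the hidden-layer update.
            fb = self.W_out[a] * (1 - h ** 2)
            dW_in += self.lr * delta * np.outer(fb, x)
        self.W_in += dW_in / len(X)
        self.W_out += dW_out / len(X)
        return n_rewarded / len(X)
```

In this sketch, transfer_from copies a previous day's weights so recalibration starts near a good solution rather than from scratch, and update_batch averages reward-gated updates over a mini-batch before applying them, which is one way to speed up weight updating while limiting over-fitting to any single trial.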

