Estimation and control using sampling-based Bayesian reinforcement learning

2020 ◽ Vol 5 (1) ◽ pp. 127-135
Author(s): Patrick Slade, Zachary N. Sunberg, Mykel J. Kochenderfer
2009 ◽ Vol 129 (4) ◽ pp. 363-367
Author(s): Tomoyuki Maeda, Makishi Nakayama, Hiroshi Narazaki, Akira Kitamura

Author(s): Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised learning and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback versus anticipatory and adaptive control. Finally, it argues that this framework for translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function but also to enrich engineering solutions at the level of robot learning and control with insights from biology.
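
To make the feedback versus feed-forward distinction concrete, the following is a minimal sketch (not from the chapter) contrasting the two strategies on a toy first-order plant. The plant parameters, the gain k_p, and the function names are illustrative assumptions, not anything specified by the authors.

```python
# Minimal sketch (illustrative, not from the chapter): feedback vs
# feed-forward control of a toy first-order plant  x' = a*x + b*u.
# All parameters, gains, and names below are assumptions for illustration.

a, b, dt = -0.5, 1.0, 0.01   # toy plant dynamics and integration step
target = 1.0                 # desired set point

def step(x, u):
    """One Euler step of the plant x' = a*x + b*u."""
    return x + dt * (a * x + b * u)

def feedback_control(x, k_p=2.0):
    """Feedback: react to the measured error between target and state."""
    return k_p * (target - x)

def feedforward_control():
    """Feed-forward: use an (assumed exact) inverse model of the plant to
    compute the steady-state input that holds x at the target."""
    return -a * target / b

x_fb = x_ff = 0.0
for _ in range(1000):                        # simulate 10 s
    x_fb = step(x_fb, feedback_control(x_fb))
    x_ff = step(x_ff, feedforward_control())

print(f"feedback (proportional) final state: {x_fb:.3f}")   # ~0.8, steady-state error
print(f"feed-forward (inverse model) final:  {x_ff:.3f}")   # ~1.0, exact only if the model is
```

In this toy setting the purely reactive proportional feedback loop corrects disturbances but leaves a steady-state error, while the feed-forward controller is accurate only insofar as its inverse model of the plant is; this contrast is one way to read the chapter's framing of reactive feedback versus anticipatory and adaptive control.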

