Leader–Follower Output Synchronization of Linear Heterogeneous Systems With Active Leader Using Reinforcement Learning
2018, Vol. 29(6), pp. 2139–2153
Author(s): Yongliang Yang, Hamidreza Modares, Donald C. Wunsch, Yixin Yin
Automatica, 2016, Vol. 71, pp. 334–341
Author(s): Hamidreza Modares, Subramanya P. Nageshrao, Gabriel A. Delgado Lopes, Robert Babuška, Frank L. Lewis

2004, Vol. 12(2), pp. 71–79
Author(s): Johan Parent, Katja Verbeeck, Jan Lemeire, Ann Nowe, Kris Steenhaut, …

We report on the improvements that can be achieved by applying machine learning techniques, in particular reinforcement learning, to the dynamic load balancing of parallel applications. The applications considered in this paper are coarse-grained, data-intensive applications, which place heavy demands on the hardware interconnect. Synchronization and load balancing in complex, heterogeneous networks require fast, flexible, adaptive algorithms. By viewing a parallel application as a one-state coordination game in the framework of multi-agent reinforcement learning, and by using a recently introduced multi-agent exploration technique, we are able to improve upon the classic job-farming approach. The improvements are achieved with limited computation and communication overhead.
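The one-state coordination game described in the abstract can be sketched as follows: each agent (a job source) repeatedly chooses which worker to send its next batch of jobs to, all agents share a reward that is better when the slowest worker finishes sooner, and there are no state transitions, so only the joint action matters. This is a minimal illustration, not the paper's algorithm: the agent counts, worker speeds, and the plain epsilon-greedy exploration below are all assumptions standing in for the multi-agent exploration technique the abstract refers to.

```python
import random

N_AGENTS = 4               # parallel job sources (assumed value)
N_WORKERS = 3              # compute nodes (assumed value)
SPEEDS = [1.0, 2.0, 4.0]   # relative worker speeds (assumed, heterogeneous)
EPS, ALPHA = 0.1, 0.1      # epsilon-greedy exploration rate, learning rate

# One Q-value per (agent, worker): the game has a single state,
# so each agent only learns a value for each of its actions.
Q = [[0.0] * N_WORKERS for _ in range(N_AGENTS)]

def choose(agent, rng):
    """Epsilon-greedy action selection over workers."""
    if rng.random() < EPS:
        return rng.randrange(N_WORKERS)
    q = Q[agent]
    return q.index(max(q))

def play_round(rng):
    """All agents act simultaneously; the shared reward is the
    negated makespan (completion time of the most loaded worker)."""
    actions = [choose(a, rng) for a in range(N_AGENTS)]
    loads = [actions.count(w) / SPEEDS[w] for w in range(N_WORKERS)]
    reward = -max(loads)
    for a, w in enumerate(actions):
        Q[a][w] += ALPHA * (reward - Q[a][w])
    return reward

rng = random.Random(0)
rewards = [play_round(rng) for _ in range(5000)]

# Rewards lie between -4 (all jobs on the slowest worker) and -0.75
# (the best split given the assumed speeds); the late-round average
# reflects how well the agents have coordinated.
avg_late = sum(rewards[-500:]) / 500
```

The shared reward makes this a pure coordination game: no agent can improve the makespan alone, which is why the choice of exploration strategy (the paper's contribution) matters more here than in single-agent settings.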

