MARVEL: Enabling controller load balancing in software-defined networks with multi-agent reinforcement learning

2020 ◽ Vol 177 ◽ pp. 107230 ◽ Author(s): Penghao Sun, Zehua Guo, Gang Wang, Julong Lan, Yuxiang Hu
1995 ◽ Vol 2 ◽ pp. 475-500 ◽ Author(s): A. Schaerf, Y. Shoham, M. Tennenholtz

We study the process of multi-agent reinforcement learning in the context of load balancing in a distributed system, without use of either central coordination or explicit communication. We first define a precise framework in which to study adaptive load balancing, important features of which are its stochastic nature and the purely local information available to individual agents. Given this framework, we show illuminating results on the interplay between basic adaptive behavior parameters and their effect on system efficiency. We then investigate the properties of adaptive load balancing in heterogeneous populations, and address the issue of exploration vs. exploitation in that context. Finally, we show that naive use of communication may not improve, and might even harm, system efficiency.
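
As a rough illustration of the setting this abstract describes, the Python sketch below has each agent choose a resource using only its own observed service times, with no coordination or communication between agents. All identifiers, the efficiency-update rule, and the probabilistic selection rule are assumptions made for illustration, not the paper's exact formulation.

```python
import random

class LocalAgent:
    """One job-submitting agent; it sees only its own observed service times."""

    def __init__(self, n_resources, learning_rate=0.1, greediness=2.0):
        self.efficiency = [1.0] * n_resources    # optimistic initial estimates
        self.lr = learning_rate
        self.greediness = greediness             # exponent controlling exploitation

    def choose_resource(self):
        # Probabilistic selection: weight each resource by its estimated
        # efficiency raised to a "greediness" exponent (higher = more exploitative).
        weights = [e ** self.greediness for e in self.efficiency]
        return random.choices(range(len(weights)), weights=weights)[0]

    def update(self, resource, service_time):
        # Purely local feedback: efficiency ~ inverse of observed service time,
        # tracked with an exponential moving average.
        observed = 1.0 / max(service_time, 1e-9)
        self.efficiency[resource] = (
            (1 - self.lr) * self.efficiency[resource] + self.lr * observed
        )


def simulate(n_agents=20, n_resources=4, steps=500, capacities=(1.0, 2.0, 3.0, 4.0)):
    agents = [LocalAgent(n_resources) for _ in range(n_agents)]
    for _ in range(steps):
        choices = [a.choose_resource() for a in agents]
        load = [choices.count(r) for r in range(n_resources)]
        for agent, r in zip(agents, choices):
            # Service time grows with the load placed on the shared resource.
            agent.update(r, load[r] / capacities[r])
    return [choices.count(r) for r in range(n_resources)]


if __name__ == "__main__":
    print("final load per resource:", simulate())
```

In this toy loop, the greediness exponent and the learning rate play the role of the basic adaptive-behavior parameters whose interplay with system efficiency the paper studies.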


2004 ◽ Vol 12 (2) ◽ pp. 71-79 ◽ Author(s): Johan Parent, Katja Verbeeck, Jan Lemeire, Ann Nowe, Kris Steenhaut, ...

We report on the improvements that can be achieved by applying machine learning techniques, in particular reinforcement learning, to the dynamic load balancing of parallel applications. The applications considered in this paper are coarse-grain, data-intensive applications, which put high pressure on the interconnect of the hardware. Synchronization and load balancing in complex, heterogeneous networks require fast, flexible, adaptive load-balancing algorithms. By viewing a parallel application as a one-state coordination game in the framework of multi-agent reinforcement learning, and by using a recently introduced multi-agent exploration technique, we are able to improve upon the classic job-farming approach. The improvements are achieved with limited computation and communication overhead.
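
To make the one-state coordination-game view more concrete, here is a minimal Python sketch in which each worker treats the size of the work chunk it requests as an action and learns from its own throughput, rather than relying on the fixed dispatch of classic job farming. The chunk sizes, reward definition, epsilon-greedy rule, and all names are illustrative assumptions, not the exploration technique used in the paper.

```python
import random

CHUNK_SIZES = [1, 2, 4, 8, 16]          # candidate actions (assumed)

class WorkerAgent:
    """A worker that learns which chunk size to request, from local reward only."""

    def __init__(self, epsilon=0.1, lr=0.2):
        self.q = [0.0] * len(CHUNK_SIZES)
        self.epsilon = epsilon
        self.lr = lr

    def pick_chunk(self):
        if random.random() < self.epsilon:           # exploration
            return random.randrange(len(CHUNK_SIZES))
        return max(range(len(CHUNK_SIZES)), key=lambda i: self.q[i])

    def learn(self, action, throughput):
        # In a one-state game, Q-learning reduces to a running average of reward.
        self.q[action] += self.lr * (throughput - self.q[action])


def run(workers, total_work=10_000, comm_cost=0.5, speeds=(1.0, 1.5, 2.0, 3.0)):
    remaining = total_work
    while remaining > 0:
        for agent, speed in zip(workers, speeds):
            a = agent.pick_chunk()
            chunk = min(CHUNK_SIZES[a], remaining)
            if chunk == 0:
                continue
            # Time per request = communication overhead + compute time.
            elapsed = comm_cost + chunk / speed
            agent.learn(a, chunk / elapsed)           # reward = items per unit time
            remaining -= chunk


if __name__ == "__main__":
    workers = [WorkerAgent() for _ in range(4)]
    run(workers)
    best = [CHUNK_SIZES[max(range(len(w.q)), key=lambda i: w.q[i])] for w in workers]
    print("learned preferred chunk sizes:", best)
```

Faster workers in this toy setup tend to learn larger chunk sizes, which is the kind of adaptation a fixed job-farming schedule cannot make.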


Author(s): Hao Jiang, Dianxi Shi, Chao Xue, Yajie Wang, Gongju Wang, ...

Author(s): Xiaoyu Zhu, Yueyi Luo, Anfeng Liu, Md Zakirul Alam Bhuiyan, Shaobo Zhang

2021 ◽ Vol 11 (11) ◽ pp. 4948 ◽ Author(s): Lorenzo Canese, Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Daniele Giardino, ...

In this review, we present an analysis of the most widely used multi-agent reinforcement learning algorithms. Starting from single-agent reinforcement learning algorithms, we focus on the most critical issues that must be taken into account in their extension to multi-agent scenarios. The analyzed algorithms are grouped according to their features. We present a detailed taxonomy of the main multi-agent approaches proposed in the literature, focusing on their related mathematical models. For each algorithm, we describe the possible application fields, while pointing out its pros and cons. The described multi-agent algorithms are compared in terms of the most important characteristics for multi-agent reinforcement learning applications, namely nonstationarity, scalability, and observability. We also describe the most common benchmark environments used to evaluate the performance of the considered methods.
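
One of the issues the review singles out, nonstationarity, can be shown with a tiny example: independent learners that treat each other as part of the environment. The Python sketch below (payoffs, hyperparameters, and names are illustrative assumptions, not taken from the review) runs two independent Q-learners on a 2x2 coordination game; the reward each agent observes for the same action drifts as the other agent's policy changes.

```python
import random

# Shared-reward 2x2 coordination game: agents are rewarded only when they match.
PAYOFF = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}

def epsilon_greedy(q, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(2)
    return max(range(2), key=lambda a: q[a])

def train(episodes=5000, lr=0.1):
    q1, q2 = [0.0, 0.0], [0.0, 0.0]
    for _ in range(episodes):
        a1, a2 = epsilon_greedy(q1), epsilon_greedy(q2)
        r = PAYOFF[(a1, a2)]
        # Each agent updates only its own table from the joint outcome, so the
        # value it learns for an action depends on the other agent's current policy.
        q1[a1] += lr * (r - q1[a1])
        q2[a2] += lr * (r - q2[a2])
    return q1, q2

if __name__ == "__main__":
    print(train())
```

Because neither table conditions on the other agent's action, each learner faces a moving target; the scalability and observability characteristics compared in the review compound this effect in larger settings.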

