Deep Reinforcement Learning Based Active Queue Management for IoT Networks

2021 ◽  
Vol 29 (3) ◽  
Author(s):  
Minsu Kim ◽  
Muhammad Jaseemuddin ◽  
Alagan Anpalagan

The Internet of Things (IoT) has pervaded most aspects of our lives through the Fourth Industrial Revolution. A typical family home is expected to contain several hundred smart devices by 2022. Network architectures have been moving toward fog/edge computing to provide the capacity required for IoT. However, to handle the enormous amount of traffic generated by these devices and to reduce queuing delay, novel self-learning network management algorithms are required on fog/edge nodes. For efficient network management, Active Queue Management (AQM), an intelligent queuing discipline, has been proposed. In this paper, we propose a new AQM scheme based on Deep Reinforcement Learning (DRL) to handle latency as well as the trade-off between queuing delay and throughput. We choose Deep Q-Network (DQN) as the baseline of our scheme and compare our approach with various AQM schemes by deploying them on the interface of a fog/edge node in an IoT infrastructure. We simulate the AQM schemes under different bandwidth and round-trip time (RTT) settings; in the empirical results, our approach outperforms the other AQM schemes in terms of delay and jitter while maintaining above-average throughput, verifying that DRL-based AQM is an efficient network manager for congestion.
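The abstract does not include an implementation. As a rough, hedged sketch of how a DQN-driven drop/enqueue decision at a fog/edge queue could be structured, the snippet below uses an illustrative state (queue length, dequeue rate, queuing delay), a binary drop/enqueue action, and a reward that trades throughput against delay; all of these definitions and constants are assumptions, not the authors' design.

```python
# Illustrative DQN-style AQM sketch (not the paper's implementation).
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 3      # assumed state: queue length, dequeue rate, queuing delay
N_ACTIONS = 2      # 0 = enqueue arriving packet, 1 = drop it
GAMMA = 0.99
EPSILON = 0.1

class QNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork()
target_net = QNetwork()                       # periodically synced copy
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                 # (s, a, r, s_next) transitions

def select_action(state):
    """Epsilon-greedy drop/enqueue decision for an arriving packet."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state)).argmax())

def reward(throughput, queuing_delay, delta=0.1):
    """Illustrative reward: favor throughput, penalize queuing delay."""
    return throughput - delta * queuing_delay

def train_step(batch_size=32):
    """One DQN update from sampled transitions in the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s_next = map(torch.tensor, zip(*batch))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next.float()).max(1).values
    loss = nn.functional.mse_loss(q, r.float() + GAMMA * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup the agent would be queried once per arriving packet (or per decision interval), which is why the reward is usually shaped around the delay/throughput trade-off the abstract highlights.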


2020 ◽  
Vol 53 (5) ◽  
pp. 637-644
Author(s):  
Fuchun Jiang ◽  
Chenwei Feng ◽  
Chen Zhu ◽  
Yu Sun

In the information society, data explosion has led to more congestion in the core network, degrading network performance. Random Early Detection (RED) is currently the standard Active Queue Management (AQM) algorithm recommended by the Internet Engineering Task Force (IETF). However, RED is particularly sensitive to both the service load and its algorithm parameters: it cannot fully utilize the bandwidth under a low service load and may suffer a long delay under a high service load. This paper designs Reinforcement Learning AQM (RLAQM), a simple and practical variant of RED that controls the average queue length to a predictable value under various network loads, so that the queue size is no longer sensitive to the level of congestion. Q-learning is adopted to adjust the maximum discarding probability and derive the optimal control strategy. Simulation results indicate that RLAQM effectively overcomes the deficiencies of RED and achieves better congestion control, improving network stability and performance in complex environments. Migrating from RED to RLAQM on Internet routers is straightforward: the only required change is the adjustment of the discarding probability.
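As a minimal sketch of the RLAQM idea described above (illustrative only, not the authors' code), tabular Q-learning can nudge RED's maximum drop probability max_p so the average queue length tracks a target value; the state discretization, reward, and constants below are assumptions.

```python
# Illustrative RLAQM-style sketch: Q-learning tunes RED's max_p.
import random

MIN_TH, MAX_TH = 20, 80       # assumed RED thresholds (packets)
TARGET_QLEN = 50              # assumed target average queue length
ACTIONS = [-0.01, 0.0, 0.01]  # decrease / keep / increase max_p
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {}  # Q[(state, action_index)] -> estimated value

def red_drop_prob(avg_qlen, max_p):
    """Standard RED: drop probability grows linearly between the thresholds."""
    if avg_qlen < MIN_TH:
        return 0.0
    if avg_qlen >= MAX_TH:
        return 1.0
    return max_p * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)

def discretize(avg_qlen, bin_size=10):
    """Map the average queue length to a coarse state index."""
    return int(avg_qlen // bin_size)

def choose_action(state):
    """Epsilon-greedy choice among decrease/keep/increase max_p."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

def reward(avg_qlen):
    """Illustrative reward: penalize deviation from the target queue length."""
    return -abs(avg_qlen - TARGET_QLEN)

def update(state, action, r, next_state):
    """One Q-learning update toward r + gamma * max_a' Q(s', a')."""
    best_next = max(Q.get((next_state, a), 0.0) for a in range(len(ACTIONS)))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)
```

This mirrors the migration argument in the abstract: the RED drop-probability formula itself is untouched, and only max_p is adjusted over time by the learning loop.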



2019 ◽  
Vol 55 (20) ◽  
pp. 1084-1086
Author(s):  
Weiqi Jin ◽  
Rentao Gu ◽  
Yuefeng Ji ◽  
Tao Dong ◽  
Jie Yin ◽  
...  

2001 ◽  
Vol 36 (2-3) ◽  
pp. 203-235 ◽  
Author(s):  
James Aweya ◽  
Michel Ouellette ◽  
Delfin Y Montuno
