Deep Reinforcement Learning-based Adaptive Handover Mechanism for VLC in a Hybrid 6G Network Architecture

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Liqiang Wang ◽  
Dahai Han ◽  
Min Zhang ◽  
Danshi Wang ◽  
Zhiguo Zhang
2021 ◽  
Author(s):  
Minsu Kim

The Internet of Things (IoT) has pervaded most aspects of our lives through the Fourth Industrial Revolution. It is expected that a typical family home could contain several hundred smart devices by 2022. Network architectures have been moving toward fog/edge designs to provide the capacity the IoT demands. However, to deal with the enormous amount of traffic generated by these devices and to reduce queuing delay, novel self-learning network management algorithms are required on fog/edge nodes. For efficient network management, Active Queue Management (AQM), an intelligent queuing discipline, has been proposed. In this paper, we propose a new AQM scheme based on Deep Reinforcement Learning (DRL) to manage latency and the trade-off between queuing delay and throughput. We choose Deep Q-Network (DQN) as the baseline of our scheme and compare our approach with various AQM schemes by deploying them on the interface of a fog/edge node in an IoT infrastructure. We simulate the AQM schemes under different bandwidth and round-trip time (RTT) settings; in the empirical results, our approach outperforms the other AQM schemes in terms of delay and jitter while maintaining above-average throughput, verifying that DRL-based AQM is an efficient network manager under congestion.
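The decision loop this abstract describes, an agent learning when to drop packets to trade queuing delay against throughput, can be sketched with tabular Q-learning. The paper itself uses a Deep Q-Network; the Q-table over discretized queue-occupancy levels below, and all rewards, dynamics and constants, are illustrative assumptions chosen only to keep the example self-contained:

```python
import random

# Tabular Q-learning sketch of an AQM drop policy (the paper uses a DQN;
# everything here is a toy stand-in, not the authors' actual formulation).

N_LEVELS = 10          # discretized queue occupancy: 0 (empty) .. 9 (full)
ACTIONS = (0, 1)       # 0 = enqueue the arriving packet, 1 = drop it
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_LEVELS)]

def reward(level, action):
    if action == 1:                # drop: immediate throughput loss
        return -2.0
    if level == N_LEVELS - 1:      # enqueue into a full queue: overflow
        return -5.0
    return -level / N_LEVELS       # otherwise pay a queuing-delay cost

def step(level, action):
    # Toy dynamics: enqueue grows the queue, drop lets it drain.
    return min(level + 1, N_LEVELS - 1) if action == 0 else max(level - 1, 0)

random.seed(0)
level = 0
for _ in range(5000):
    if random.random() < EPS:                              # explore
        a = random.choice(ACTIONS)
    else:                                                  # exploit
        a = max(ACTIONS, key=lambda x: Q[level][x])
    nxt = step(level, a)
    Q[level][a] += ALPHA * (reward(level, a) + GAMMA * max(Q[nxt]) - Q[level][a])
    level = nxt

# Greedy policy per occupancy level after training.
policy = [max(ACTIONS, key=lambda x: Q[l][x]) for l in range(N_LEVELS)]
print(policy)
```

A DQN replaces the table with a neural network, which is what lets the agent condition on a continuous state (queue length, bandwidth, RTT) instead of a handful of discretized levels.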


2021 ◽  
Vol 11 (18) ◽  
pp. 8419
Author(s):  
Jiang Zhao ◽  
Jiaming Sun ◽  
Zhihao Cai ◽  
Longhong Wang ◽  
Yingxun Wang

To achieve perception-based autonomous control of UAVs, state-of-the-art work favors schemes with onboard sensing and computing, which often consist of several separate modules, each with its own complicated algorithms. Most methods depend on handcrafted designs and prior models, with little capacity for adaptation and generalization. Inspired by research on deep reinforcement learning, this paper proposes a new end-to-end autonomous control method that collapses the separate modules of the traditional control pipeline into a single neural network. An image-based reinforcement learning framework is established, built around the design of the network architecture and the reward function. Training is performed with model-free algorithms developed for the specific mission, and the control policy network maps the input image directly to a continuous actuator control command. A simulation environment for the UAV landing scenario was built. Results under typical cases, including both small and large initial lateral or heading-angle offsets, show that the proposed end-to-end method is feasible for perception-based autonomous control.
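The core of the end-to-end idea is a single network mapping raw pixels to a bounded continuous command. A minimal forward-pass sketch follows, assuming a toy fully connected architecture and a two-dimensional action; the paper's actual network, input resolution and actuator dimensions are not specified here:

```python
import numpy as np

# One network from image to actuator command, replacing separate
# perception/planning/control modules. Sizes are illustrative assumptions.

rng = np.random.default_rng(0)

H, W = 32, 32                      # toy grayscale camera frame
IN, HID, OUT = H * W, 64, 2        # e.g. 2 commands (hypothetical roll, pitch)

W1 = rng.normal(0.0, 0.05, (HID, IN))
b1 = np.zeros(HID)
W2 = rng.normal(0.0, 0.05, (OUT, HID))
b2 = np.zeros(OUT)

def policy(image):
    """Map a pixel frame to a bounded continuous control command."""
    x = image.reshape(-1) / 255.0          # normalize pixels to [0, 1]
    h = np.tanh(W1 @ x + b1)               # hidden features
    return np.tanh(W2 @ h + b2)            # commands squashed into [-1, 1]

frame = rng.integers(0, 256, size=(H, W)).astype(np.float64)
action = policy(frame)
print(action.shape)
```

In training, a model-free algorithm would adjust `W1, b1, W2, b2` from reward alone; the tanh output layer is one common way to keep actuator commands bounded.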


Author(s):  
Zhaoyang Yang ◽  
Kathryn Merrick ◽  
Hussein Abbass ◽  
Lianwen Jin

In this paper, we propose a deep reinforcement learning algorithm that learns multiple tasks concurrently. A new network architecture is proposed that reduces the number of parameters needed per task by more than 75% compared to typical single-task deep reinforcement learning algorithms. The proposed algorithm and network fuse images with sensor data and were tested on up to 12 movement-based control tasks on a simulated Pioneer 3AT robot equipped with a camera and range sensors. Results show that the proposed algorithm and network learn skills as good as those learned by a comparable single-task learning algorithm, and that learning performance remains consistent as the number of tasks and the number of constraints on the tasks increase.
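A per-task parameter reduction of this kind typically comes from sharing most of the network across tasks. The back-of-the-envelope count below illustrates the arithmetic with hypothetical layer sizes (the paper's actual architecture is not reproduced here): a trunk shared by all tasks amortizes its parameters, so the per-task cost is dominated by a small head.

```python
# Illustrative parameter count: shared trunk + per-task heads versus one
# full network per task. All layer sizes are made-up assumptions.

def dense_params(sizes):
    # weights + biases of a fully connected stack
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

n_tasks = 12
single = dense_params([512, 256, 256, 4])   # one full net per task
trunk = dense_params([512, 256, 256])       # shared across all tasks
head = dense_params([256, 4])               # small per-task output head

per_task_shared = trunk / n_tasks + head    # amortized per-task cost
reduction = 1 - per_task_shared / single
print(f"per-task parameter reduction: {reduction:.1%}")
```

With these example sizes the amortized per-task cost falls well below a quarter of the single-task count, consistent in spirit with the >75% figure the abstract reports.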


2021 ◽  
Vol 11 (21) ◽  
pp. 10337
Author(s):  
Junkai Ren ◽  
Yujun Zeng ◽  
Sihang Zhou ◽  
Yichuan Zhang

Scaling end-to-end learning to control robots from vision inputs is a challenging problem in deep reinforcement learning (DRL). While achieving remarkable success on complex sequential tasks, vision-based DRL remains extremely data-inefficient, especially when dealing with high-dimensional pixel inputs. Many recent studies have tried to leverage state representation learning (SRL) to break through this barrier, and some even help the agent learn from pixels as efficiently as from states. Reproducing existing work, accurately judging the improvements offered by novel methods, and applying these approaches to new tasks are vital for sustaining this progress, yet these three demands are seldom straightforward to meet. Without clear criteria and tighter standardization of experimental reporting, it is difficult to determine whether improvements over previous methods are meaningful. For this reason, we conducted ablation studies on hyperparameters, embedding network architecture, embedding dimension, regularization methods, sample quality and SRL methods to systematically compare and analyze their effects on representation learning and reinforcement learning. Three evaluation metrics are summarized, and five baseline algorithms (both value-based and policy-based) and eight tasks are adopted to avoid the particularity of any single experimental setting. Based on a wide range of experimental analyses, we highlight the variability in reported methods and suggest guidelines to make future SRL results more reproducible and stable. We aim to spur discussion about how to assure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible or easily misinterpreted.
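One reporting practice this line of work argues for, evaluating each method over several random seeds and reporting a mean with an uncertainty estimate rather than a single possibly lucky run, can be sketched as follows; the episode-return values are invented purely for illustration:

```python
import statistics

# Aggregate evaluation returns across seeds instead of reporting one run.
# The numbers below are made up; the normal-approximation 95% interval is
# one simple choice of uncertainty estimate, not a prescribed standard.

returns_per_seed = [812.0, 790.5, 845.2, 801.3, 828.9]    # one eval per seed

mean = statistics.mean(returns_per_seed)
std = statistics.stdev(returns_per_seed)                  # sample std (n - 1)
n = len(returns_per_seed)
half_width = 1.96 * std / n ** 0.5                        # approx. 95% CI

print(f"{mean:.1f} ± {half_width:.1f} over {n} seeds")
```

Reporting the spread alongside the mean is exactly what makes it possible to judge whether an improvement over a baseline is meaningful or within run-to-run noise.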


2021 ◽  
Author(s):  
Zhaolei Wang ◽  
Jun Zhang ◽  
Yue Li ◽  
Qinghai Gong ◽  
Wuyi Luo ◽  
...  

Author(s):  
Pawel Ladosz ◽  
Eseoghene Ben-Iwhiwhu ◽  
Jeffery Dick ◽  
Nicholas Ketz ◽  
Soheil Kolouri ◽  
...  
