Dynamic deployment of multi-UAV base stations with deep reinforcement learning

2021
Author(s): Guanhan Wu, Weimin Jia, Jianwei Zhao
Author(s): Yi Zhou, Xiaoyong Ma, Shuting Hu, Danyang Zhou, Nan Cheng, et al.

Author(s): Abegaz Mohammed Seid, Gordon Owusu Boateng, Stephen Anokye, Thomas Kwantwi, Guolin Sun, et al.

Author(s): Akindele Segun Afolabi, Shehu Ahmed, Olubunmi Adewale Akinola

Due to the increased demand for scarce wireless bandwidth, it has become insufficient to serve network user equipment with macrocell base stations alone. Network densification through the addition of low-power nodes (picocells) to conventional high-power nodes addresses the bandwidth shortage, but unfortunately introduces unwanted interference into the network, which reduces throughput. This paper developed a reinforcement learning model that assisted in coordinating interference in a heterogeneous network comprising macrocell and picocell base stations. The learning mechanism was derived from Q-learning and consisted of an agent, states, actions, and rewards. The base station was modeled as the agent, the state represented the condition of the user equipment in terms of the signal-to-interference-plus-noise ratio (SINR), the action was the transmission power level, and the reward was given in terms of throughput. Simulation results showed that the proposed Q-learning scheme improved the average user-equipment throughput in the network. In particular, multi-agent systems with a normal learning rate increased the throughput of associated user equipment by 212.5% compared to a macrocell-only scheme.
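The scheme described above maps naturally onto tabular Q-learning. The sketch below is only an illustration under assumed, simplified conditions (discretized SINR states, four candidate transmit-power levels, a toy interference model, and Shannon throughput as the reward); the names, bucket edges, and parameter values are ours, not the authors'.

```python
# Minimal sketch (not the authors' code): tabular Q-learning for downlink power
# control at a base station. States are coarse SINR buckets of the served user
# equipment, actions are discrete transmit-power levels, and the reward is the
# resulting Shannon throughput. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 5                                             # SINR buckets: very low .. very high
POWER_LEVELS_DBM = np.array([10.0, 20.0, 30.0, 40.0])    # candidate actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1                    # learning rate, discount, exploration

q_table = np.zeros((N_STATES, len(POWER_LEVELS_DBM)))

def sinr_db(power_dbm: float) -> float:
    """Toy channel: SINR grows with transmit power, corrupted by random interference."""
    interference_db = rng.normal(15.0, 3.0)              # illustrative macrocell interference
    return power_dbm - interference_db

def bucket(sinr: float) -> int:
    """Map a SINR value (dB) to one of the discrete states."""
    edges = [-5.0, 0.0, 5.0, 10.0]
    return int(np.searchsorted(edges, sinr))

def throughput(sinr: float) -> float:
    """Reward: normalized Shannon capacity log2(1 + SINR)."""
    return np.log2(1.0 + 10.0 ** (sinr / 10.0))

state = bucket(sinr_db(POWER_LEVELS_DBM[0]))
for step in range(10_000):
    # epsilon-greedy selection over transmit-power levels
    if rng.random() < EPSILON:
        action = int(rng.integers(len(POWER_LEVELS_DBM)))
    else:
        action = int(np.argmax(q_table[state]))

    sinr = sinr_db(POWER_LEVELS_DBM[action])
    reward = throughput(sinr)
    next_state = bucket(sinr)

    # standard Q-learning update
    td_target = reward + GAMMA * np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (td_target - q_table[state, action])
    state = next_state

print("Learned greedy power level per SINR state (dBm):",
      POWER_LEVELS_DBM[np.argmax(q_table, axis=1)])
```

In the paper's multi-agent setting, each base station would run such an update with a reward that also reflects the interference it causes to neighbouring cells; the toy reward here considers only the agent's own throughput.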


Author(s): Omar Sami Oubbati, Mohammed Atiquzzaman, Abderrahmane Lakas, Abdullah Baz, Hosam Alhakami, et al.

Sensors, 2020, Vol. 20 (16), p. 4546
Author(s): Weiwei Zhao, Hairong Chu, Xikui Miao, Lihong Guo, Honghai Shen, et al.

Multiple unmanned aerial vehicle (UAV) collaboration has great potential. To increase the intelligence and environmental adaptability of multi-UAV control, we study the application of deep reinforcement learning algorithms to multi-UAV cooperative control. To address the non-stationary environment caused by changing agent strategies during learning in a multi-agent setting, the paper presents an improved multi-agent reinforcement learning algorithm: the multi-agent joint proximal policy optimization (MAJPPO) algorithm, which follows the centralized-learning, decentralized-execution paradigm. The algorithm uses a moving-window averaging method so that each agent obtains a centralized state value function, enabling better collaboration among the agents. The improved algorithm enhances collaboration and increases the total reward obtained by the multi-agent system. To evaluate its performance, we use the MAJPPO algorithm to complete multi-UAV formation flight and the traversal of multiple-obstacle environments. To simplify UAV control, we use a six-degree-of-freedom, 12-state UAV dynamics model with an attitude control loop. The experimental results show that the MAJPPO algorithm achieves better performance and better environmental adaptability.
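As a rough illustration of the centralized-value idea, the sketch below shows one way a moving-window average could blend per-agent value estimates into a shared baseline that feeds the standard PPO clipped surrogate. The window length, the averaging rule, and the dummy data are our assumptions for illustration and are not taken from the MAJPPO paper.

```python
# Minimal sketch (assumptions, not the published MAJPPO code): a moving-window
# average turns each agent's local value estimate into a shared "centralized"
# state value, which then serves as the baseline in a PPO clipped surrogate.
# Window length, coefficients, and the random inputs are illustrative only.
from collections import deque
import numpy as np

N_AGENTS, WINDOW = 3, 5
CLIP_EPS, GAMMA = 0.2, 0.99

# one sliding window of recent value estimates per agent
value_windows = [deque(maxlen=WINDOW) for _ in range(N_AGENTS)]

def centralized_value(local_values: np.ndarray) -> np.ndarray:
    """Push each agent's latest local estimate into its window and return the
    window-averaged value shared by all agents as their centralized baseline."""
    for i, v in enumerate(local_values):
        value_windows[i].append(v)
    smoothed = np.array([np.mean(w) for w in value_windows])
    return np.full(N_AGENTS, smoothed.mean())        # same baseline for every agent

def ppo_clipped_loss(ratio: np.ndarray, advantage: np.ndarray) -> float:
    """Standard PPO clipped surrogate objective (returned as a loss to minimize)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS) * advantage
    return -np.minimum(unclipped, clipped).mean()

rng = np.random.default_rng(0)
for step in range(4):
    local_v = rng.normal(size=N_AGENTS)              # critics' value estimates (dummy)
    reward = rng.normal(size=N_AGENTS)               # per-agent rewards (dummy)
    next_v = rng.normal(size=N_AGENTS)               # next-state value estimates (dummy)

    v_central = centralized_value(local_v)
    advantage = reward + GAMMA * next_v - v_central  # one-step advantage vs. centralized baseline
    ratio = np.exp(rng.normal(scale=0.05, size=N_AGENTS))  # pi_new / pi_old (dummy)
    print(f"step {step}: clipped surrogate loss = {ppo_clipped_loss(ratio, advantage):.4f}")
```

In a full implementation the value estimates and probability ratios would come from each agent's neural networks, and execution would remain decentralized, with each UAV acting only on its own policy.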


IEEE Access, 2019, Vol. 7, pp. 146264-146272
Author(s): Han Qie, Dianxi Shi, Tianlong Shen, Xinhai Xu, Yuan Li, et al.
