Placement Optimization of Aerial Base Stations with Deep Reinforcement Learning

Author(s):  
Jin Qiu ◽  
Jiangbin Lyu ◽  
Liqun Fu
Author(s):  
Akindele Segun Afolabi ◽  
Shehu Ahmed ◽  
Olubunmi Adewale Akinola

<span lang="EN-US">Due to the increased demand for scarce wireless bandwidth, it has become insufficient to serve the network user equipment using macrocell base stations only. Network densification through the addition of low power nodes (picocell) to conventional high power nodes addresses the bandwidth dearth issue, but unfortunately introduces unwanted interference into the network which causes a reduction in throughput. This paper developed a reinforcement learning model that assisted in coordinating interference in a heterogeneous network comprising macro-cell and pico-cell base stations. The learning mechanism was derived based on Q-learning, which consisted of agent, state, action, and reward. The base station was modeled as the agent, while the state represented the condition of the user equipment in terms of Signal to Interference Plus Noise Ratio. The action was represented by the transmission power level and the reward was given in terms of throughput. Simulation results showed that the proposed Q-learning scheme improved the performances of average user equipment throughput in the network. In particular, </span><span lang="EN-US">multi-agent systems with a normal learning rate increased the throughput of associated user equipment by a whooping 212.5% compared to a macrocell-only scheme.</span>


2021 ◽  
Author(s):  
Yanzhi Hu ◽  
Fengbin Zhang ◽  
Tian Tian ◽  
Zhiyong Shi ◽  
Gang Yu ◽  
...  


2021 ◽  
Author(s):  
Abdeladim Sadiki ◽  
Jamal Bentahar ◽  
Rachida Dssouli ◽  
Abdeslam En-Nouaary

Multi-access Edge Computing (MEC) has recently emerged as a promising technology to serve the needs of mobile devices (MDs) in 5G and 6G cellular networks. By offloading tasks to high-performance servers installed at the edge of the wireless network, resource-limited MDs can cope with the proliferation of recent computationally intensive applications. In this paper, we study the computation offloading problem in a massive multiple-input multiple-output (MIMO)-based MEC system where the base stations are equipped with a large number of antennas. Our objective is to minimize the power consumption and offloading delay at the MDs in a stochastic system environment. To this end, we formulate the problem as a Markov Decision Process (MDP) and propose two Deep Reinforcement Learning (DRL) strategies to learn the optimal offloading policy without any prior knowledge of the environment dynamics. First, a Deep Q-Network (DQN) strategy is analyzed to cope with the explosion of the state space. Then, a more general Proximal Policy Optimization (PPO) strategy is introduced to address the limitations of a discrete action space. Simulation results show that the proposed DRL-based strategies outperform the baseline and state-of-the-art algorithms. Moreover, our PPO algorithm exhibits stable performance and efficient offloading results compared to the benchmark DQN strategy.
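As a rough illustration of the DQN strategy mentioned above, the PyTorch sketch below maps a small mobile-device state vector to Q-values for a binary local-versus-offload decision; the state features, network sizes, and hyperparameters are assumptions made for the example, not the paper's system model.

```python
# Minimal DQN sketch for an offloading decision: the agent observes the MD and
# channel state and chooses whether to execute locally or offload to the MEC
# server; the reward would be a negative weighted sum of power consumption and
# delay. All dimensions and features below are illustrative assumptions.
import random
import torch
import torch.nn as nn

STATE_DIM = 4      # e.g. task size, queue length, channel gain, battery level (assumed)
N_ACTIONS = 2      # 0 = compute locally, 1 = offload to the MEC server
GAMMA = 0.99

class QNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state, epsilon=0.1):
    """Epsilon-greedy offloading decision for a single state vector."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch):
    """One gradient step on a replay batch of (s, a, r, s') transitions.
    states/next_states: float tensors [B, STATE_DIM]; actions: long tensor [B];
    rewards: float tensor [B]."""
    states, actions, rewards, next_states = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + GAMMA * target_net(next_states).max(1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The PPO alternative described in the abstract would replace this value-based update with a clipped policy-gradient objective trained on trajectories collected by the current policy.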


2021 ◽  
Author(s):  
Yuansheng Wu ◽  
Guanqun Zhao ◽  
Dadong Ni ◽  
Junyi Du

Abstract It has been widely acknowledged that network slicing is a key architectural technology to accommodate diversified services for the next generation network (5G). By partitioning the underlying network into multiple dedicated logical networks, 5G can support a variety of extreme business service needs. As network slicing is implemented in radio access networks (RAN), user handoff becomes much more complicated than in traditional mobile networks. As both the physical resource constraints of base stations (BSs) and the logical connection constraints of network slices should be considered in the handoff decision, an intelligent handoff policy becomes imperative. In this paper, we model the handoff in RAN slicing as a Markov decision process (MDP) and resort to deep reinforcement learning to pursue long-term performance improvement in terms of user quality of service (QoS) and network throughput. The effectiveness of our proposed handoff policy is validated via simulation experiments.
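To make the MDP formulation more tangible, the sketch below shows how a handoff decision could combine learned action values with the two feasibility constraints mentioned above (enough free physical resources at the target BS and a logical connection for the UE's slice); the data layout, feasibility rule, and reward weights are assumptions made purely for illustration.

```python
# Rough sketch of the handoff MDP described above: the state captures the UE's
# candidate BSs and their slice/resource status, the action is the target BS,
# and infeasible targets are masked out before selection. The feasibility rule,
# reward weights, and data layout are illustrative assumptions only.
import numpy as np

N_BS = 5  # candidate base stations

def feasible_mask(bs_free_prbs, bs_supports_slice, prb_demand):
    """A BS is a valid handoff target only if it has enough free physical
    resource blocks AND hosts a logical connection for the UE's slice."""
    return (bs_free_prbs >= prb_demand) & bs_supports_slice

def reward(throughput_mbps, qos_satisfied, w_tp=1.0, w_qos=5.0):
    """Long-term objective: weighted sum of network throughput and user QoS."""
    return w_tp * throughput_mbps + w_qos * float(qos_satisfied)

def greedy_handoff(q_values, bs_free_prbs, bs_supports_slice, prb_demand):
    """Pick the highest-value feasible target BS (e.g. from a trained DQN)."""
    mask = feasible_mask(bs_free_prbs, bs_supports_slice, prb_demand)
    masked_q = np.where(mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Example: action values from a trained agent, current loads and slice support.
action = greedy_handoff(
    q_values=np.array([0.2, 0.8, 0.5, 0.9, 0.1]),
    bs_free_prbs=np.array([10, 4, 50, 2, 30]),
    bs_supports_slice=np.array([True, True, True, False, True]),
    prb_demand=8,
)
print("Handoff to BS", action)  # BS 2 here: BS 1 and 3 fail the feasibility check
```

During training, such a masked selection rule would be paired with the reward above so the agent learns to trade instantaneous throughput against longer-term QoS.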

