A deep reinforcement learning for user association and power control in heterogeneous networks

2020 ◽ Vol 102 ◽ pp. 102069 ◽ Author(s): Hui Ding, Feng Zhao, Jie Tian, Dongyang Li, Haixia Zhang
Sensors ◽ 2019 ◽ Vol 19 (23) ◽ pp. 5307 ◽ Author(s): Shuang Zhang, Guixia Kang

To support a vast number of devices with less energy consumption, we propose a new user association and power control scheme for machine-to-machine (M2M)-enabled heterogeneous networks with non-orthogonal multiple access (NOMA), where a mobile user (MU) acting as a machine-type communication gateway can decode and forward both the information of machine-type communication devices and its own data directly to the base station (BS). MU association and power control are jointly considered in the formulated optimization problem for energy efficiency (EE) maximization under the constraints of minimum data rate requirements of MUs. A many-to-one MU association matching algorithm is first proposed based on the theory of matching games. By performing swap-matching operations among MUs, BSs, and sub-channels, the original problem can be solved by dealing with the EE maximization for each sub-channel. Then, two power control algorithms are proposed, employing the tools of sequential optimization, fractional programming, and exhaustive search. Simulation results are provided to demonstrate the optimality properties of our algorithms under different parameter settings.
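The swap-matching idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the `rates` and `powers` tables, the rate/power EE utility, and the two-user example are all hypothetical stand-ins for values that would come from channel gains and the NOMA rate model.

```python
import itertools

def swap_matching(users, assignment, rates, powers):
    """Greedy swap-matching sketch: repeatedly swap the (BS, sub-channel)
    slots of two users whenever the swap raises their combined energy
    efficiency (rate/power), until no improving swap remains.

    assignment: dict user -> (bs, channel)
    rates, powers: dicts keyed by (user, bs, channel); hypothetical inputs.
    """
    improved = True
    while improved:
        improved = False
        for u1, u2 in itertools.combinations(users, 2):
            # Tentatively exchange the two users' (BS, sub-channel) slots.
            swapped = dict(assignment)
            swapped[u1], swapped[u2] = assignment[u2], assignment[u1]
            old_ee = sum(rates[(u, *assignment[u])] / powers[(u, *assignment[u])]
                         for u in (u1, u2))
            new_ee = sum(rates[(u, *swapped[u])] / powers[(u, *swapped[u])]
                         for u in (u1, u2))
            if new_ee > old_ee:          # keep the swap only if EE improves
                assignment = swapped
                improved = True
    return assignment
```

Because each accepted swap strictly increases total EE and the matching space is finite, the loop terminates at a two-sided exchange-stable matching; the per-sub-channel power control step would then refine the powers within each slot.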


2018 ◽ Vol 7 (4) ◽ pp. 526-529 ◽ Author(s): Xietian Huang, Wei Xu, Hong Shen, Hua Zhang, Xiaohu You

2019 ◽ Vol 18 (8) ◽ pp. 3933-3947 ◽ Author(s): Roohollah Amiri, Mojtaba Ahmadi Almasi, Jeffrey G. Andrews, Hani Mehrpouyan

2021 ◽ Vol 11 (9) ◽ pp. 4135 ◽ Author(s): Chi-Kai Hsieh, Kun-Lin Chan, Feng-Tsun Chien

This paper studies the problem of joint power allocation and user association in wireless heterogeneous networks (HetNets) with a deep reinforcement learning (DRL)-based approach. This is a challenging problem because the action space is hybrid, consisting of continuous actions (power allocation) and discrete actions (device association). Instead of quantizing the continuous space (i.e., the possible power values) into a set of discrete alternatives and applying traditional deep reinforcement learning approaches such as deep Q-learning, we propose working on the hybrid space directly by using the novel parameterized deep Q-network (P-DQN) to update the learning policy and maximize the average cumulative reward. Furthermore, we incorporate the constraints of limited wireless backhaul capacity and the quality-of-service (QoS) of each user equipment (UE) into the learning process. Simulation results show that the proposed P-DQN outperforms traditional approaches, such as the DQN and distance-based association, in terms of energy efficiency while satisfying the QoS and backhaul capacity constraints. On average, the proposed P-DQN improves energy efficiency by up to 77.6% and 140.6% over the traditional DQN and distance-based association approaches, respectively, in a HetNet with three small base stations (SBSs) and five UEs.
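The hybrid action selection at the core of P-DQN can be sketched as below. This is a toy illustration of the selection rule only, not the paper's trained model: the linear "actor" and "Q-network", the state dimension, the power bound `P_MAX`, and the random weights are all hypothetical placeholders for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BS = 3        # discrete actions: which base station to associate with
STATE_DIM = 4   # hypothetical state size (e.g., channel gains, loads)
P_MAX = 1.0     # continuous action: transmit power in (0, P_MAX)

# Random linear weights standing in for trained actor / Q networks.
W_actor = rng.normal(size=(N_BS, STATE_DIM))    # one power head per BS
W_q = rng.normal(size=(N_BS, STATE_DIM + 1))    # scores Q(s, k, x_k)

def select_action(state):
    """P-DQN-style hybrid action selection:
    1) the actor proposes a continuous power x_k for every discrete choice k,
    2) the Q-network scores each (k, x_k) pair,
    3) the agent takes the discrete action with the highest Q-value,
       together with that action's continuous parameter."""
    powers = P_MAX / (1.0 + np.exp(-W_actor @ state))   # sigmoid -> (0, P_MAX)
    q_values = np.array([W_q[k] @ np.append(state, powers[k])
                         for k in range(N_BS)])
    k_star = int(np.argmax(q_values))
    return k_star, float(powers[k_star])

bs, power = select_action(rng.normal(size=STATE_DIM))
```

The design point this illustrates is that the argmax runs only over the discrete associations, while each candidate carries its own continuous power; this avoids the quantization of the power space that a plain DQN would require.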

