Deep Reinforcement Learning for 5G Networks: Joint Beamforming, Power Control, and Interference Coordination

2020 ◽  
Vol 68 (3) ◽  
pp. 1581-1592 ◽  
Author(s):  
Faris B. Mismar ◽  
Brian L. Evans ◽  
Ahmed Alkhateeb
Author(s):  
Francisco Hugo Costa Neto ◽  
Daniel Costa Araujo ◽  
Mateus Pontes Mota ◽  
Tarcisio Maciel ◽  
Andre L. F. De Almeida

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Chen Sun ◽  
Shiyi Wu ◽  
Bo Zhang

In future heterogeneous cellular networks with small cells, such as D2D and relay cells, interference coordination between macro cells and small cells should be addressed through effective resource allocation and power control. The two-step Stackelberg game is a widely used and feasible model for formulating the resource allocation and power control problem. In both the follower games for the small cells and the leader game for the macro cell, the cost parameters are critical to the performance of the Stackelberg game, yet previous studies have failed to adequately address their optimization. This paper presents a reinforcement learning approach for training the cost parameters to improve system performance. Furthermore, a two-stage pretraining plus ε-greedy algorithm is proposed to accelerate the convergence of the reinforcement learning. The simulation results demonstrate that, compared with three benchmark algorithms, the proposed algorithm can enhance the average throughput of all users and of cellular users by up to 7% and 9.7%, respectively.
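The abstract does not give the algorithm's details, but the two-stage idea of "pretraining plus ε-greedy" can be illustrated with a minimal tabular sketch. Everything below is a hypothetical stand-in: the `throughput` reward model, the discretized candidate cost parameters, and the episode counts are all illustrative assumptions, not the paper's simulation setup.

```python
import random

# Hypothetical reward model: throughput peaks at an (unknown to the learner)
# optimal cost parameter. This stands in for the paper's network simulator.
def throughput(cost):
    return 10.0 - 20.0 * (cost - 0.6) ** 2 + random.gauss(0, 0.1)

ACTIONS = [i / 10 for i in range(1, 10)]   # candidate cost parameters

def pretrain(q, samples=50):
    """Stage 1: seed each action's value by uniform sampling, so the
    ε-greedy stage starts from informed estimates instead of zeros."""
    for a in ACTIONS:
        q[a] = sum(throughput(a) for _ in range(samples)) / samples

def epsilon_greedy(q, episodes=500, eps=0.2, alpha=0.1):
    """Stage 2: ε-greedy bandit updates refining the pretrained values."""
    for _ in range(episodes):
        if random.random() < eps:
            a = random.choice(ACTIONS)      # explore
        else:
            a = max(q, key=q.get)           # exploit current best estimate
        q[a] += alpha * (throughput(a) - q[a])
    return max(q, key=q.get)

random.seed(0)
q_table = {a: 0.0 for a in ACTIONS}
pretrain(q_table)
best_cost = epsilon_greedy(q_table)
```

The design point the abstract highlights is the pretraining stage: without it, ε-greedy starts from uniform (zero) value estimates and wastes early episodes exploring clearly poor cost parameters, which is what slows convergence.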


2021 ◽  
Author(s):  
Tianjiao Pu ◽  
Fei Jiao ◽  
Yifan Cao ◽  
Zhicheng Liu ◽  
Chao Qiu ◽  
...  

Abstract As one of the core components that improves the protection and reliability of power transportation, generation, delivery, and consumption, the smart grid can provide full visibility and universal control of power assets and services, provide resilience to system anomalies, and enable new ways to supply and trade resources in a coordinated manner. In current power grids, the large number of supply and demand components and of sensing and control devices generates many requirements, e.g., data perception, information transmission, business processing, and real-time control, which the existing centralized cloud computing paradigm struggles to meet with the required rapid response and local autonomy. In particular, power flow adjustment is one of the key challenges in the smart grid: the grid contains many diverse, adjustable supply components, the resulting optimization is difficult, and traditional manual, centralized methods often depend on expert experience and require substantial manpower. Furthermore, the application of edge intelligence to power flow adjustment in the smart grid is still in its infancy. To meet this challenge, we propose a power control framework combining edge computing and machine learning, which makes full use of edge nodes for network state sensing and power control, so as to achieve rapid response and local autonomy. We also design the state, action, and reward and use deep reinforcement learning to make intelligent control decisions, targeting the problem that the power flow calculation often does not converge. The simulation results demonstrate the effectiveness of our method, with successful dynamic power flow calculation and stable operation under various power conditions.
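The abstract's state/action/reward design is not specified, so the following is only a toy illustration of the reinforcement-learning loop it describes, with tabular Q-learning standing in for the deep RL controller. The state (a generator supply setpoint), actions (setpoint adjustments), reward (negative supply-demand mismatch), and the trivial power-balance environment are all hypothetical assumptions for the sketch, not the paper's model.

```python
import random

DEMAND = 50                            # fixed demand (MW), a toy stand-in
ACTIONS = [-5, -1, 1, 5]               # setpoint adjustments (MW)

def step(supply, action):
    """Apply an adjustment; reward is the negative supply-demand mismatch,
    and the episode ends when the toy 'power flow' balances exactly."""
    supply = max(0, min(100, supply + action))
    mismatch = abs(supply - DEMAND)
    return supply, -mismatch, mismatch == 0

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning with ε-greedy exploration over supply states."""
    q = {s: [0.0] * len(ACTIONS) for s in range(101)}
    rng = random.Random(seed)
    for _ in range(episodes):
        supply = rng.randrange(101)
        for _ in range(50):            # cap steps per episode
            if rng.random() < eps:
                i = rng.randrange(len(ACTIONS))
            else:
                i = max(range(len(ACTIONS)), key=lambda j: q[supply][j])
            nxt, r, done = step(supply, ACTIONS[i])
            target = r + (0.0 if done else gamma * max(q[nxt]))
            q[supply][i] += alpha * (target - q[supply][i])
            supply = nxt
            if done:
                break
    return q

def balance(start, q, limit=60):
    """Greedy rollout: follow the learned policy until supply meets demand."""
    supply, steps = start, 0
    while supply != DEMAND and steps < limit:
        i = max(range(len(ACTIONS)), key=lambda j: q[supply][j])
        supply, _, _ = step(supply, ACTIONS[i])
        steps += 1
    return supply, steps

q_table = train()
final, steps = balance(0, q_table)
```

In the framework the abstract proposes, this learning loop would run on an edge node close to the controlled devices, which is what gives the claimed fast response and local autonomy compared with a centralized cloud controller.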

