A Resource Allocation Algorithm for Ultra-Dense Networks Based on Deep Reinforcement Learning

Author(s): Huashuai Zhang, Tingmei Wang, Haiwei Shen

The resource optimization of ultra-dense networks (UDNs) is critical to meeting users' huge demand for wireless data traffic, but mainstream optimization algorithms suffer from problems such as poor optimization performance and high computational load. This paper puts forward a wireless resource allocation algorithm based on deep reinforcement learning (DRL), which aims to maximize the total throughput of the entire network and transforms the resource allocation problem into a deep Q-learning process. To allocate resources in UDNs effectively, the DRL algorithm was introduced to improve the allocation efficiency of wireless resources; the authors adopted the resource allocation strategy of the deep Q-network (DQN), and employed experience replay and a target network to overcome the instability and divergence caused by correlations with preceding network states and to mitigate the overestimation of Q values. Simulation results show that the proposed algorithm can maximize the total throughput of the network while making the network more energy-efficient and stable. Thus, it is meaningful to introduce DRL into the research of UDN resource allocation.
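The abstract describes a standard DQN pipeline without giving code. The following is a minimal sketch, not the authors' implementation: the state dimension, action set, network sizes, and reward (here a random stand-in for the achieved throughput) are placeholder assumptions; it only illustrates how experience replay and a target network enter the Q-learning update.

```python
# Minimal DQN sketch for resource allocation (assumed placeholders throughout).
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 16   # assumed: per-cell channel/interference indicators
N_ACTIONS = 8    # assumed: candidate resource-block/power assignments

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())   # target net starts as a copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                    # experience replay buffer
gamma, batch_size = 0.99, 64

def select_action(state, eps):
    """Epsilon-greedy choice over resource-allocation actions."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()

def train_step():
    """One DQN update: sample stored transitions, bootstrap from the target net."""
    if len(replay) < batch_size:
        return
    batch = random.sample(list(replay), batch_size)
    s  = torch.stack([b[0] for b in batch])
    a  = torch.tensor([b[1] for b in batch])
    r  = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    s2 = torch.stack([b[3] for b in batch])
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # The target network decouples the bootstrap target from the online net,
        # which is what stabilizes training and curbs Q-value overestimation.
        target = r + gamma * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

if __name__ == "__main__":
    # Toy rollout with random states and rewards, just to exercise the loop.
    for step in range(500):
        s = torch.randn(STATE_DIM)
        a = select_action(s, eps=0.1)
        r = random.random()              # would be the achieved network throughput
        s2 = torch.randn(STATE_DIM)
        replay.append((s, a, r, s2))
        train_step()
        if step % 100 == 0:
            target_net.load_state_dict(q_net.state_dict())  # periodic sync
```

In an actual UDN setting, each transition would record the measured channel and interference state, the chosen assignment, and the resulting throughput, with the target network resynchronized every few hundred updates.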

2019, Vol. 9(7), pp. 1391
Author(s): Xiangwei Bai, Qing Li, Yanqun Tang

In this paper, a low-complexity multi-cell resource allocation algorithm with near-optimal system throughput is proposed to resolve the conflict between high system throughput and low complexity in indoor visible light communication ultra-dense networks (VLC-UDNs). First, by establishing the optimization model of resource allocation in each cell, we show that the problem is a convex optimization problem. Then, after a reasonable approximation, the analytic formula of the normalized scaling factor of each terminal for resource allocation is derived, and the resource allocation algorithm is proposed on this basis. Finally, complexity analysis shows that the proposed algorithm has polynomial complexity, lower than that of the classical interior-point method. Simulation results show that the proposed method achieves a 57% improvement in average system throughput and a 67% improvement in quality-of-service (QoS) guarantee compared with the required data rate proportion allocation (RDR-PA) method.
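The closed-form scaling factor itself is not reproduced in the abstract, so the sketch below only illustrates the overall shape of such an allocator: a hypothetical per-terminal weight (here, required rate divided by achievable spectral efficiency) is normalized within the cell, and the cell bandwidth is split proportionally, giving linear per-cell complexity. The weight formula is an assumption for illustration, not the paper's derived expression.

```python
# Illustrative per-cell allocation via normalized scaling factors (assumed formula).

def allocate_cell_resources(required_rate, spectral_eff, cell_bandwidth):
    """Split a VLC cell's bandwidth among K terminals using normalized weights.

    required_rate : list of each terminal's required data rate (bit/s)
    spectral_eff  : list of each terminal's achievable spectral efficiency (bit/s/Hz)
    cell_bandwidth: total modulation bandwidth of the cell (Hz)
    """
    # Placeholder scaling factor; the paper's closed form differs.
    raw = [r / max(eta, 1e-9) for r, eta in zip(required_rate, spectral_eff)]
    total = sum(raw)
    weights = [x / total for x in raw]             # normalized scaling factors
    return [w * cell_bandwidth for w in weights]   # per-terminal bandwidth share

if __name__ == "__main__":
    shares = allocate_cell_resources(
        required_rate=[2e6, 1e6, 4e6],   # assumed terminal demands
        spectral_eff=[4.0, 2.0, 5.0],    # assumed per-link efficiencies
        cell_bandwidth=20e6)
    print([f"{s / 1e6:.2f} MHz" for s in shares])
```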


2019, Vol. 8(3), pp. 5930-5938

With recent advancements and the rapid growth of LTE networks, machine-type communication (MTC) plays a vital role in the characterization of the Internet of Things (IoT). Human-to-human (H2H) communication and MTC are the two types of communication handled by LTE-A networks. Because H2H communication and MTC coexist in LTE-A networks, scheduling critical MTC alongside H2H traffic poses a serious challenge: the network must allocate resource blocks to users while maintaining the quality of service (QoS) requirements of H2H communication and supporting the data traffic of MTC. In this paper, we propose a resource allocation algorithm that addresses the problems faced by critical MTC and H2H communication while maintaining QoS requirements from a cross-layer design perspective. A novel cross-layer memetic resource allocation algorithm is presented by investigating the resource allocation problem for different combinations of channel quality indicator (CQI) modes for critical MTC devices (MTCDs) and H2H UEs. The performance and computational complexity of the proposed algorithm for the different CQI cases are measured in terms of cell throughput and the probability of delay bound violation (PDBV), and the simulation results show that the proposed system is more efficient than other resource allocation algorithms.
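As an illustration of the memetic idea (genetic search combined with a local refinement step), the sketch below assigns resource blocks to devices and scores assignments by throughput minus a penalty that stands in for the delay-bound-violation probability of critical MTCDs. All rates, device counts, and weights are assumed placeholder values, not the paper's cross-layer model.

```python
# Minimal memetic-algorithm sketch for resource-block assignment (assumed data).
import random

N_RB, N_UE = 12, 6                 # resource blocks, devices (assumed sizes)
# rate[u][b]: achievable rate of device u on resource block b (CQI-dependent)
rate = [[random.uniform(0.5, 3.0) for _ in range(N_RB)] for _ in range(N_UE)]
is_critical_mtc = [u < 2 for u in range(N_UE)]   # first two devices: critical MTCDs
delay_penalty = 5.0                              # weight on unserved critical MTCDs

def fitness(assign):
    """Cell throughput minus a crude stand-in for the PDBV of critical MTCDs."""
    throughput = sum(rate[u][b] for b, u in enumerate(assign))
    served = set(assign)
    violations = sum(1 for u in range(N_UE) if is_critical_mtc[u] and u not in served)
    return throughput - delay_penalty * violations

def local_search(assign):
    """Memetic refinement: greedily reassign each RB if a single change helps."""
    best = list(assign)
    for b in range(N_RB):
        for u in range(N_UE):
            cand = list(best)
            cand[b] = u
            if fitness(cand) > fitness(best):
                best = cand
    return best

def memetic_allocate(pop_size=20, generations=30):
    """Genetic loop (selection, crossover, mutation) plus local search on children."""
    pop = [[random.randrange(N_UE) for _ in range(N_RB)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_RB)
            child = a[:cut] + b[cut:]                     # one-point crossover
            if random.random() < 0.2:                     # mutation
                child[random.randrange(N_RB)] = random.randrange(N_UE)
            children.append(local_search(child))          # memetic local step
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = memetic_allocate()
    print("RB -> device:", best, "fitness:", round(fitness(best), 2))
```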

