Reinforcement Learning for Energy Optimization with 5G Communications in Vehicular Social Networks

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2361 ◽  
Author(s):  
Hyebin Park ◽  
Yujin Lim

The growing deployment of connected vehicles has sharply increased data traffic in vehicular social networks (VSNs). To provide efficient communication between connected vehicles, researchers have studied device-to-device (D2D) communication. D2D communication not only reduces the energy consumption and load of the system but also increases system capacity by reusing cellular resources. However, D2D communication is highly affected by interference and therefore requires interference-management techniques such as mode selection and power control. Making optimal mode-selection and power-control decisions requires reinforcement learning that considers a variety of factors. In this paper, we propose a reinforcement-learning technique for energy optimization with fifth-generation communication in VSNs. To achieve energy optimization, we use centralized Q-learning in the system and distributed Q-learning in the vehicles. The proposed algorithm learns to maximize the energy efficiency of the system by adjusting the minimum signal-to-interference-plus-noise ratio (SINR) to guarantee the outage probability. Simulations were performed to compare the proposed algorithm with existing mode-selection and power-control algorithms; the proposed algorithm performed best in terms of system energy efficiency and achievable data rate.
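The core mechanism the abstract describes, Q-learning that trades transmit power against an SINR (outage) constraint to maximize energy efficiency, can be sketched in a few lines. This is a minimal single-state (bandit-style) illustration, not the paper's algorithm: the power levels, channel gain, noise power, SINR threshold, and reward shape are all illustrative assumptions.

```python
import math
import random

POWER_LEVELS = [0.1, 0.2, 0.4, 0.8]  # candidate transmit powers in watts (assumed)
SINR_MIN = 1.0                       # minimum SINR threshold for outage (assumed)
NOISE = 0.05                         # interference-plus-noise power (assumed)
CHANNEL_GAIN = 0.9                   # flat channel gain (assumed)

def reward(power):
    """Energy-efficiency reward: rate per unit power, with an outage penalty."""
    sinr = CHANNEL_GAIN * power / NOISE
    if sinr < SINR_MIN:              # SINR target missed -> outage penalty
        return -1.0
    return math.log2(1.0 + sinr) / power  # bits/s/Hz per watt

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the power levels (single state)."""
    rng = random.Random(seed)
    q = [0.0] * len(POWER_LEVELS)
    for _ in range(episodes):
        if rng.random() < epsilon:   # explore a random power level
            a = rng.randrange(len(POWER_LEVELS))
        else:                        # exploit the current best power level
            a = max(range(len(POWER_LEVELS)), key=q.__getitem__)
        q[a] += alpha * (reward(POWER_LEVELS[a]) - q[a])
    return q

q = train()
best_power = POWER_LEVELS[max(range(len(POWER_LEVELS)), key=q.__getitem__)]
```

Under these assumed numbers, every power level meets the SINR floor, so the learner settles on the lowest power, which yields the highest rate-per-watt; raising `SINR_MIN` would push it toward higher powers, mirroring the efficiency/outage trade-off the paper tunes.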

2014 ◽  
Vol 556-562 ◽  
pp. 1766-1769 ◽  
Author(s):  
Lian Fen Huang ◽  
Bin Wen ◽  
Zhi Bin Gao ◽  
Hong Xiang Cai ◽  
Yu Jie Li

Femtocells are introduced to improve indoor coverage, which benefits both users and operators. However, they inevitably create interference-management issues in heterogeneous networks consisting of femtocells and macrocells. In this paper, a decentralized Q-learning-based power-control strategy is proposed and compared with homogeneous power allocation and the smart power control (SPC) algorithm. Simulation results show that the Q-learning-based power-control algorithm can balance capacity between macrocells and femtocells and greatly enhance the energy efficiency of the whole network.
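The decentralized idea can be sketched as independent Q-learners, one per femtocell, each choosing its own transmit power from a local reward that trades its own rate against the interference it causes the macro user. This is a toy two-femtocell illustration under assumed gains, noise, and penalty weight, not the paper's system model or SPC baseline.

```python
import math
import random

POWERS = [0.05, 0.1, 0.2]  # candidate femtocell transmit powers in watts (assumed)
NOISE = 0.01               # noise power (assumed)
GAIN_OWN = 1.0             # gain from a femtocell to its own user (assumed)
GAIN_CROSS = 0.1           # cross gain between the two femtocells (assumed)
GAIN_MACRO = 0.5           # gain from a femtocell to the macro user (assumed)
PENALTY = 20.0             # weight on interference caused to the macrocell (assumed)

def reward(p_self, p_other):
    """Own femtocell rate minus a penalty for interference to the macro user."""
    sinr = GAIN_OWN * p_self / (NOISE + GAIN_CROSS * p_other)
    return math.log2(1.0 + sinr) - PENALTY * GAIN_MACRO * p_self

def train(episodes=5000, alpha=0.05, epsilon=0.1, seed=1):
    """Two independent epsilon-greedy Q-learners, one per femtocell."""
    rng = random.Random(seed)
    q = [[0.0] * len(POWERS) for _ in range(2)]  # one Q-table per femtocell
    for _ in range(episodes):
        acts = []
        for i in range(2):  # each femtocell picks its power independently
            if rng.random() < epsilon:
                acts.append(rng.randrange(len(POWERS)))
            else:
                acts.append(max(range(len(POWERS)), key=q[i].__getitem__))
        for i in range(2):  # each updates from its own local reward only
            r = reward(POWERS[acts[i]], POWERS[acts[1 - i]])
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])
    return q

q = train()
best = [POWERS[max(range(len(POWERS)), key=q[i].__getitem__)] for i in range(2)]
```

With these assumed numbers the macro-interference penalty steers both learners away from the highest power, which is the capacity compromise between tiers that the paper's results describe; no central controller is needed, since each femtocell only observes its own reward.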


2021 ◽  
Vol 19 ◽  
pp. 215-222
Author(s):  
Nusrat Jahan ◽  
Ashikur Rahman Khan ◽  
Main Uddin ◽  
Mahamudul Hasan Rana

A single bidirectional link allows communication between two devices in a device-to-device (D2D) communication system. D2D technology has to be integrated with the current cellular system, and because D2D and cellular users share the same licensed spectrum for transmission, the chance of interference increases. It is challenging for researchers to find proper mechanisms to decrease interference and maximize performance. In this paper, we survey the challenges, and their solutions, in enabling D2D communication in cellular networks with low interference. We describe peer discovery, the mode-selection process, and interference management through power control and resource allocation. Finally, we conclude that with proper power control, spectrum slicing, and resource allocation, both co-tier and cross-tier interference can be mitigated.

