Distributed Deep Learning for Power Control in D2D Networks with Outdated Information

Author(s):  
Jiaqi Shi ◽  
Qianqian Zhang ◽  
Ying-Chang Liang ◽  
Xiaojun Yuan

Energies ◽  
2019 ◽  
Vol 12 (22) ◽  
pp. 4300 ◽  
Author(s):  
Hoon Lee ◽  
Han Seung Jang ◽  
Bang Chul Jung

Achieving energy efficiency (EE) fairness among heterogeneous mobile devices will become a crucial issue in future wireless networks. This paper investigates a deep learning (DL) approach for improving EE fairness performance in interference channels (IFCs), where multiple transmitters simultaneously convey data to their corresponding receivers. To improve EE fairness, we aim to maximize the minimum EE among multiple transmitter–receiver pairs by optimizing the transmit power levels. Owing to its fractional and max-min formulation, the problem is shown to be non-convex, and it is thus difficult to identify the optimal power control policy. Although the EE fairness maximization problem has recently been addressed by the successive convex approximation framework, that approach requires intensive computation for its iterative optimizations and suffers from the sub-optimality incurred by the non-convexity. To tackle these issues, we propose a deep neural network (DNN) in which the procedure for calculating the optimal solution, which is unknown in general, is accurately approximated by a well-designed DNN. The target of the DNN is to yield an efficient power control solution for the EE fairness maximization problem by accepting the channel state information as an input feature. An unsupervised training algorithm is presented in which the DNN learns, by itself, an effective mapping from the channel to the EE-maximizing power control strategy. Numerical results demonstrate that the proposed DNN-based power control method outperforms a conventional optimization approach with a much-reduced execution time. This work opens a new possibility of using DL as an alternative optimization tool for the EE-maximizing design of next-generation wireless networks.
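As an illustrative sketch of the approach this abstract describes (a DNN mapping channel state information to transmit powers, trained without labels to maximize the minimum EE), the following NumPy snippet implements the forward pass and the min-EE objective for a toy K = 3 interference channel. The network size, channel model, circuit power, bandwidth, and all other constants are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def min_ee(p, G, sigma2=1.0, p_circuit=0.1, bw=1.0):
    """Minimum energy efficiency (bits/Joule) over the tx-rx pairs.

    G[i, j] is the channel gain from transmitter j to receiver i.
    sigma2, p_circuit, and bw are illustrative constants, not paper values.
    """
    signal = np.diag(G) * p                 # desired-link received powers
    interference = G @ p - signal           # cross-link interference at each rx
    rate = bw * np.log2(1.0 + signal / (interference + sigma2))
    ee = rate / (p + p_circuit)             # per-pair energy efficiency
    return ee.min()

def dnn_power(h, W1, b1, W2, b2, p_max=1.0):
    """One-hidden-layer network mapping flattened channel gains to
    per-pair transmit powers, squashed into (0, p_max) by a sigmoid."""
    z = np.maximum(W1 @ h + b1, 0.0)                 # ReLU hidden layer
    return p_max / (1.0 + np.exp(-(W2 @ z + b2)))    # sigmoid output layer

K = 3
G = rng.uniform(0.1, 1.0, (K, K)) + np.eye(K)  # direct links made stronger
h = G.flatten()                                # CSI as the DNN input feature

# randomly initialised weights stand in for trained parameters
W1 = rng.normal(0.0, 0.1, (8, K * K)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, (K, 8));     b2 = np.zeros(K)

p = dnn_power(h, W1, b1, W2, b2)
obj = min_ee(p, G)  # unsupervised training would ascend this objective
```

In an actual unsupervised training loop, gradient ascent on `min_ee` with respect to the network weights (e.g. via an autodiff framework) would replace the random initialisation used here.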


2019 ◽  
Vol 23 (11) ◽  
pp. 2004-2007 ◽  
Author(s):  
Han Seung Jang ◽  
Hoon Lee ◽  
Tony Q. S. Quek

Author(s):  
Nuwanthika Rajapaksha ◽  
K. B. Shashika Manosha ◽  
Nandana Rajatheva ◽  
Matti Latva-Aho

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Muhammad Muzamil Aslam ◽  
Liping Du ◽  
Zahoor Ahmed ◽  
Muhammad Nauman Irshad ◽  
Hassan Azeem

The cognitive radio network (CRN) is aimed at strengthening the system through learning and adjustment, by observing and measuring the available resources. Spectrum sensing in a CRN should therefore be both feasible and fast. The ability to observe and reconfigure is the key feature of a CRN, and current machine learning techniques perform well when incorporated into system adaptation algorithms. This paper describes the consensus performance and power control of spectrum sharing in a CRN. (1) CRN users are treated as noncooperative: the power control policy of the primary user (PU) is predefined, and the secondary user (SU) is unaware of the PU's power control policy. For more efficient spectrum sharing, a deep learning power control strategy is developed, based on the received signal strength at the CRN nodes. (2) An agent-based approach is introduced for the CR users' consensus performance. (3) All agents reach their steady-state values after nearly 100 seconds; however, the settling time is large. A uniform sensing delay of 0.4 seconds is assumed throughout the whole operation. The assumed method is sufficient to represent large-scale sensing delay in the CR network.
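The agent-based consensus behaviour summarised in point (2) can be sketched with a standard discrete-time consensus iteration, in which each CR user repeatedly moves toward the average of its neighbours' estimates until all agents settle on a common steady-state value. The ring topology, step size, and initial estimates below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def consensus_step(x, A, eps=0.1):
    """One discrete-time consensus update: each agent moves a fraction
    eps toward its neighbours' values. A is the symmetric adjacency
    matrix of the agent network."""
    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian
    return x - eps * (L @ x)

# 4 CR users on a ring topology (illustrative, not the paper's network)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

x = np.array([2.0, 4.0, 6.0, 8.0])  # initial sensed power estimates
for _ in range(200):                # iterate until steady state
    x = consensus_step(x, A)

# all agents settle to the network average (5.0 here)
```

On any connected undirected graph with a sufficiently small step size, this iteration provably converges to the average of the initial values, which is why settling time (the roughly 100 seconds reported above) is the natural performance metric for such schemes.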

