Energy-Efficient Channel Switching in Cognitive Radio Networks: A Reinforcement Learning Approach

2020 · Vol 69 (10) · pp. 12359-12362 · Author(s): Haichuan Ding, Xuanheng Li, Ying Ma, Yuguang Fang
Energies · 2019 · Vol 12 (14) · pp. 2829 · Author(s): Yihang Du, Ying Xu, Lei Xue, Lijia Wang, Fan Zhang

Deep reinforcement learning (DRL) has been successfully applied to joint routing and resource management in large-scale cognitive radio networks. However, it requires many trial-and-error interactions with the environment, which results in high energy consumption and long transmission delays. In this paper, an apprenticeship learning scheme is proposed for energy-efficient cross-layer routing design. First, to guarantee energy efficiency and compress the huge action space, a novel concept called the dynamic adjustment rating is introduced, which regulates transmit power efficiently through a multi-level transition mechanism. On top of this, Prioritized Memories Deep Q-learning from Demonstrations (PM-DQfD) is presented to speed up convergence and reduce memory occupation. PM-DQfD is then applied to the cross-layer routing design to improve power efficiency and reduce routing latency. Simulation results confirm that the proposed method achieves higher energy efficiency, shorter routing latency, and a larger packet delivery ratio than traditional algorithms such as Cognitive Radio Q-routing (CRQ-routing), Prioritized Memories Deep Q-Network (PM-DQN), and the Conjecture-Based Multi-agent Q-learning Scheme (CBMQ).
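The core idea behind learning from demonstrations with prioritized replay can be illustrated with a small replay-buffer sketch. This is a minimal, generic illustration in the spirit of DQfD-style methods, not the paper's exact PM-DQfD design: the class name, priority bonuses, and eviction policy are assumptions chosen for clarity. Expert demonstrations are kept permanently and receive a larger priority bonus so they keep being sampled, while self-generated transitions live in a ring buffer and are evicted when capacity runs out.

```python
import random


class PrioritizedDemoBuffer:
    """Illustrative prioritized replay buffer seeded with expert demonstrations.

    Demonstrations are never evicted and get a larger priority bonus
    (eps_demo > eps_self), so they stay likely to be sampled; self-generated
    transitions are overwritten oldest-first once capacity is reached.
    """

    def __init__(self, capacity, eps_self=0.01, eps_demo=0.1):
        self.capacity = capacity      # max number of self-generated transitions
        self.eps_self = eps_self      # small bonus: every sample stays reachable
        self.eps_demo = eps_demo      # larger bonus for demonstration data
        self.demos = []               # demonstration transitions (kept forever)
        self.demo_prios = []
        self.agent = []               # self-generated transitions (ring buffer)
        self.agent_prios = []
        self.next_idx = 0             # next ring-buffer slot to overwrite

    def add_demo(self, transition, td_error=1.0):
        """Store an expert demonstration with a boosted priority."""
        self.demos.append(transition)
        self.demo_prios.append(abs(td_error) + self.eps_demo)

    def add(self, transition, td_error=1.0):
        """Store a self-generated transition, evicting the oldest if full."""
        prio = abs(td_error) + self.eps_self
        if len(self.agent) < self.capacity:
            self.agent.append(transition)
            self.agent_prios.append(prio)
        else:
            self.agent[self.next_idx] = transition
            self.agent_prios[self.next_idx] = prio
            self.next_idx = (self.next_idx + 1) % self.capacity

    def sample(self, batch_size):
        """Draw a batch with probability proportional to priority."""
        pool = self.demos + self.agent
        prios = self.demo_prios + self.agent_prios
        total = sum(prios)
        weights = [p / total for p in prios]
        return random.choices(pool, weights=weights, k=batch_size)
```

Seeding the buffer with demonstrations before self-play is what lets the agent avoid much of the costly trial-and-error exploration the abstract describes; in practice one would also decay or re-weight the demonstration priorities as the agent's own experience accumulates.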

