Markov decision process-based analysis of rechargeable nodes in wireless sensor networks

Author(s): Sudip Misra, Rashmi Ranjan Rout, T. Raghu Vamsi Krishna, Patel Manish Kumar Manilal, Mohammad S. Obaidat
2015, Vol 17 (3), pp. 1239-1267
Author(s): Mohammad Abu Alsheikh, Dinh Thai Hoang, Dusit Niyato, Hwee-Pink Tan, Shaowei Lin

2021
Author(s): Haitham Afifi

We develop a Deep Reinforcement Learning (DeepRL) based multi-agent algorithm to efficiently control autonomous vehicles in the context of Wireless Sensor Networks (WSNs). In contrast to other applications, WSNs have two metrics for performance evaluation: first, quality of information (QoI), which measures the quality of the sensed data; second, quality of service (QoS), which measures the network's performance. As a use case, we consider wireless acoustic sensor networks: a group of speakers moves inside a room, and microphones installed on vehicles stream the audio data. We formulate an appropriate Markov Decision Process (MDP) and present, besides a centralized solution, a multi-agent Deep Q-learning solution to control the vehicles. We compare the proposed solutions to a naive heuristic and to two different real-world implementations: microphones being held or preinstalled. We show through simulations that the autonomous vehicles outperform both the real-world implementations and the proposed heuristic in terms of QoI and QoS. Additionally, we provide a theoretical analysis of the performance with respect to WSN dynamics, such as speed, room dimensions, and speakers' talking time.
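The abstract above references an MDP formulation and a multi-agent Deep Q-learning controller for microphone-carrying vehicles. Below is a minimal sketch of what such a setup could look like; it is not the authors' implementation. The grid discretization, observation layout, the QoI/QoS reward weighting, the network sizes, and all identifiers (GRID, QNet, Agent, etc.) are illustrative assumptions, with one independent DQN agent per vehicle.

```python
# Minimal multi-agent Deep Q-learning sketch (illustrative assumptions only):
# each agent controls one microphone-carrying vehicle on a discretized room
# grid, and the reward combines a hypothetical QoI term (distance to the
# active speaker) with a hypothetical QoS term (distance to the wireless sink).

import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

GRID = 10          # assumed room discretization (10 x 10 cells)
ACTIONS = 5        # stay, up, down, left, right
MOVES = [(0, 0), (0, 1), (0, -1), (-1, 0), (1, 0)]  # action index -> grid move
                                                     # applied by the (omitted) environment step


def reward(vehicle, speaker, sink, w_qoi=0.7, w_qos=0.3):
    """Illustrative reward: closer to the speaker -> higher QoI,
    closer to the sink -> higher QoS."""
    d_spk = np.linalg.norm(np.subtract(vehicle, speaker))
    d_sink = np.linalg.norm(np.subtract(vehicle, sink))
    return -(w_qoi * d_spk + w_qos * d_sink)


class QNet(nn.Module):
    """Small MLP mapping one agent's local observation to Q-values."""
    def __init__(self, obs_dim=6, n_actions=ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class Agent:
    """One independent Deep Q-learning agent per vehicle (multi-agent setup)."""
    def __init__(self, obs_dim=6):
        self.q = QNet(obs_dim)
        self.opt = optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)   # replay memory of (obs, act, rew, next_obs)
        self.eps, self.gamma = 0.1, 0.95

    def act(self, obs):
        # epsilon-greedy action selection
        if random.random() < self.eps:
            return random.randrange(ACTIONS)
        with torch.no_grad():
            return int(self.q(torch.tensor(obs, dtype=torch.float32)).argmax())

    def learn(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        obs, act, rew, nxt = map(np.array, zip(*batch))
        obs_t = torch.tensor(obs, dtype=torch.float32)
        nxt_t = torch.tensor(nxt, dtype=torch.float32)
        q = self.q(obs_t).gather(1, torch.tensor(act).unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = torch.tensor(rew, dtype=torch.float32) + \
                     self.gamma * self.q(nxt_t).max(1).values
        loss = nn.functional.mse_loss(q, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


# Typical usage (per time step, for each agent i):
#   a = agents[i].act(obs_i)
#   ... move vehicle i, observe reward r and next observation nxt_i ...
#   agents[i].buffer.append((obs_i, a, r, nxt_i))
#   agents[i].learn()
```

In this independent-learner design each agent treats the other vehicles as part of the environment; a centralized variant, as mentioned in the abstract, would instead act on the joint observation of all vehicles.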

