Traffic Congestion Prediction using Deep Reinforcement Learning in Vehicular Ad-Hoc Networks (VANETs)

2021 ◽  
Vol 13 (04) ◽  
pp. 01-19
Author(s):  
Chantakarn Pholpol ◽  
Teerapat Sanguankotchakorn

In recent years, a new type of wireless network called the vehicular ad-hoc network (VANET) has become a popular research topic. A VANET allows vehicles to communicate with each other and with roadside units, exchanging information such as vehicle velocity, location, and direction. In general, when many vehicles are likely to take a common route to the same destination, that route can become congested and should be avoided. It would be better if vehicles could accurately predict traffic congestion and then avoid it. Therefore, in this work, deep reinforcement learning in VANETs is proposed to enhance the ability to predict traffic congestion on the roads. Furthermore, different types of neural networks, namely the Convolutional Neural Network (CNN), Multilayer Perceptron (MLP), and Long Short-Term Memory (LSTM), are investigated and compared within this deep reinforcement learning model to discover the most effective one. Our proposed method is tested by simulation. The traffic scenarios are created using a traffic simulator called Simulation of Urban Mobility (SUMO) before being integrated with the deep reinforcement learning model. The simulation procedures, as well as the programming used, are described in detail. The performance of our proposed method is evaluated using two metrics: the average travelling time delay and the average waiting time delay of vehicles. According to the simulation results, the average travelling time delay and average waiting time delay gradually improve over multiple runs, since our proposed method receives feedback from the environment. In addition, the results without and with three different deep learning algorithms, i.e., CNN, MLP, and LSTM, are compared. It is evident that the deep reinforcement learning model works effectively when traffic density is neither too high nor too low.
In addition, it can be concluded that, ranked by effectiveness in descending order, the algorithms for traffic congestion prediction are MLP, CNN, and LSTM.
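The feedback loop the abstract describes — vehicles choosing routes, observing realized delays, and improving the two reported metrics over repeated runs — might be sketched in miniature as follows. The two-route environment, the linear congestion-delay model, and the tabular TD update are illustrative assumptions for this sketch, not the authors' SUMO/neural-network setup:

```python
import random

# Toy stand-in for the SUMO-based setup described in the abstract.
# The environment, delay model, and learning rule here are hypothetical
# simplifications, not the authors' implementation.
ROUTES = ["A", "B"]
BASE_TRAVEL = {"A": 10.0, "B": 12.0}   # free-flow travel time per route
CONGESTION_COST = 2.0                  # extra delay per vehicle already on the route

def run_episode(q, n_vehicles=20, eps=0.1, alpha=0.2):
    """One simulation run: vehicles enter sequentially, pick a route
    epsilon-greedily from shared Q-values (predicted delays), and each
    realized delay feeds back into Q -- the 'feedback from the
    environment' that improves later runs."""
    load = {r: 0 for r in ROUTES}
    total_travel = total_wait = 0.0
    for _ in range(n_vehicles):
        if random.random() < eps:
            route = random.choice(ROUTES)
        else:
            route = min(ROUTES, key=lambda r: q[r])   # lowest predicted delay
        wait = CONGESTION_COST * load[route]          # queueing behind earlier vehicles
        travel = BASE_TRAVEL[route] + wait
        q[route] += alpha * (travel - q[route])       # TD-style update from feedback
        load[route] += 1
        total_travel += travel
        total_wait += wait
    return total_travel / n_vehicles, total_wait / n_vehicles

random.seed(0)
q = {r: 0.0 for r in ROUTES}
history = [run_episode(q) for _ in range(50)]
avg_travel, avg_wait = history[-1]
```

Because the shared Q-values rise on a route as its load (and hence its delay) grows, later vehicles spill over to the alternative route, which is the congestion-avoidance behaviour the paper evaluates with its two delay metrics.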

Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 449
Author(s):  
Sifat Rezwan ◽  
Wooyeol Choi

Flying ad-hoc networks (FANETs) are one of the most important branches of wireless ad-hoc networks, consisting of multiple unmanned aerial vehicles (UAVs) performing assigned tasks and communicating with each other. Nowadays, FANETs are used for commercial and civilian applications such as handling traffic congestion, remote data collection, remote sensing, network relaying, and delivering products. However, some major challenges, such as adaptive routing protocols, flight trajectory selection, energy limitations, charging, and autonomous deployment, need to be addressed in FANETs. Several researchers have been working on these problems for the last few years. The main obstacles are the high mobility and unpredictable topology changes of FANETs. Hence, many researchers have introduced reinforcement learning (RL) algorithms in FANETs to overcome these shortcomings. In this study, we comprehensively survey and qualitatively compare the applications of RL in different FANET scenarios, such as routing protocols, flight trajectory selection, relaying, and charging. We also discuss open research issues that can provide researchers with clear and direct insights for further research.
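One of the surveyed application areas, RL-based routing, can be illustrated with tabular Q-learning over next-hop choices. The five-node topology, link costs, and parameters below are hypothetical, chosen only to show the general shape of such an approach:

```python
import random

# Hypothetical 5-node FANET topology: per-hop link costs (e.g., expected
# transmission delay). Node names and costs are illustrative, not from
# any surveyed protocol.
LINKS = {
    "u1": {"u2": 1.0, "u3": 4.0},
    "u2": {"u4": 1.0},
    "u3": {"u4": 1.0},
    "u4": {"gs": 1.0},   # gs = ground station (destination)
}
DEST = "gs"

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning: Q[node][next_hop] estimates the cost to reach
    the destination via that hop; the greedy policy is then a route."""
    random.seed(seed)
    q = {n: {nb: 0.0 for nb in nbrs} for n, nbrs in LINKS.items()}
    for _ in range(episodes):
        node = "u1"
        while node != DEST:
            nbrs = q[node]
            hop = (random.choice(list(nbrs)) if random.random() < eps
                   else min(nbrs, key=nbrs.get))       # epsilon-greedy next hop
            cost = LINKS[node][hop]
            future = 0.0 if hop == DEST else min(q[hop].values())
            nbrs[hop] += alpha * (cost + gamma * future - nbrs[hop])
            node = hop
    return q

q = train()
route = ["u1"]
while route[-1] != DEST:
    route.append(min(q[route[-1]], key=q[route[-1]].get))
```

In a real FANET the link costs would change as UAVs move, which is exactly why the surveyed works favour RL over static shortest-path routing: the same update rule keeps re-estimating costs online.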


2020 ◽  
Author(s):  
Ben Lonnqvist ◽  
Micha Elsner ◽  
Amelia R. Hunt ◽  
Alasdair D. F. Clarke

Experiments on the efficiency of human search sometimes reveal large differences between individual participants. We argue that reward-driven task-specific learning may account for some of this variation. In a computational reinforcement learning model of this process, a wide variety of strategies emerge, despite all simulated participants having the same visual acuity. We conduct a visual search experiment, and replicate previous findings that participant preferences about where to search are highly varied, with a distribution comparable to the simulated results. Thus, task-specific learning is an under-explored mechanism by which large inter-participant differences can arise.
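The reward-driven mechanism described above can be sketched as a simple softmax bandit: simulated participants with identical detection probabilities (the same "acuity") nonetheless end their training with different search-location preferences, purely because of trial-by-trial reward history. All parameters below are illustrative, not those of the authors' model:

```python
import random
import math

# Minimal sketch (illustrative, not the authors' model): identical
# simulated participants learn where to look first from per-trial
# reward, and their strategies diverge.
P_FIND = {"left": 0.5, "right": 0.5}   # same detection probability for everyone

def simulate_participant(seed, trials=200, alpha=0.3, tau=0.2):
    rng = random.Random(seed)
    value = {"left": 0.0, "right": 0.0}
    for _ in range(trials):
        # softmax choice of which side to search first
        w = {s: math.exp(value[s] / tau) for s in value}
        z = sum(w.values())
        side = "left" if rng.random() < w["left"] / z else "right"
        reward = 1.0 if rng.random() < P_FIND[side] else 0.0
        value[side] += alpha * (reward - value[side])   # value update from reward
    w = {s: math.exp(value[s] / tau) for s in value}
    return w["left"] / sum(w.values())   # final preference for searching left first

prefs = [simulate_participant(seed) for seed in range(30)]
```

Even though every simulated participant has the same parameters and the two sides are equally rewarding, the learned preferences spread out across participants, mirroring the paper's point that task-specific learning alone can generate large inter-participant variation.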

