Traffic State Prediction Using Convolutional Neural Network

Author(s):  
Ratchanon Toncharoen ◽  
Mongkut Piantanakulchai
2021 ◽  
Vol 11 (23) ◽  
pp. 11530
Author(s):  
Pangwei Wang ◽  
Xiao Liu ◽  
Yunfeng Wang ◽  
Tianren Wang ◽  
Juan Zhang

Real-time and reliable short-term traffic state prediction is one of the most critical technologies in intelligent transportation systems (ITS). However, in existing studies the traffic state is generally perceived by a single sensor, which makes it difficult to satisfy the requirements of real-time prediction in complex traffic networks. In this paper, a short-term traffic state prediction model based on a complex neural network is proposed for the vehicle-to-everything (V2X) communication environment. Firstly, a multi-source sensor traffic perception system based on V2X communication is proposed and designed. A mobile edge computing (MEC)-assisted architecture is then introduced into the V2X network to enhance the perceptual and computational abilities of the system. Moreover, a graph convolutional network (GCN), a gated recurrent unit (GRU), and a soft-attention mechanism are combined to extract the spatial and temporal features of the traffic state and integrate them for future prediction. Finally, an intelligent roadside test platform is demonstrated for real-time perception and computation of the traffic state. Comparison experiments show that the proposed method significantly improves prediction accuracy over existing neural network models that capture only one of the spatial or temporal features. In particular, compared with the GCN and GRU models at prediction horizons of 5, 10, 15, and 30 minutes, the root mean squared error (RMSE) is reduced by up to 39.53%, the largest reduction in prediction error observed.
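To make the described architecture concrete, the sketch below is a minimal, hypothetical PyTorch illustration of the idea in the abstract, not the authors' implementation: a graph convolution extracts spatial features across road sensors, a GRU models their temporal evolution, and a soft-attention layer weights the hidden states before the prediction head. The class name GCNGRUAttention, the layer sizes, and the identity adjacency matrix used in the usage example are assumptions for illustration only.

# Minimal sketch (assumed, not the paper's code) of GCN + GRU + soft attention
# for spatiotemporal traffic state prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNGRUAttention(nn.Module):
    def __init__(self, num_nodes, in_feats, gcn_out, gru_hidden, horizon):
        super().__init__()
        self.gcn_weight = nn.Linear(in_feats, gcn_out)        # graph convolution: A_hat X W
        self.gru = nn.GRU(num_nodes * gcn_out, gru_hidden, batch_first=True)
        self.attn = nn.Linear(gru_hidden, 1)                   # soft attention over time steps
        self.out = nn.Linear(gru_hidden, num_nodes * horizon)
        self.num_nodes, self.horizon = num_nodes, horizon

    def forward(self, x, adj):
        # x: (batch, T, N, F) sensor readings; adj: normalized adjacency (N, N)
        b, t, n, f = x.shape
        h = torch.relu(torch.einsum("ij,btjf->btif", adj, self.gcn_weight(x)))  # spatial features
        h, _ = self.gru(h.reshape(b, t, -1))                   # temporal features, (b, T, H)
        scores = F.softmax(self.attn(h), dim=1)                # attention weights over time, (b, T, 1)
        context = (scores * h).sum(dim=1)                      # attention-weighted summary, (b, H)
        return self.out(context).view(b, self.horizon, self.num_nodes)

# Usage example with random data: 20 road sensors, 12 past steps, 3 future steps.
model = GCNGRUAttention(num_nodes=20, in_feats=2, gcn_out=16, gru_hidden=64, horizon=3)
x = torch.randn(8, 12, 20, 2)
adj = torch.eye(20)            # placeholder adjacency; a real graph would encode road connectivity
pred = model(x, adj)           # (8, 3, 20) predicted traffic states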


2019 ◽  
Vol 29 (10) ◽  
pp. 103125 ◽  
Author(s):  
Dongwei Xu ◽  
Hongwei Dai ◽  
Yongdong Wang ◽  
Peng Peng ◽  
Qi Xuan ◽  
...  

2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important problem. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic image deblurring problem, which assumes that the point spread function (PSF) is known and spatially invariant. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Although CNNs can handle complex variations in unknown images, conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a CNN-based non-blind deconvolution framework that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is a network architecture that preserves both large and small features in the image. The second is a training dataset created to preserve details. The third is that we extend the images to minimize the effect of large ringing at the image borders. In our experiments with three kinds of large PSFs, our method produced high-precision results both quantitatively and qualitatively.
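As a rough illustration of the border-extension idea, the following is a minimal PyTorch sketch, not the authors' network: the blurred input is extended by reflection padding so that ringing from a large PSF falls outside the region of interest, a small encoder-decoder with a residual skip connection restores both coarse and fine structure, and the output is cropped back to the original size. The DeconvCNN name, layer widths, and padding size are assumptions for illustration only.

# Minimal sketch (assumed, not the paper's network) of non-blind deconvolution
# with image extension to suppress ringing at the borders.
import torch
import torch.nn as nn

class DeconvCNN(nn.Module):
    def __init__(self, channels=1, pad=32):
        super().__init__()
        self.pad = pad
        self.extend = nn.ReflectionPad2d(pad)                  # image extension against border ringing
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # downsampling captures large-scale structure
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, blurred):
        x = self.extend(blurred)                               # pad before restoration
        residual = self.decoder(self.encoder(x))
        restored = x + residual                                # skip connection preserves fine detail
        return restored[..., self.pad:-self.pad, self.pad:-self.pad]  # crop back to original size

# Usage example on a random single-channel "blurred" image.
net = DeconvCNN()
blurred = torch.randn(1, 1, 128, 128)
restored = net(blurred)        # (1, 1, 128, 128)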

