End-to-End Learning of Semantic Grid Estimation Deep Neural Network with Occupancy Grids

2019 ◽  
Vol 07 (03) ◽  
pp. 171-181
Author(s):  
Özgür Erkent ◽  
Christian Wolf ◽  
Christian Laugier

We propose the semantic grid, a spatial 2D map of the environment around an autonomous vehicle, consisting of cells that represent the semantic information of the corresponding region, such as car, road, vegetation, or bike. It integrates an occupancy grid, which computes the grid states with a Bayesian filter approach, and semantic segmentation information from monocular RGB images, obtained with a deep neural network. The network fuses the information and can be trained in an end-to-end manner. The output of the neural network is refined with a conditional random field. The proposed method is evaluated on several datasets (KITTI, Inria-Chroma and SYNTHIA), and different deep neural network architectures are compared.
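The Bayesian-filter occupancy grid update mentioned in this abstract is commonly implemented in log-odds form; the following is a minimal sketch of that standard technique, with illustrative function names and sensor-model probabilities that are assumptions, not the authors' code:

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def update_occupancy_grid(log_odds, measurement, p_hit=0.7, p_miss=0.4):
    """One Bayesian log-odds update of an occupancy grid.

    log_odds    : 2D array of current per-cell log-odds
    measurement : 2D array, 1 where the sensor saw an obstacle,
                  0 where it saw free space, np.nan for unobserved cells
    """
    update = np.zeros_like(log_odds)
    update[measurement == 1] = logit(p_hit)   # occupied evidence (> 0)
    update[measurement == 0] = logit(p_miss)  # free-space evidence (< 0)
    return log_odds + update                  # unobserved cells unchanged

def to_probability(log_odds):
    """Recover the posterior occupancy probability per cell."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```

The log-odds form turns the Bayesian product of evidence into a simple sum, which is why it is the usual choice for grid filtering.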

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2731
Author(s):  
Yunbo Rao ◽  
Menghan Zhang ◽  
Zhanglin Cheng ◽  
Junmin Xue ◽  
Jiansu Pu ◽  
...  

Accurate segmentation of entity categories is a critical step in 3D scene understanding. This paper presents a fast deep neural network model with a Dense Conditional Random Field (DCRF) as a post-processing method, which performs accurate semantic segmentation of 3D point cloud scenes. On this basis, a compact but flexible framework is introduced that segments the semantics of point clouds concurrently, contributing to more precise segmentation. Moreover, based on the semantic labels, a novel DCRF model is elaborated to refine the segmentation result. In addition, without any sacrifice of accuracy, we optimize the original point cloud data, allowing the network to handle less data. In the experiments, the proposed method is evaluated comprehensively on four indicators, demonstrating its superiority.
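Dense CRF post-processing of the kind this abstract describes typically minimizes a fully connected pairwise energy; the formulation below is the standard dense-CRF model and is an assumption about the paper's setup, not taken from it:

```latex
E(\mathbf{x}) = \sum_{i} \psi_u(x_i) \;+\; \sum_{i<j} \psi_p(x_i, x_j),
\qquad
\psi_p(x_i, x_j) = \mu(x_i, x_j) \sum_{m} w^{(m)}\, k^{(m)}(\mathbf{f}_i, \mathbf{f}_j)
```

Here $\psi_u$ is the unary potential from the network's per-point predictions, $\mu$ is a label-compatibility function, and the $k^{(m)}$ are Gaussian kernels over point features such as position and color.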


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 725
Author(s):  
Yantong Chen ◽  
Yuyang Li ◽  
Junsheng Wang

In remote-sensing images, a detected oil-spill area is usually affected by spot noise and uneven intensity, which leads to poor segmentation of the oil-spill area. This paper introduced a deep semantic segmentation method that combined a deep convolutional neural network with a fully connected conditional random field to form an end-to-end connection. Built on ResNet, the deep convolutional neural network first produced a coarse segmentation of the multisource remote-sensing input image. Then, using Gaussian pairwise potentials and the mean-field approximation, the conditional random field was formulated as a recurrent neural network. The oil-spill area on the sea surface was monitored with multisource remote-sensing images and estimated from optical images. We experimentally compared the proposed method with other models on a dataset built from multisensor satellite images. Results showed that the method improved classification accuracy and captured fine details of the oil-spill area. The mean intersection over union (mIoU) was 82.1%, and the monitoring effect was markedly improved.
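The mean-field approximation this abstract refers to updates per-pixel label marginals iteratively; a minimal sketch of one such update with a Potts compatibility follows (the precomputed kernel matrix and function names are simplifying assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_field_iteration(unary, K, Q, w=1.0):
    """One mean-field update for a fully connected CRF.

    unary : (N, L) negative log unary potentials per pixel
    K     : (N, N) pairwise kernel, e.g. Gaussian over pixel positions
    Q     : (N, L) current label marginals (rows sum to 1)
    w     : weight of the pairwise term

    Uses a Potts model: each neighbor with a *different* label
    contributes cost w, scaled by the kernel.
    """
    message = K @ Q  # aggregate neighbor beliefs per label
    pairwise = w * (message.sum(axis=1, keepdims=True) - message)
    return softmax(-(unary + pairwise), axis=1)
```

Unrolling a fixed number of such iterations, with each step expressed as differentiable operations, is what allows the CRF to be trained as a recurrent neural network.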


Author(s):  
Baiyu Peng ◽  
Qi Sun ◽  
Shengbo Eben Li ◽  
Dongsuk Kum ◽  
Yuming Yin ◽  
...  

Recent years have seen the rapid development of autonomous driving systems, which are typically designed in either a hierarchical or an end-to-end architecture. The hierarchical architecture is complicated and hard to design, while the end-to-end architecture is more promising due to its simple structure. This paper puts forward an end-to-end autonomous driving method based on the deep reinforcement learning algorithm Dueling Double Deep Q-Network, making it possible for the vehicle to learn end-to-end driving by itself. The paper first proposes an architecture for the end-to-end lane-keeping task. Unlike the traditional image-only state space, the presented state space is composed of both camera images and vehicle motion information. Then the corresponding dueling neural network structure is introduced, which reduces variance and improves sampling efficiency. Thirdly, the proposed method is applied to The Open Racing Car Simulator (TORCS) to demonstrate its performance, where it surpasses human drivers. Finally, the saliency map of the neural network is visualized, indicating that the trained network drives by observing the lane lines. A video of the presented work is available online: https://youtu.be/76ciJmIHMD8 or https://v.youku.com/v_show/id_XNDM4ODc0MTM4NA==.html.
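The dueling architecture named in this abstract splits the Q-function into a state-value head and an action-advantage head, and Double DQN decouples action selection from evaluation. A minimal sketch of these two standard aggregation steps (array names and shapes are illustrative assumptions, not the paper's code):

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Combine the dueling network's two heads into Q-values.

    value      : (batch, 1) state-value head V(s)
    advantages : (batch, n_actions) advantage head A(s, a)

    Subtracting the mean advantage makes the decomposition
    identifiable: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    """
    return value + advantages - advantages.mean(axis=1, keepdims=True)

def double_dqn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double DQN bootstrap target: select the next action with the
    online network, but evaluate it with the target network."""
    best_action = q_online_next.argmax(axis=1)
    q_eval = q_target_next[np.arange(len(best_action)), best_action]
    return reward + gamma * (1.0 - done) * q_eval
```

Combining both tricks in one agent gives the Dueling Double Deep Q-Network the abstract builds on.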


Author(s):  
Mostafa H. Tawfeek ◽  
Karim El-Basyouny

Safety Performance Functions (SPFs) are regression models used to predict the expected number of collisions as a function of various traffic and geometric characteristics. One of the integral components in developing SPFs is the availability of accurate exposure factors, that is, annual average daily traffic (AADT). However, AADTs are not often available for minor roads at rural intersections. This study aims to develop a robust AADT estimation model using a deep neural network. A total of 1,350 rural four-legged, stop-controlled intersections from the Province of Alberta, Canada, were used to train the neural network. The results of the deep neural network model were compared with the traditional estimation method, which uses linear regression. The results indicated that the deep neural network model improved the estimation of minor roads’ AADT by 35% when compared with the traditional method. Furthermore, SPFs developed using linear regression resulted in models with statistically insignificant AADTs on minor roads. Conversely, the SPF developed using the neural network provided a better fit to the data with both AADTs on minor and major roads being statistically significant variables. The findings indicated that the proposed model could enhance the predictive power of the SPF and therefore improve the decision-making process since SPFs are used in all parts of the safety management process.


2020 ◽  
Vol 174 ◽  
pp. 505-517
Author(s):  
Qingqiao Hu ◽  
Siyang Yin ◽  
Huiyang Ni ◽  
Yisiyuan Huang
