Edge-Reinforced Convolutional Neural Network for Road Detection in Very-High-Resolution Remote Sensing Imagery

2020, Vol 86 (3), pp. 153-160
Author(s): Xiaoyan Lu, Yanfei Zhong, Zhuo Zheng, Ji Zhao, Liangpei Zhang

Road detection in very-high-resolution remote sensing imagery is a hot research topic. However, the high resolution results in highly complex data distributions, which introduce considerable noise into road detection; for example, shadows and occlusions caused by roadside disturbances make it difficult to recognize roads accurately. In this article, a novel edge-reinforced convolutional neural network, combining multiscale feature extraction and edge reinforcement, is proposed to alleviate this problem. First, multiscale feature extraction is applied in the center part of the proposed network to extract multiscale context information. Then edge reinforcement, which applies a simplified U-Net to learn additional edge information, is used to restore the road information. The two operations can be combined with different convolutional neural networks. Finally, two public road datasets are adopted to verify the effectiveness of the proposed approach, with experimental results demonstrating its superiority.
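The edge-reinforcement branch described above needs edge labels to learn from. The abstract does not state how those labels are derived, but a common way to obtain edge supervision from a binary road mask is a gradient filter such as Sobel; the sketch below (pure Python, names like `sobel_edges` are illustrative, not from the paper) shows that idea on a tiny mask.

```python
# Sketch: deriving an edge-supervision map from a binary road mask with a
# Sobel gradient. This is one common way to build edge labels; the paper's
# exact procedure is not given in the abstract, so treat this as illustrative.

def sobel_edges(mask):
    """Return a binary edge map for a 2D 0/1 road mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):          # borders stay 0 (no full 3x3 window)
        for j in range(1, w - 1):
            gx = sum(kx[u][v] * mask[i - 1 + u][j - 1 + v]
                     for u in range(3) for v in range(3))
            gy = sum(ky[u][v] * mask[i - 1 + u][j - 1 + v]
                     for u in range(3) for v in range(3))
            edges[i][j] = 1 if (gx * gx + gy * gy) > 0 else 0
    return edges

# A 5x5 mask with a one-pixel-wide vertical road in the middle column:
road = [[0, 0, 1, 0, 0] for _ in range(5)]
edge_map = sobel_edges(road)
```

On this toy mask the gradient fires on both sides of the road strip and is zero at its center, which is exactly the kind of boundary signal an edge branch is trained to reproduce.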

2018, Vol 10 (9), pp. 1461
Author(s): Yongyang Xu, Zhong Xie, Yaxing Feng, Zhanlong Chen

The road network plays an important role in the modern traffic system, and as development occurs, the road structure changes frequently. Owing to advancements in high-resolution remote sensing and the success of deep-learning-based semantic segmentation in computer vision, extracting the road network from high-resolution remote sensing imagery is becoming increasingly popular and has become a new tool for updating the geospatial database. Considering that the training samples of a deep convolutional neural network are clipped to a fixed size, so that roads run through each sample, and that different road types have different widths, this work provides a segmentation model based on densely connected convolutional networks (DenseNet) that introduces local and global attention units. The aim of this work is to propose a novel road extraction method that can efficiently extract the road network from remote sensing imagery using both local and global information. A dataset from Google Earth was used to validate the method, and experiments showed that the proposed deep convolutional neural network can extract the road network accurately and effectively. The method also achieves a higher harmonic mean of precision and recall (F1 score) than other machine learning and deep learning methods.
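The abstract names "local and global attention units" without giving their form. One common realization of global attention is channel reweighting by a globally pooled statistic (squeeze-and-excitation style); the minimal sketch below assumes that form purely for illustration, and omits the learned bottleneck layers a real unit would contain.

```python
import math

# Sketch of a global-attention unit as channel reweighting: pool each feature
# channel to one global statistic, squash it through a sigmoid, and rescale
# the channel by that gate. An assumed SE-style form, not the paper's design;
# the learned fully connected layers of a real unit are omitted here.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def global_attention(feature_maps):
    """feature_maps: list of channels, each a 2D list of floats.
    Returns the same channels rescaled by a per-channel global gate."""
    gated = []
    for chan in feature_maps:
        n = len(chan) * len(chan[0])
        pooled = sum(sum(row) for row in chan) / n   # global average pool
        gate = sigmoid(pooled)                       # per-channel gate in (0, 1)
        gated.append([[v * gate for v in row] for row in chan])
    return gated

# Two toy 2x2 channels: a quiet one and a strongly activated one.
channels = [[[0.0, 0.0], [0.0, 0.0]],
            [[4.0, 4.0], [4.0, 4.0]]]
gated = global_attention(channels)
```

The effect is that strongly activated channels pass through almost unchanged while weak ones are damped, which is the "global information" role such a unit plays in the segmentation network.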


2021, Vol 13 (2), pp. 239
Author(s): Zhenfeng Shao, Zifan Zhou, Xiao Huang, Ya Zhang

Automatic extraction of the road surface and road centerline from very high-resolution (VHR) remote sensing images has always been a challenging task in the field of feature extraction. Most existing road datasets are based on data with simple and clear backgrounds under ideal conditions, such as images derived from Google Earth. Therefore, studies on road surface extraction and road centerline extraction under complex scenes are insufficient. Meanwhile, most existing efforts address these two tasks separately, without considering the possible joint extraction of road surface and centerline. With the introduction of multitask convolutional neural network models, it is possible to carry out these two tasks simultaneously by facilitating information sharing within a multitask deep learning model. In this study, we first design a challenging dataset using remote sensing images from the GF-2 satellite. The dataset contains complex road scenes with manually annotated images. We then propose a two-task, end-to-end convolutional neural network, termed Multitask Road-related Extraction Network (MRENet), for road surface extraction and road centerline extraction. We take the features extracted from the road surface as the condition for centerline extraction, and the information transmission and parameter sharing between the two tasks compensate for the potential problem of insufficient road centerline samples. In the network design, we use atrous convolutions and a pyramid scene parsing pooling module (PSP pooling), aiming to expand the network receptive field, integrate multilevel features, and obtain more abundant information. In addition, we use a weighted binary cross-entropy function to alleviate the background imbalance problem. Experimental results show that the proposed algorithm outperforms several comparative methods in terms of classification precision and visual interpretation.
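The weighted binary cross-entropy mentioned above counters the fact that road pixels are far rarer than background pixels. A standard per-class weighting looks like the sketch below; the function name `weighted_bce` and the specific weights are illustrative assumptions, since the abstract does not state the paper's exact formulation.

```python
import math

# Sketch: weighted binary cross-entropy for a road/background imbalance.
# Positive (road) pixels get a larger weight so the rare class is not
# drowned out. The weights w_pos/w_neg here are illustrative, not the
# paper's values.

def weighted_bce(preds, labels, w_pos=10.0, w_neg=1.0, eps=1e-7):
    """Mean weighted BCE over parallel lists of probabilities and 0/1 labels."""
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(w_pos * y * math.log(p)
                   + w_neg * (1 - y) * math.log(1.0 - p))
    return total / len(preds)
```

With `w_pos` larger than `w_neg`, a missed road pixel costs more than a missed background pixel, pushing the network away from the trivial all-background prediction.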

