surface extraction
Recently Published Documents

Total documents: 197 (last five years: 31)
H-index: 23 (last five years: 3)

2021 · Vol. 13 (2) · pp. 239
Author(s): Zhenfeng Shao, Zifan Zhou, Xiao Huang, Ya Zhang

Automatic extraction of the road surface and road centerline from very high-resolution (VHR) remote sensing images has long been a challenging task in the field of feature extraction. Most existing road datasets are built from data with simple, clear backgrounds under ideal conditions, such as images derived from Google Earth. As a result, studies on road surface and road centerline extraction in complex scenes remain insufficient. Moreover, most existing efforts address these two tasks separately, without considering their possible joint extraction. With multitask convolutional neural network models, the two tasks can be carried out simultaneously through information sharing within a single deep learning model. In this study, we first build a challenging dataset from GF-2 satellite remote sensing images, containing complex road scenes with manually annotated labels. We then propose a two-task, end-to-end convolutional neural network, termed Multitask Road-related Extraction Network (MRENet), for road surface extraction and road centerline extraction. The features extracted for the road surface serve as a condition for centerline extraction, and the information transmission and parameter sharing between the two tasks compensate for the potential shortage of road centerline samples. In the network design, we use atrous convolutions and a pyramid scene parsing pooling module (PSP pooling) to expand the receptive field, integrate multilevel features, and obtain richer information. In addition, we use a weighted binary cross-entropy loss to alleviate the background imbalance problem. Experimental results show that the proposed algorithm outperforms several comparison methods in both classification precision and visual interpretation.
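
The following is a minimal, hypothetical PyTorch sketch of two ideas described in the abstract: a shared encoder using atrous (dilated) convolutions that feeds two prediction heads (road surface and road centerline), and a weighted binary cross-entropy loss that up-weights the sparse road pixels. The layer sizes, the pos_weight value, and the way the surface features condition the centerline head are illustrative assumptions, not the authors' exact MRENet design (the PSP pooling module is omitted for brevity).

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskRoadNet(nn.Module):
    """Toy two-headed network: shared atrous-convolution encoder,
    separate heads for road surface and road centerline."""
    def __init__(self, in_channels=3, base=32):
        super().__init__()
        # Shared encoder: dilated convolutions enlarge the receptive
        # field without reducing spatial resolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Task 1: road-surface head.
        self.surface_head = nn.Conv2d(base, 1, 1)
        # Task 2: centerline head, conditioned on the surface prediction
        # by concatenating the surface logits with the shared features
        # (an assumed, simplified form of the conditioning described above).
        self.centerline_head = nn.Conv2d(base + 1, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)
        surface_logits = self.surface_head(feats)
        centerline_logits = self.centerline_head(
            torch.cat([feats, surface_logits], dim=1))
        return surface_logits, centerline_logits

def weighted_bce(logits, target, pos_weight=10.0):
    """Weighted binary cross-entropy: road pixels are rare, so the
    positive class is up-weighted relative to the dominant background."""
    w = torch.tensor([pos_weight], device=logits.device)
    return F.binary_cross_entropy_with_logits(logits, target, pos_weight=w)

# Toy usage: one 256x256 RGB tile with binary surface / centerline masks.
if __name__ == "__main__":
    net = MultitaskRoadNet()
    img = torch.randn(1, 3, 256, 256)
    surface_gt = torch.zeros(1, 1, 256, 256)
    centerline_gt = torch.zeros(1, 1, 256, 256)
    s_logits, c_logits = net(img)
    loss = weighted_bce(s_logits, surface_gt) + weighted_bce(c_logits, centerline_gt)
    loss.backward()

Summing the two per-task losses is one simple way to train both heads jointly; the key point the sketch illustrates is that the encoder parameters receive gradients from both the surface and the centerline objectives, which is how parameter sharing can compensate for scarce centerline samples.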

