Improved Edge Detection Algorithm of High-Resolution Remote Sensing Images based on Fast Guided Filter

Author(s):  
Hanmin Ye ◽  
Min Ding ◽  
Shili Yan

Sensors ◽ 
2020 ◽ 
Vol 20 (5) ◽ 
pp. 1465
Author(s):  
Lili Zhang ◽  
Jisen Wu ◽  
Yu Fan ◽  
Hongmin Gao ◽  
Yehong Shao

In this paper, we consider building extraction from high-spatial-resolution remote sensing images. At present, most building extraction methods rely on handcrafted features; however, the diversity and complexity of buildings mean that such methods still face great challenges, so methods based on deep learning have recently been proposed. In this paper, a building extraction framework based on a convolutional neural network and an edge detection algorithm, called Mask R-CNN Fusion Sobel, is proposed. Because of the outstanding achievements of Mask R-CNN in the field of image segmentation, this paper improves it and then applies it to building extraction from remote sensing images. Our method consists of three parts. First, the convolutional neural network is used for coarse localization and pixel-level classification, and the problem of false and missed extractions is mitigated by automatically learning semantic features. Second, the Sobel edge detection algorithm is used to delineate building edges accurately, addressing the poor edge localization and object integrity of deep convolutional neural networks in semantic segmentation. Third, buildings are extracted by a fusion algorithm. We apply the proposed framework to building extraction from high-resolution remote sensing images acquired by the Chinese satellite GF-2, and the experiments show that the average IoU (intersection over union) of the proposed method was 88.7% and the average Kappa was 87.8%. Therefore, our method can be applied to the recognition and segmentation of complex buildings and outperforms classical methods in accuracy.
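The two building blocks named in the abstract, the Sobel operator and the IoU metric, can be sketched on toy data. This is not the authors' Mask R-CNN Fusion Sobel pipeline; the tiny image, border handling, and flat binary masks below are illustrative assumptions only.

```python
import math

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image (interior pixels only)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

def iou(pred, truth):
    """Intersection over union of two flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# A vertical step edge: Sobel responds only near the boundary columns.
img = [[0, 0, 1, 1]] * 4
mag = sobel_magnitude(img)
print(mag[1])  # [0.0, 4.0, 4.0, 0.0]

# One pixel of a 4-pixel building missed by the prediction.
print(iou([1, 1, 0, 1], [1, 1, 1, 1]))  # 0.75
```

In the paper's fusion scheme, an edge map like `mag` would refine the CNN's coarse mask before metrics such as IoU and Kappa are computed against ground truth.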


Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6742
Author(s):  
Yongshi Jie ◽  
Xianhua Ji ◽  
Anzhi Yue ◽  
Jingbo Chen ◽  
Yupeng Deng ◽  
...  

Distributed photovoltaic power stations are an effective way to develop and utilize solar energy resources. Using high-resolution remote sensing images to obtain the locations, distribution, and areas of distributed photovoltaic power stations over a large region is important to energy companies, government departments, and investors. In this paper, a deep convolutional neural network was used to extract distributed photovoltaic power stations from high-resolution remote sensing images automatically, accurately, and efficiently. Based on a semantic segmentation model with an encoder-decoder structure, a gated fusion module was introduced to address the difficulty of identifying small photovoltaic panels. Further, to resolve the blurred edges in the segmentation results and the tendency of adjacent photovoltaic panels to merge, this work combines an edge detection network and a semantic segmentation network in a multi-task learning setup to extract the boundaries of photovoltaic panels in a refined manner. Comparative experiments conducted on the Duke California Solar Array data set and a self-constructed Shanghai Distributed Photovoltaic Power Station data set show that, compared with SegNet, LinkNet, UNet, and FPN, the proposed method obtained the highest identification accuracy on both data sets, with F1-scores of 84.79% and 94.03%, respectively. These results indicate that effectively combining multi-layer features with a gated fusion module and introducing an edge detection network to refine the segmentation improve the accuracy of distributed photovoltaic power station identification.
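The gating idea behind the fusion module can be illustrated as a sigmoid-weighted elementwise blend of a fine (low-level) and a coarse (high-level) feature response. The paper's actual module sits inside an encoder-decoder CNN and is learned end to end; the function name, scalar features, and hand-picked gate logits below are assumptions for illustration only.

```python
import math

def sigmoid(x):
    """Logistic function mapping a logit to a gate value in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(low_feat, high_feat, gate_logits):
    """Blend two feature vectors: out = g * low + (1 - g) * high.

    A large positive logit lets fine low-level detail (e.g. small
    panels) pass through; a large negative logit keeps the coarse
    high-level semantics instead.
    """
    return [sigmoid(z) * lo + (1.0 - sigmoid(z)) * hi
            for lo, hi, z in zip(low_feat, high_feat, gate_logits)]

low = [1.0, 0.0, 1.0]    # fine edge/detail response
high = [0.0, 1.0, 1.0]   # coarse semantic response
fused = gated_fuse(low, high, [10.0, -10.0, 0.0])
print([round(v, 3) for v in fused])  # [1.0, 1.0, 1.0]
```

Each position keeps whichever source its gate favors, which is the mechanism by which multi-layer features are combined selectively rather than simply summed.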

