Deep Extraction of Cropland Parcels from Very High-Resolution Remotely Sensed Imagery

Author(s): Liegang Xia ◽ Jiancheng Luo ◽ Yingwei Sun ◽ Haiping Yang
2010 ◽ pp. 519-527
Author(s): Silvia Di Paolo ◽ Diego Giuliarelli ◽ Barbara Ferrari ◽ Anna Barbati ◽ Piermaria Corona

2020 ◽ Vol 12 (5) ◽ pp. 862
Author(s): Sicong Liu ◽ Qing Hu ◽ Xiaohua Tong ◽ Junshi Xia ◽ Qian Du ◽ ...

In this article, a novel feature selection-based multi-scale superpixel-based guided filter (FS-MSGF) method for the classification of very-high-resolution (VHR) remotely sensed imagery is proposed. In contrast to the original guided filter (GF) algorithm used for classification, the guidance image in the proposed approach is constructed from a superpixel-level segmentation. By taking object boundaries and inner homogeneity into account, the superpixel-level guidance image better depicts the geometrical information of land-cover objects in VHR images. High-dimensional multi-scale guided filter (MSGF) features are then generated, which better model the multi-scale information of the land-cover classes. In addition, to improve computational efficiency without loss of accuracy, a subset of the MSGF features containing the most distinctive information is automatically selected using an unsupervised feature selection method. Quantitative and qualitative classification results on two QuickBird datasets covering the Zurich urban scene are provided and analyzed, demonstrating that the proposed method outperforms state-of-the-art reference techniques in both classification accuracy and computational efficiency.
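As a rough illustration of the idea, the sketch below builds a superpixel-level guidance image, computes guided-filter features at several radii, and keeps a subset of them. It is not the authors' implementation: SLIC superpixels as the segmentation, the standard box-filter guided filter, and variance ranking in place of the paper's unsupervised feature selection method are all assumptions introduced here for illustration.

```python
# Minimal sketch of superpixel-guided, multi-scale guided filtering (assumptions noted above).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.segmentation import slic


def guided_filter(guide, src, radius, eps):
    """Single-channel guided filter; box filtering done with uniform_filter."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(guide), box(src)
    var_I = box(guide * guide) - mean_I ** 2
    cov_Ip = box(guide * src) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)


def superpixel_guidance(image, n_segments=2000):
    """Guidance image: every pixel takes the mean intensity of its superpixel."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    gray = image.astype(np.float64).mean(axis=-1)
    sums = np.bincount(labels.ravel(), weights=gray.ravel())
    counts = np.bincount(labels.ravel())
    return (sums / counts)[labels]


def msgf_features(image, radii=(2, 4, 8, 16), eps=1e-3):
    """Multi-scale feature stack: one guided-filter output per band per radius."""
    guide = superpixel_guidance(image)
    bands = np.moveaxis(image.astype(np.float64), -1, 0)
    feats = [guided_filter(guide, b, r, eps) for r in radii for b in bands]
    return np.stack(feats, axis=-1)


def select_features(feats, k=10):
    """Stand-in for the unsupervised selection step: keep the k highest-variance features."""
    variances = feats.reshape(-1, feats.shape[-1]).var(axis=0)
    return feats[..., np.argsort(variances)[::-1][:k]]
```

The selected feature stack can then be fed, together with the original bands, to any per-pixel classifier such as an SVM or a random forest.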


2019 ◽ Vol 11 (24) ◽ pp. 2916
Author(s): Erzhu Li ◽ Alim Samat ◽ Wei Liu ◽ Cong Lin ◽ Xuyu Bai

Detailed land use and land cover (LULC) information is an important input for land use surveys and applications in the earth sciences. Therefore, LULC classification using very-high-resolution remotely sensed imagery has been an active topic in the remote sensing community. However, extracting LULC information from such imagery remains challenging, because single-level features struggle to describe the individual characteristics of the various LULC categories. Traditional pixel-wise or spectral-spatial methods focus on low-level feature representations of the target LULC categories. Deep convolutional neural networks, in turn, offer great potential for extracting high-level features that describe objects and have been successfully applied to scene understanding and classification; however, existing studies have paid little attention to constructing multi-level feature representations that better characterize each category. In this paper, a multi-level feature representation framework is designed to extract more robust features for the complex LULC classification task using very-high-resolution remotely sensed imagery. To this end, spectral reflectance, morphological profiles, and morphological attribute profiles are used to describe the pixel-level and neighborhood-level information. Furthermore, a novel object-based convolutional neural network (CNN) is proposed to extract scene-level information; it combines the advantages of object-based analysis and CNNs and can perform multi-scale analysis at the scene level. A random forest is then employed to carry out the final classification using the multi-level features. The proposed method was validated on three challenging remotely sensed datasets, including a hyperspectral image and two multispectral images with very high spatial resolution, and achieved excellent classification performance.
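A minimal sketch of the fusion idea follows: pixel-level spectral values, neighborhood-level morphological profiles, and a per-object scene-level descriptor are concatenated and classified with a random forest. The scene-level part is only a placeholder: in the paper it is an object-based CNN, whereas here a hypothetical per-object statistic (`object_descriptor`) is used, and plain opening/closing profiles stand in for the attribute profiles. Both substitutions are assumptions, not the authors' method.

```python
# Minimal sketch of multi-level feature fusion with a random forest (assumptions noted above).
import numpy as np
from skimage.morphology import opening, closing, disk
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier


def neighborhood_features(band, radii=(1, 3, 5)):
    """Morphological opening/closing profile over increasing structuring-element radii."""
    return np.stack([op(band, disk(r)) for r in radii for op in (opening, closing)], axis=-1)


def object_descriptor(image, n_segments=500):
    """Scene-level stand-in: broadcast per-object mean spectra back to every pixel.
    In the paper this role is played by an object-based CNN; the mean is only a placeholder."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    feats = np.zeros(image.shape, dtype=np.float64)
    for lab in np.unique(labels):
        mask = labels == lab
        feats[mask] = image[mask].mean(axis=0)
    return feats


def classify_multilevel(image, train_mask, train_labels):
    """Concatenate pixel-, neighborhood-, and scene-level features, then fit a random forest."""
    pixel = image.astype(np.float64)
    neigh = np.concatenate(
        [neighborhood_features(image[..., b]) for b in range(image.shape[-1])], axis=-1)
    scene = object_descriptor(image)
    X = np.concatenate([pixel, neigh, scene], axis=-1)
    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
    rf.fit(X[train_mask], train_labels)  # train_mask: 2-D boolean array of labeled pixels
    h, w, f = X.shape
    return rf.predict(X.reshape(-1, f)).reshape(h, w), rf
```

The design simply stacks the three feature levels channel-wise so that the random forest can weigh them jointly; any other per-pixel classifier could be substituted without changing the feature construction.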

