Training sample refining method using an adaptive neighbor to improve the classification performance of very high-spatial resolution remote sensing images

2019 ◽  
Vol 13 (03) ◽  
pp. 1 ◽  
Author(s):  
ZhiYong Lv ◽  
GuangFei Li ◽  
Jón Atli Benediktsson ◽  
Zhou Zhang ◽  
JiXing Yan
2018 ◽  
Vol 10 (11) ◽  
pp. 1737 ◽  
Author(s):  
Jinchao Song ◽  
Tao Lin ◽  
Xinhu Li ◽  
Alexander V. Prishchepov

Fine-scale, accurate intra-urban functional zones (urban land use) are important for applications that rely on exploring urban dynamics and complexity. However, current methods of mapping functional zones in built-up areas with high spatial resolution remote sensing images are incomplete due to a lack of social attributes. To address this issue, this paper explores a novel approach to mapping urban functional zones by integrating points of interest (POIs), which carry social properties, with very high spatial resolution remote sensing imagery, which carries natural attributes, and classifying urban functions as residence zones, transportation zones, convenience shops, shopping centers, factory zones, companies, and public service zones. First, non-built-up and built-up areas were classified using high spatial resolution remote sensing images. Second, the built-up areas were segmented using an object-based approach that exploits building rooftop characteristics (reflectance and shape). At the same time, the functional POIs within each segment were identified to determine the functional attribute of the segmented polygon. Third, the functional values, i.e., the mean priority of the functions in a road-based parcel, were calculated from the functional segments and segment weight coefficients. The method was demonstrated on Xiamen Island, China, achieving an overall accuracy of 78.47% and a kappa coefficient of 74.52%. The proposed approach can easily be applied in other parts of the world where social data and high spatial resolution imagery are available, and it improves the accuracy of automatically mapping urban functional zones from remote sensing imagery. It also has the potential to provide large-scale land-use information.
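The parcel-level aggregation in the third step can be sketched as a weighted vote over the functional labels of the segments falling inside each road-based parcel. The function and variable names below are illustrative assumptions, not the authors' implementation; the paper's exact "mean priority" and weight-coefficient definitions are not reproduced here.

```python
# Illustrative sketch (not the paper's code): each segment in a parcel
# carries a POI-derived functional label and a weight coefficient
# (e.g. its area share); the parcel's function is the label with the
# highest total weighted share.

def parcel_function(segments):
    """segments: list of (function_label, weight) pairs within one parcel."""
    scores = {}
    for label, weight in segments:
        scores[label] = scores.get(label, 0.0) + weight
    total = sum(scores.values())
    # Normalize to weighted shares, then pick the dominant function.
    shares = {label: s / total for label, s in scores.items()}
    dominant = max(shares, key=shares.get)
    return dominant, shares

# Hypothetical parcel with three segments.
dominant, shares = parcel_function(
    [("residence", 0.5), ("shopping", 0.3), ("residence", 0.2)]
)
```

Under this sketch the example parcel is labeled "residence", since residence segments account for 70% of the total weight.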


2020 ◽  
Vol 12 (15) ◽  
pp. 2424 ◽  
Author(s):  
Luis Salgueiro Romero ◽  
Javier Marcello ◽  
Verónica Vilaplana

Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m spatial resolution. These images, thanks to the open data distribution policy, are becoming an important resource for several applications. However, for small-scale studies, the spatial detail of these images might not be sufficient. On the other hand, WorldView commercial satellites offer multi-spectral images with a very high spatial resolution, typically less than 2 m, but their use can be impractical for large areas or multi-temporal analysis due to their high cost. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution, which spatially enhance low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView-Sentinel images to generate a super-resolved multispectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network to make it feasible to train with co-registered remote sensing images. The results outperform state-of-the-art models on standard metrics such as PSNR, SSIM, ERGAS, SAM, and CC. Moreover, qualitative visual analysis shows spatial improvements as well as preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.
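PSNR, the first of the metrics listed above, is a generic fidelity measure between a reference HR image and a super-resolved estimate. The sketch below is a standard textbook implementation, not the authors' evaluation code:

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a reference HR image and
    a super-resolved estimate, both scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB.
ref = np.zeros((2, 2))
est = np.full((2, 2), 0.1)
value = psnr(ref, est)
```

Higher PSNR indicates a closer match to the reference; in super-resolution benchmarks it is typically reported alongside structural metrics such as SSIM, since PSNR alone does not capture perceptual quality.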


Author(s):  
Linmei Wu ◽  
Li Shen ◽  
Zhipeng Li

A kernel-based method for very high spatial resolution remote sensing image classification is proposed in this article. The new kernel combines spectral-spatial information with structure information acquired from a topic model, the Latent Dirichlet Allocation (LDA) model. The final kernel function is defined as <i>K</i>&thinsp;=&thinsp;<i>u<sub>1</sub></i><i>K</i><sup>spec</sup>&thinsp;+&thinsp;<i>u<sub>2</sub></i><i>K</i><sup>spat</sup>&thinsp;+&thinsp;<i>u<sub>3</sub></i><i>K</i><sup>stru</sup>, in which <i>K</i><sup>spec</sup>, <i>K</i><sup>spat</sup>, and <i>K</i><sup>stru</sup> are radial basis function (RBF) kernels and <i>u<sub>1</sub></i>&thinsp;+&thinsp;<i>u<sub>2</sub></i>&thinsp;+&thinsp;<i>u<sub>3</sub></i>&thinsp;=&thinsp;1. In the experiments, the method is compared with three other kernel methods, namely the spectral-based, the spectral- and spatial-based, and the spectral- and structure-based methods, on a panchromatic QuickBird image of a suburban area with a size of 900&thinsp;×&thinsp;900 pixels and a spatial resolution of 0.6&thinsp;m. The results show that the overall accuracy of the spectral- and structure-based kernel method is 80&thinsp;%, higher than those of the spectral-based and the spectral- and spatial-based kernel methods, which are 67&thinsp;% and 74&thinsp;%, respectively. Moreover, the proposed composite kernel method, which jointly uses the spectral, spatial, and structure information, achieves the highest accuracy of the four methods at 83&thinsp;%. The experiments also verify the validity of the structure-information representation of the remote sensing image.
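The composite kernel above is a convex combination of three RBF kernels. A minimal sketch of that combination is shown below; the feature extraction (spectral, spatial, and LDA-based structure features) and the actual weight values are from the paper and are not reproduced here, so the inputs and gamma value are placeholders.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF kernel matrix between row-vector feature sets X and Y."""
    sq = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq)

def composite_kernel(X_spec, X_spat, X_stru, u=(1/3, 1/3, 1/3), gamma=1.0):
    """K = u1*K_spec + u2*K_spat + u3*K_stru, with u1 + u2 + u3 = 1.
    X_spec, X_spat, X_stru hold the spectral, spatial, and structure
    feature vectors for the same samples (placeholder inputs here)."""
    u1, u2, u3 = u
    assert abs(u1 + u2 + u3 - 1.0) < 1e-9, "weights must sum to 1"
    return (u1 * rbf_kernel(X_spec, X_spec, gamma)
            + u2 * rbf_kernel(X_spat, X_spat, gamma)
            + u3 * rbf_kernel(X_stru, X_stru, gamma))

# Toy demonstration with two samples and identical feature sets.
X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = composite_kernel(X, X, X, gamma=0.5)
```

Because each component is a valid RBF kernel and the weights form a convex combination, the composite matrix remains a valid (symmetric, positive semi-definite) kernel and can be passed directly to an SVM that accepts precomputed kernels.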

