Mapping Urban Functional Zones by Integrating Very High Spatial Resolution Remote Sensing Imagery and Points of Interest: A Case Study of Xiamen, China

2018, Vol. 10 (11), pp. 1737
Author(s): Jinchao Song, Tao Lin, Xinhu Li, Alexander V. Prishchepov

Fine-scale, accurate intra-urban functional zones (urban land use) are important for applications that rely on exploring urban dynamics and complexity. However, current methods of mapping functional zones in built-up areas with high spatial resolution remote sensing images are incomplete due to a lack of social attributes. To address this issue, this paper explores a novel approach to mapping urban functional zones by integrating points of interest (POIs), which carry social properties, with very high spatial resolution remote sensing imagery, which carries natural attributes, and classifying urban functions as residence zones, transportation zones, convenience shops, shopping centers, factory zones, companies, and public service zones. First, non-built and built-up areas were classified using high spatial resolution remote sensing images. Second, the built-up areas were segmented using an object-based approach that exploits building rooftop characteristics (reflectance and shape). At the same time, the functional POIs falling within each segment were identified to determine the functional attributes of the segmented polygon. Third, the functional values—the mean priority of the functions in a road-based parcel—were calculated from the functional segments and segmental weight coefficients. The method was demonstrated on Xiamen Island, China, achieving an overall accuracy of 78.47% and a kappa coefficient of 74.52%. The proposed approach can be readily applied in other parts of the world where social data and high spatial resolution imagery are available, and it can improve accuracy when automatically mapping urban functional zones from remote sensing imagery. It also has the potential to provide large-scale land-use information.
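The third step—aggregating segment-level functions into a parcel-level label via weighted mean priorities—can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function labels, priority scores, and the use of segment area as the weight coefficient are all assumptions for the example.

```python
# Hypothetical sketch of the parcel-level aggregation step:
# each road-based parcel contains segments, each tagged with a
# functional label (from POIs) and a priority score; the parcel is
# assigned the function with the highest area-weighted mean priority.
# Labels, priorities, and area weighting are illustrative assumptions.

def parcel_function(segments):
    """segments: list of (function_label, area, priority) tuples.

    Returns the label whose area-weighted mean priority is highest.
    """
    weighted_sum = {}  # label -> sum of area * priority
    total_area = {}    # label -> sum of area
    for label, area, priority in segments:
        weighted_sum[label] = weighted_sum.get(label, 0.0) + area * priority
        total_area[label] = total_area.get(label, 0.0) + area
    # mean priority per function, weighted by segment area
    means = {label: weighted_sum[label] / total_area[label]
             for label in weighted_sum}
    return max(means, key=means.get)

# Example parcel with three segments (areas in m², priorities in [0, 1]):
segs = [
    ("residence", 1200.0, 0.9),
    ("company", 300.0, 0.8),
    ("residence", 800.0, 0.7),
]
print(parcel_function(segs))  # → residence (weighted mean 0.82 vs 0.80)
```

In this toy parcel the residence segments dominate by area, so their weighted mean priority (0.82) outranks the single company segment (0.80).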

2020, Vol. 12 (15), pp. 2424
Author(s): Luis Salgueiro Romero, Javier Marcello, Verónica Vilaplana

Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m spatial resolution. Thanks to the open data distribution policy, these images are becoming an important resource for many applications. However, for small-scale studies their spatial detail may not be sufficient. On the other hand, WorldView commercial satellites offer multi-spectral images at very high spatial resolution, typically better than 2 m, but their use can be impractical for large areas or multi-temporal analysis due to their high cost. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution, which spatially enhance low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView-Sentinel images to generate super-resolved multispectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network to make it feasible to train with co-registered remote sensing images. The results outperform state-of-the-art models on standard metrics such as PSNR, SSIM, ERGAS, SAM, and CC. Moreover, qualitative visual analysis shows spatial improvements as well as preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.
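Among the evaluation metrics listed above, PSNR is the most straightforward to state precisely. The sketch below shows its standard definition applied to a pair of reference and super-resolved images; it is a generic illustration of the metric, not the authors' evaluation code, and the choice of a [0, 1] intensity range is an assumption.

```python
# Generic sketch of the PSNR metric used to compare a reference
# high-resolution image against a super-resolved output.
# Assumes both images are float arrays scaled to [0, max_val];
# this is an illustrative range, not the paper's preprocessing.
import numpy as np


def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-shape images.

    PSNR = 10 * log10(max_val^2 / MSE); higher is better,
    and identical images give +inf.
    """
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)


# Example: a uniform error of 0.1 gives MSE = 0.01, so PSNR = 20 dB.
hr = np.zeros((4, 4))
sr = np.full((4, 4), 0.1)
print(psnr(hr, sr))  # → 20.0
```

SSIM, ERGAS, SAM, and CC follow the same pattern of comparing the super-resolved band stack against the reference, but each captures a different aspect (structure, relative global error, spectral angle, and correlation, respectively).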
