High-resolution urban land-cover classification using a competitive multi-scale object-based approach

2012 ◽ Vol 4 (2) ◽ pp. 131-140 ◽ Author(s): Brian A. Johnson
Sensors ◽ 2018 ◽ Vol 18 (11) ◽ pp. 3717 ◽ Author(s): Pengbin Zhang, Yinghai Ke, Zhenxin Zhang, Mingli Wang, Peng Li, ...

Urban land cover and land use mapping plays an important role in urban planning and management. In this paper, novel multi-scale deep learning models, namely ASPP-Unet and ResASPP-Unet, are proposed for urban land cover classification based on very high resolution (VHR) satellite imagery. The proposed ASPP-Unet model consists of a contracting path, which extracts high-level features, and an expansive path, which up-samples the features to create a high-resolution output. The atrous spatial pyramid pooling (ASPP) technique is applied at the bottom layer to fuse multi-scale deep features into a single discriminative feature representation. The ResASPP-Unet model further improves the architecture by replacing each layer with a residual unit. The models were trained and tested on WorldView-2 (WV2) and WorldView-3 (WV3) imagery over the city of Beijing. Model parameters, including layer depth, the number of initial feature maps (IFMs), and the input image bands, were evaluated for their impact on model performance. The ResASPP-Unet model with 11 layers and 64 IFMs, trained on 8-band WV2 imagery, produced the highest classification accuracy (87.1% for WV2 imagery and 84.0% for WV3 imagery). The ASPP-Unet model with the same parameter setting produced slightly lower accuracy, with an overall accuracy of 85.2% for WV2 imagery and 83.2% for WV3 imagery. Overall, the proposed models outperformed state-of-the-art baselines such as U-Net, a convolutional neural network (CNN), and a support vector machine (SVM) on both WV2 and WV3 images, and yielded robust and efficient urban land cover classification results.
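As a sketch of the core idea, the block below shows a minimal atrous spatial pyramid pooling (ASPP) module in PyTorch: parallel dilated convolutions gather context at several scales, and a 1×1 convolution fuses them into one feature map. The dilation rates and channel sizes are illustrative assumptions (following the common DeepLab convention), not the exact configuration used in ASPP-Unet.

```python
# Minimal ASPP sketch in PyTorch. The dilation rates (1, 6, 12, 18) follow
# the common DeepLab convention and are an assumption, not the paper's values.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One dilated 3x3 convolution per rate; padding equals the dilation,
        # so every branch preserves the spatial resolution of the input.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # A 1x1 convolution fuses the concatenated multi-scale branches
        # into a single discriminative feature map.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: a bottleneck feature map with 64 channels (e.g. 64 IFMs).
feats = torch.randn(1, 64, 32, 32)
print(ASPP(64, 64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```

In a Unet-style architecture such as those described above, a module of this kind would sit at the bottom layer, between the contracting and expansive paths.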


2020 ◽ Vol 12 (2) ◽ pp. 311 ◽ Author(s): Chun Liu, Doudou Zeng, Hangbin Wu, Yin Wang, Shoujun Jia, ...

Urban land cover classification for high-resolution images is a fundamental yet challenging task in remote sensing image analysis. Recently, deep learning techniques have achieved outstanding performance in high-resolution image classification, especially methods based on deep convolutional neural networks (DCNNs). However, traditional CNNs, whose convolution operations have local receptive fields, are not sufficient to model global contextual relations between objects. In addition, multiscale objects and the relatively small sample sizes typical of remote sensing have also limited classification accuracy. In this paper, a relation-enhanced multiscale convolutional network (REMSNet) is proposed to overcome these weaknesses. A dense connectivity pattern and parallel multi-kernel convolutions are combined to build a lightweight model with varied receptive field sizes. A spatial relation-enhanced block and a channel relation-enhanced block are then introduced into the network; they adaptively learn global contextual relations between any two positions or feature maps to enhance feature representations. Moreover, a parallel multi-kernel deconvolution module and a spatial path are designed to further aggregate information across scales. The proposed network is evaluated for urban land cover classification on two datasets: the ISPRS Vaihingen 2D semantic labelling benchmark and an area of Shanghai of about 143 km². The results demonstrate that the proposed method effectively captures long-range dependencies and improves land cover classification accuracy. The model obtains an overall accuracy (OA) of 90.46% and a mean intersection-over-union (mIoU) of 0.8073 on Vaihingen, and an OA of 88.55% and an mIoU of 0.7394 on Shanghai.
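The spatial relation-enhanced block described here resembles a non-local (self-attention) operation that scores the affinity between every pair of spatial positions. Below is a minimal PyTorch sketch of that idea; the layer names, channel reduction factor, and learned residual weight are assumptions for illustration, not REMSNet's exact design.

```python
# Minimal sketch of a spatial relation block in the non-local/self-attention
# style: an affinity is computed between every pair of positions. Layer names,
# the channel reduction factor, and the residual weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialRelationBlock(nn.Module):
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // reduction, 1)
        self.key = nn.Conv2d(ch, ch // reduction, 1)
        self.value = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(x).flatten(2)                    # (B, C', HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)               # (B, HW, HW) affinities
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection

x = torch.randn(1, 64, 16, 16)
print(SpatialRelationBlock(64)(x).shape)  # torch.Size([1, 64, 16, 16])
```

Because the attention map is HW × HW, memory grows quadratically with spatial size, which is why such blocks are typically applied to downsampled feature maps rather than full-resolution imagery.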


2019 ◽ Vol 11 (18) ◽ pp. 2128 ◽ Author(s): Mugiraneza, Nascetti, Ban

The emergence of high-resolution satellite data, such as WorldView-2, has opened the opportunity for urban land cover mapping at fine resolution. However, it is not straightforward to map detailed urban land cover, or to detect deprived urban areas such as informal settlements, in complex urban environments based merely on high-resolution spectral features. Approaches integrating hierarchical segmentation and rule-based classification strategies can therefore play a crucial role in producing high-quality urban land cover maps. This study evaluates the potential of WorldView-2 high-resolution multispectral and panchromatic imagery for detailed urban land cover classification in Kigali, Rwanda, a complex urban area characterized by a subtropical highland climate. A multi-stage object-based classification was performed using support vector machines (SVM) and a rule-based approach to derive 12 land cover classes from WorldView-2 spectral bands, spectral indices, gray-level co-occurrence matrix (GLCM) texture measures, and a digital terrain model (DTM). In the initial classification, confusion existed among informal settlements and the high- and low-density built-up areas, as well as between upland and lowland agriculture. To improve the classification accuracy, a framework based on a geometric ruleset and two newly defined indices (urban density and greenness density) was developed. The framework achieved an overall classification accuracy of 85.36% with a kappa coefficient of 0.82. The confusion between high- and low-density built-up areas decreased significantly, while informal settlements were successfully extracted with producer's and user's accuracies of 77% and 90%, respectively. Integrating the object-based SVM classification of WorldView-2 feature sets and the DTM with the geometric ruleset and the urban density and greenness indices yielded better class separability, and thus higher classification accuracy, in complex urban environments.
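To make the two-stage workflow concrete, the sketch below trains an object-based SVM on per-segment features with scikit-learn and then applies a density-based rule of the general kind the study describes. The feature layout, class codes, thresholds, and density definitions are illustrative assumptions only, not the study's actual ruleset.

```python
# Minimal sketch of the two-stage idea: an object-based SVM over per-segment
# features, then a density-based rule. Feature layout, class codes, thresholds,
# and the density definitions are illustrative assumptions, not the study's.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each row is one image segment: e.g. mean band reflectances, a spectral
# index, a GLCM texture measure, and mean terrain height from the DTM.
X_train = rng.random((200, 6))
y_train = rng.integers(0, 3, 200)  # 0 = built-up, 1 = vegetation, 2 = other

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

# Stage 1: per-segment SVM labels for the segments in one neighbourhood.
labels = clf.predict(rng.random((50, 6)))

# Stage 2: rule-based refinement. The share of built-up segments ("urban
# density") and vegetated segments ("greenness density") in a neighbourhood
# helps separate, e.g., high-density built-up from informal settlements.
urban_density = np.mean(labels == 0)
greenness_density = np.mean(labels == 1)
if urban_density > 0.7 and greenness_density < 0.1:  # assumed thresholds
    print("neighbourhood refined to: high-density built-up")
```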


2010 ◽ Vol 36 (3) ◽ pp. 236-247 ◽ Author(s): Xinwu Li, Eric Pottier, Huadong Guo, Laurent Ferro-Famil
