Three-Dimensional Urban Land Cover Classification by Prior-Level Fusion of LiDAR Point Cloud and Optical Imagery

2021 · Vol 13 (23) · pp. 4928
Author(s):  
Yanming Chen ◽  
Xiaoqiang Liu ◽  
Yijia Xiao ◽  
Qiqi Zhao ◽  
Sida Wan

The heterogeneity of the urban landscape in the vertical direction should not be neglected in urban ecology research, which requires transforming urban land cover products from two dimensions to three dimensions using light detection and ranging (LiDAR) point clouds. Previous studies have demonstrated that the performance of two-dimensional land cover classification can be improved by fusing optical imagery and LiDAR data using several strategies. However, few studies have focused on fusing LiDAR point clouds and optical imagery for three-dimensional land cover classification, especially within a deep learning framework. In this study, we proposed a novel prior-level fusion strategy and compared it with a no-fusion baseline and three other commonly used fusion strategies (point-level, feature-level, and decision-level). The proposed prior-level fusion strategy uses two-dimensional land cover derived from optical imagery as prior knowledge for three-dimensional classification. The LiDAR point cloud is then linked to this prior information using the nearest-neighbor method and classified by a deep neural network. Our proposed prior-level fusion strategy achieves higher overall accuracy (82.47%) on data from the International Society for Photogrammetry and Remote Sensing than the baseline (74.62%), point-level (79.86%), feature-level (76.22%), and decision-level (81.12%) strategies. The improved accuracy reflects two points: (1) fusing optical imagery with LiDAR point clouds improves the performance of three-dimensional urban land cover classification, and (2) the proposed prior-level strategy directly uses the semantic information provided by the two-dimensional land cover classification rather than the original spectral information of the optical imagery. Furthermore, the proposed prior-level fusion strategy helps fill the gap between two- and three-dimensional land cover classification.
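The nearest-neighbor linking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper `attach_prior_labels` and the toy coordinates are hypothetical, and the sketch assumes the two-dimensional classification is available as labelled pixel centres. The prior label returned here would then be fed, alongside the point coordinates, into the deep network.

```python
import numpy as np
from scipy.spatial import cKDTree

def attach_prior_labels(points_xy, pixel_xy, pixel_labels):
    """Link each LiDAR point to the 2D land-cover label of its nearest
    image pixel (hypothetical helper illustrating prior-level fusion)."""
    tree = cKDTree(pixel_xy)          # index the labelled pixel centres
    _, idx = tree.query(points_xy, k=1)
    return pixel_labels[idx]          # prior label per LiDAR point

# Toy example: a 2x2 "land cover map" and three LiDAR points (xy only).
pixels = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = np.array([0, 1, 2, 3])       # e.g. 0=ground, 1=building, ...
pts = np.array([[0.1, 0.1], [0.9, 0.1], [0.9, 0.9]])
priors = attach_prior_labels(pts, pixels, labels)
print(priors)  # [0 1 3]
```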

2016 · Vol 3 (2) · pp. 127
Author(s):  
Jati Pratomo ◽  
Triyoga Widiastomo

The usage of Unmanned Aerial Vehicles (UAVs) has grown rapidly in various fields, such as urban planning, search and rescue, and surveillance. Capturing images from a UAV has many advantages over satellite imagery; for instance, higher spatial resolution and less impact from atmospheric variations can be obtained. However, classifying urban features is difficult due to the complexity of urban land covers. Maximum Likelihood Classification (MLC) has limitations since it assumes that pixel values are normally distributed, whereas, in fact, urban features are not. The Markov Random Field (MRF) offers advantages for urban land cover classification, as it assumes that neighboring pixels have a higher probability of belonging to the same class than to different classes. This research aimed to determine the impact of the smoothness (λ) and the updating temperature (Tupd) on the classification accuracy (κ) in MRF. We used a UAV very-high-resolution image (VHIR) covering 587 square meters, with six-centimetre resolution, taken in Bogor Regency, Indonesia. The results showed that the kappa value (κ) increases with the smoothness (λ) until it reaches its maximum, then drops. A higher Tupd resulted in a better κ, although it also led to a higher standard deviation (SD). Using the most optimal parameters, MRF resulted in a slightly higher κ than MLC.
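A minimal sketch of how the MRF smoothness term λ enters the labelling, assuming the common iterated-conditional-modes (ICM) style update; `icm_step` is a hypothetical helper, and this is not the authors' exact formulation (which also involves the updating temperature Tupd in an annealing schedule). Each pixel takes the class that minimises its data cost plus λ times the number of disagreeing 4-neighbours, which is what makes neighbouring pixels more likely to share a class.

```python
import numpy as np

def icm_step(labels, unary, lam):
    """One ICM sweep over an (h, w) label map.
    unary: (h, w, k) per-pixel, per-class data costs.
    lam:   smoothness weight, penalising disagreeing 4-neighbours."""
    h, w, k = unary.shape
    out = labels.copy()
    for i in range(h):
        for j in range(w):
            costs = unary[i, j].copy()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    for c in range(k):
                        if labels[ni, nj] != c:
                            costs[c] += lam   # smoothness penalty
            out[i, j] = int(np.argmin(costs))
    return out

# Toy example: one noisy centre pixel in a 3x3 map gets smoothed away.
unary = np.zeros((3, 3, 2))
unary[:, :, 1] = 2.0         # data favours class 0 everywhere...
unary[1, 1] = [2.0, 0.0]     # ...except the centre, which favours class 1
labels = np.zeros((3, 3), dtype=int)
labels[1, 1] = 1
smoothed = icm_step(labels, unary, lam=1.0)
print(smoothed[1, 1])  # 0 -- the neighbours outvote the noisy data term
```

Raising `lam` strengthens the smoothing (until over-smoothing sets in), which mirrors the reported behaviour of κ rising with λ up to a maximum and then dropping.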


2020 · Vol 12 (2) · pp. 311
Author(s):  
Chun Liu ◽  
Doudou Zeng ◽  
Hangbin Wu ◽  
Yin Wang ◽  
Shoujun Jia ◽  
...  

Urban land cover classification for high-resolution images is a fundamental yet challenging task in remote sensing image analysis. Recently, deep learning techniques have achieved outstanding performance in high-resolution image classification, especially methods based on deep convolutional neural networks (DCNNs). However, traditional CNNs, whose convolution operations have local receptive fields, are not sufficient to model global contextual relations between objects. In addition, multiscale objects and the relatively small sample sizes in remote sensing have also limited classification accuracy. In this paper, a relation-enhanced multiscale convolutional network (REMSNet) is proposed to overcome these weaknesses. A dense connectivity pattern and parallel multi-kernel convolution are combined to build a lightweight model with varied receptive-field sizes. Then, a spatial relation-enhanced block and a channel relation-enhanced block are introduced into the network; they adaptively learn global contextual relations between any two positions or feature maps to enhance feature representations. Moreover, we design a parallel multi-kernel deconvolution module and a spatial path to further aggregate information at different scales. The proposed network is evaluated for urban land cover classification on two datasets: the ISPRS 2D semantic labelling contest of Vaihingen and an area of Shanghai of about 143 km². The results demonstrate that the proposed method can effectively capture long-range dependencies and improve the accuracy of land cover classification. Our model obtains an overall accuracy (OA) of 90.46% and a mean intersection-over-union (mIoU) of 0.8073 for Vaihingen, and an OA of 88.55% and an mIoU of 0.7394 for Shanghai.
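Relation-enhanced blocks of the kind described above typically follow the non-local self-attention pattern: every position attends to every other position, so features can pick up long-range context that local convolutions miss. The sketch below illustrates a spatial relation block under that assumption; it is not the authors' exact architecture (REMSNet uses learned projections, and a channel variant attends across feature maps instead), and `spatial_relation_block` is a hypothetical name.

```python
import numpy as np

def spatial_relation_block(feat):
    """Minimal non-local attention over spatial positions.
    feat: (N, C) array of features for N flattened positions."""
    scores = feat @ feat.T / np.sqrt(feat.shape[1])  # pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over positions
    return feat + attn @ feat                        # residual enhancement

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))   # 16 positions, 8 channels
out = spatial_relation_block(feat)
print(out.shape)  # (16, 8)
```

Because every output position is a weighted mix of all input positions, the enhanced features carry the global contextual relations that the abstract credits for the accuracy gains.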

