Integrating Aerial and Street View Images for Urban Land Use Classification

2018 · Vol 10 (10) · pp. 1553
Author(s): Rui Cao, Jiasong Zhu, Wei Tu, Qingquan Li, Jinzhou Cao, ...

Urban land use is key to rational urban planning and management. Traditional land use classification methods rely heavily on domain experts, which is both expensive and inefficient. In this paper, deep neural network-based approaches are presented to label urban land use at the pixel level using high-resolution aerial images and ground-level street view images. A deep neural network extracts semantic features from sparsely distributed street view images; these features are interpolated in the spatial domain to match the spatial resolution of the aerial images and then fused with the aerial imagery through a second deep neural network that classifies land use categories. The methods are tested on a large, publicly available dataset of aerial and street view images of New York City. The results show that aerial images alone achieve relatively high classification accuracy, that ground-level street view images contain useful complementary information for urban land use classification, and that fusing street-view features with aerial images further improves accuracy. Moreover, experimental studies show that street view images add more value when the resolution of the aerial images is lower, and case studies illustrate how street view images provide useful auxiliary information to aerial images to boost performance.
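The spatial interpolation step above can be sketched as follows. The abstract does not name the interpolation scheme, so this sketch assumes inverse-distance weighting (IDW); the function name, grid size, and feature values are all illustrative, not taken from the paper.

```python
import math

def idw_interpolate(points, values, grid_w, grid_h, power=2.0, eps=1e-9):
    """Interpolate sparse feature vectors onto a regular grid via
    inverse-distance weighting (IDW). `points` are (x, y) street-view
    locations in grid coordinates; `values` are their feature vectors.
    (Illustrative stand-in for the paper's unspecified interpolation.)"""
    dim = len(values[0])
    grid = [[[0.0] * dim for _ in range(grid_w)] for _ in range(grid_h)]
    for gy in range(grid_h):
        for gx in range(grid_w):
            num = [0.0] * dim
            den = 0.0
            for (px, py), feat in zip(points, values):
                d2 = (gx - px) ** 2 + (gy - py) ** 2
                w = 1.0 / (d2 ** (power / 2.0) + eps)  # closer samples weigh more
                den += w
                for k in range(dim):
                    num[k] += w * feat[k]
            grid[gy][gx] = [n / den for n in num]
    return grid

# Two street-view samples with toy 2-D "semantic" features on a 4x4 grid
pts = [(0, 0), (3, 3)]
feats = [[1.0, 0.0], [0.0, 1.0]]
g = idw_interpolate(pts, feats, 4, 4)
```

Once every grid cell carries an interpolated street-view feature vector at the aerial image's resolution, the two sources can be stacked channel-wise and fed to the fusion network.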

2019 · Vol 8 (1) · pp. 28
Author(s): Quanlong Feng, Dehai Zhu, Jianyu Yang, Baoguo Li

Accurate urban land-use mapping is a challenging task in the remote-sensing field. With the availability of diverse remote sensors, the joint use and integration of multisource data offer an opportunity to improve urban land-use classification accuracy. Deep neural networks have achieved very promising results in computer-vision tasks such as image classification and object detection. However, designing an effective deep-learning model for the fusion of multisource remote-sensing data remains an open problem. To tackle this issue, this paper proposes a modified two-branch convolutional neural network for the adaptive fusion of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data. Specifically, the proposed model consists of an HSI branch and a LiDAR branch that share the same network structure to reduce the cost of network design. A residual block in each branch extracts hierarchical, parallel, and multiscale features, and an adaptive feature-fusion module, based on Squeeze-and-Excitation networks, integrates the HSI and LiDAR features in a more reasonable and natural way. Experiments indicate that the proposed two-branch network performs well, with an overall accuracy of almost 92%. Compared with single-source data, introducing multisource data improves accuracy by at least 8%, and the adaptive fusion module increases classification accuracy by more than 3% over simple feature stacking (concatenation). The results demonstrate that the proposed network can effectively extract and fuse features for better urban land-use mapping accuracy.
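The adaptive-fusion idea can be sketched in miniature. In the actual model, Squeeze-and-Excitation gating operates on convolutional feature maps inside a trained network; here the branch outputs are reduced to plain vectors, and `w1`/`w2` are toy stand-ins for learned bottleneck weights, so this is a minimal sketch of the gating mechanism rather than the paper's implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_fuse(hsi_feat, lidar_feat, w1, w2):
    """SE-style adaptive fusion (toy version): concatenate the branch
    features, pass them through a small bottleneck (FC + ReLU, then
    FC + sigmoid) to obtain per-channel gates in (0, 1), and rescale
    the concatenated vector before classification."""
    z = hsi_feat + lidar_feat  # channel-wise concatenation ("stacking")
    # Excitation: bottleneck FC with ReLU...
    hidden = [max(0.0, sum(wi * zi for wi, zi in zip(row, z))) for row in w1]
    # ...then FC with sigmoid to produce one gate per channel
    gates = [sigmoid(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w2]
    return [g * zi for g, zi in zip(gates, z)]

# Toy branch outputs and hand-picked weights (stand-ins for trained ones)
hsi = [0.5, 1.0]
lidar = [0.2, 0.8]
w1 = [[0.1, 0.2, 0.3, 0.4]]          # bottleneck of size 1, purely illustrative
w2 = [[0.5], [0.5], [0.5], [0.5]]
fused = se_fuse(hsi, lidar, w1, w2)
```

The contrast with plain feature stacking is that the gates let the network down-weight less informative channels from either sensor, which is the intuition behind the reported >3% gain over simple concatenation.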


2018 · Vol 216 · pp. 57-70
Author(s): Ce Zhang, Isabel Sargent, Xin Pan, Huapeng Li, Andy Gardiner, ...
