A multi-level context-guided classification method with object-based convolutional neural network for land cover classification using very high resolution remote sensing images

Author(s):  
Chenxiao Zhang ◽  
Peng Yue ◽  
Deodato Tapete ◽  
Boyi Shangguan ◽  
Mi Wang ◽  
...  
2019 ◽  
Vol 8 (4) ◽  
pp. 189 ◽  
Author(s):  
Chi Zhang ◽  
Shiqing Wei ◽  
Shunping Ji ◽  
Meng Lu

The study investigates land use/cover classification and change detection of urban areas from very high resolution (VHR) remote sensing images using deep learning-based methods. First, we introduce a fully atrous convolutional neural network (FACNN) to learn the land cover classification. In the FACNN, an encoder consisting of fully atrous convolution layers is proposed for extracting scale-robust features from VHR images. A pixel-based change map is then produced by comparing the classification map of the current images with an outdated land cover geographical information system (GIS) map. Both polygon-based and object-based change detection accuracy are investigated, where a polygon is the unit of the GIS map and an object consists of adjacent changed pixels in the pixel-based change map. The test data cover the rapidly developing city of Wuhan (8,000 km²), China, and consist of 0.5 m ground resolution aerial images acquired in 2014, 1 m ground resolution Beijing-2 satellite images acquired in 2017, and their land cover GIS maps. Testing results showed that our FACNN greatly outperformed several recent convolutional neural networks in land cover classification. Second, object-based change detection achieved much better results than the pixel-based method and provided accurate change maps to facilitate manual updating of urban land cover.
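As a rough illustration of the encoder idea described above, the sketch below stacks dilated (atrous) 3 × 3 convolutions so the receptive field grows without downsampling. This is a minimal PyTorch sketch, not the authors' FACNN: the layer count, channel width, and dilation rates are illustrative assumptions.

    # Minimal sketch of an atrous-convolution encoder block (illustrative only;
    # channel widths and dilation rates are assumptions, not the published FACNN).
    import torch
    import torch.nn as nn

    class AtrousEncoder(nn.Module):
        def __init__(self, in_channels=3, base_channels=64, dilations=(1, 2, 4, 8)):
            super().__init__()
            layers = []
            channels = in_channels
            for d in dilations:
                layers += [
                    # Dilation enlarges the receptive field without downsampling,
                    # which is the usual motivation for a "fully atrous" encoder.
                    nn.Conv2d(channels, base_channels, kernel_size=3,
                              padding=d, dilation=d, bias=False),
                    nn.BatchNorm2d(base_channels),
                    nn.ReLU(inplace=True),
                ]
                channels = base_channels
            self.encoder = nn.Sequential(*layers)

        def forward(self, x):
            return self.encoder(x)

    # Usage: features from a VHR image patch keep the input spatial resolution.
    x = torch.randn(1, 3, 256, 256)
    features = AtrousEncoder()(x)  # -> (1, 64, 256, 256)

Because no pooling is applied, the classification map stays at the input resolution, which is what makes a per-pixel comparison against the outdated GIS map straightforward.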


2021 ◽  
Vol 42 (21) ◽  
pp. 8318-8344
Author(s):  
Xianwei Lv ◽  
Zhenfeng Shao ◽  
Dongping Ming ◽  
Chunyuan Diao ◽  
Keqi Zhou ◽  
...  

Author(s):  
B. Liu ◽  
S. Du ◽  
X. Zhang

Abstract. Land cover maps are widely used in urban planning, environmental monitoring, and monitoring of the changing world. This paper proposes a framework combining a convolutional neural network (CNN), object-based voting, and a conditional random field (CRF) for land cover classification. Both very-high-resolution (VHR) remote sensing images and a digital surface model (DSM) are inputs to the CNN model. To suppress the "salt and pepper" effect caused by pixel-based classification, an object-based voting classification is performed. To capture accurate boundaries of ground objects, a CRF optimization using spectral information, the DSM, and deep features extracted by the CNN is applied. Area 1 of the Vaihingen dataset is used for the experiments. The experimental results show that the method proposed in this paper achieves an overall accuracy of 95.57%, which demonstrates the effectiveness of the proposed method.
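To illustrate the object-based voting step, the sketch below (a hypothetical helper with toy data, not the authors' implementation) assigns each segmented object the majority class of its pixels, which is how the "salt and pepper" noise of a per-pixel classification is typically suppressed; the segment map is assumed to come from a separate image segmentation step not shown here.

    # Minimal sketch of object-based majority voting over a per-pixel
    # classification map (illustrative assumption of the technique).
    import numpy as np

    def object_based_voting(pixel_labels: np.ndarray, segment_ids: np.ndarray) -> np.ndarray:
        """Assign each segment (object) the majority class of its pixels."""
        voted = pixel_labels.copy()
        for seg in np.unique(segment_ids):
            mask = segment_ids == seg
            classes, counts = np.unique(pixel_labels[mask], return_counts=True)
            voted[mask] = classes[np.argmax(counts)]
        return voted

    # Usage with toy data: a 4x4 classification map split into two segments.
    labels = np.array([[0, 0, 1, 1],
                       [0, 1, 1, 1],
                       [2, 2, 1, 1],
                       [2, 2, 2, 1]])
    segments = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [0, 0, 1, 1]])
    print(object_based_voting(labels, segments))

The CRF refinement described in the abstract would then adjust the voted labels along object boundaries using the spectral, DSM, and deep-feature terms; it is omitted here for brevity.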


2019 ◽  
Vol 11 (9) ◽  
pp. 1006 ◽  
Author(s):  
Quanlong Feng ◽  
Jianyu Yang ◽  
Dehai Zhu ◽  
Jiantao Liu ◽  
Hao Guo ◽  
...  

Coastal land cover classification is a significant yet challenging task in remote sensing because of the complex and fragmented nature of coastal landscapes. However, the availability of multitemporal and multisensor remote sensing data provides opportunities to improve classification accuracy. Meanwhile, the rapid development of deep learning has achieved astonishing results in computer vision tasks and has also become a popular topic in remote sensing. Nevertheless, designing an effective and concise deep learning model for coastal land cover classification remains problematic. To tackle this issue, we propose a multibranch convolutional neural network (MBCNN) for the fusion of multitemporal and multisensor Sentinel data to improve coastal land cover classification accuracy. The proposed model leverages a series of deformable convolutional neural networks to extract representative features from each single-source dataset. The extracted features are aggregated through an adaptive feature fusion module to predict the final land cover categories. Experimental results indicate that the proposed MBCNN performs well, with an overall accuracy of 93.78% and a Kappa coefficient of 0.9297. The inclusion of multitemporal data improves accuracy by an average of 6.85%, while multisensor data contributes a further 3.24% increase. In addition, the feature fusion module in this study increases accuracy by about 2% compared with a feature-stacking approach. These results demonstrate that the proposed method can effectively mine and fuse multitemporal and multisource Sentinel data, improving coastal land cover classification accuracy.
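A minimal sketch of the multibranch-plus-fusion idea follows, assuming two single-source branches and a learned per-branch weighting in place of simple feature stacking. Plain convolutions stand in for the deformable convolutions, and all input shapes, channel widths, and the class count are assumptions rather than the authors' MBCNN.

    # Minimal sketch of a multibranch CNN with an adaptive feature-fusion module
    # (illustrative assumptions only; not the published MBCNN architecture).
    import torch
    import torch.nn as nn

    class Branch(nn.Module):
        """One branch extracts features from a single-source input
        (e.g., one Sentinel sensor or one acquisition date)."""
        def __init__(self, in_channels, out_channels=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_channels, out_channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.net(x)

    class AdaptiveFusion(nn.Module):
        """Learn per-branch weights instead of simply stacking features."""
        def __init__(self, num_branches, channels, num_classes=10):  # class count assumed
            super().__init__()
            self.weights = nn.Parameter(torch.ones(num_branches))
            self.classifier = nn.Conv2d(channels, num_classes, kernel_size=1)

        def forward(self, branch_features):
            w = torch.softmax(self.weights, dim=0)
            fused = sum(wi * f for wi, f in zip(w, branch_features))
            return self.classifier(fused)

    # Usage: fuse a 4-band optical branch and a 2-band SAR branch (shapes assumed).
    opt, sar = torch.randn(1, 4, 64, 64), torch.randn(1, 2, 64, 64)
    branches = [Branch(4), Branch(2)]
    fusion = AdaptiveFusion(num_branches=2, channels=32)
    logits = fusion([b(x) for b, x in zip(branches, (opt, sar))])  # -> (1, 10, 64, 64)

The learned weighting is one simple way to realize "adaptive" fusion; it lets the network emphasize whichever date or sensor is more informative instead of treating stacked features uniformly.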

