Multi-Granularity Neural Network Encoding Method for Land Cover and Land Use Image Classification

Author(s):  
Guoyin Wang ◽  
Musabe Jean Bosco ◽  
Hategekimana Yves

Deep learning is the state-of-the-art machine learning approach for classification. Earlier work shows that deep convolutional neural networks have performed successfully in different applications, such as image and video data, including recognizing remote sensing aspects of the earth's surface and exploiting land cover and land use (LCLU). First, this article summarizes emerging remote sensing applications and the challenges they pose for deep learning methods. Second, we propose four approaches that learn efficient and effective CNNs by transferring image representations learned on the ImageNet dataset to the recognition of LCLU datasets. We use the VGG16, Inception-ResNet-V2, Inception-V3, and DenseNet201 models, pre-trained on ImageNet, to extract features from the EACC dataset. For feature selection, we apply principal component analysis (PCA) to improve accuracy and speed up the model, and we train a multi-layer perceptron (MLP) as the classifier. Lastly, we apply the multi-granularity encoding ensemble model and achieve an overall accuracy of 92.3% on the nine-class classification problem. This work will help remote sensing scientists understand deep learning tools and apply them to large-scale remote sensing challenges.
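As a rough illustration of the pipeline this abstract describes (pre-trained CNN features, PCA for reduction, an MLP for classification), the following sketch substitutes random vectors for the features a network such as VGG16 would extract; the feature dimension, sample counts, and hyperparameters are assumptions, not values from the paper.

```python
# Sketch: CNN features -> PCA -> MLP. Random vectors stand in for pooled
# CNN features; only the nine-class setup follows the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, feat_dim, n_classes = 300, 512, 9   # 512-d features (assumed)
X = rng.normal(size=(n_samples, feat_dim))
y = rng.integers(0, n_classes, size=n_samples)

# PCA for feature selection: keep components explaining 95% of the
# variance, shrinking the input and speeding up the classifier.
pca = PCA(n_components=0.95)
X_red = pca.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(X_tr, y_tr)
print(X_red.shape[1] < feat_dim)   # PCA reduced the dimensionality
```

In practice the random `X` would be replaced by features exported from the pre-trained networks' penultimate layers.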

2019 ◽  
Vol 11 (12) ◽  
pp. 1435 ◽  
Author(s):  
Shiran Song ◽  
Jianhua Liu ◽  
Heng Pu ◽  
Yuan Liu ◽  
Jingyan Luo

The efficient and accurate application of deep learning in the remote sensing field largely depends on the pre-processing of remote sensing images. In particular, image fusion is the essential way to achieve complementarity between the panchromatic band and the multispectral bands in high-spatial-resolution remote sensing images. In this paper, we pay attention not only to the visual effect of fused images but also to the subsequent effectiveness of information extraction and feature recognition based on them. Using WorldView-3 images of Tongzhou District, Beijing, we apply the fusion results to deep-learning-based object-recognition experiments on typical urban features. Furthermore, we perform a quantitative analysis of the mainstream pixel-based fusion methods of IHS (Intensity-Hue-Saturation), PCS (Principal Component Substitution), GS (Gram-Schmidt), ELS (Ehlers), HPF (High-Pass Filtering), and HCS (Hyperspherical Color Space) from the perspectives of spectrum, geometric features, and recognition accuracy. The results show apparent differences in visual effect and quantitative indices among the fusion methods, and the PCS fusion method has the most satisfactory comprehensive effectiveness in deep-learning-based object recognition of land cover features.
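Of the six fusion methods compared, PCS is singled out as the most effective. A minimal numpy sketch of principal component substitution, on synthetic arrays standing in for co-registered multispectral and panchromatic data, might look like this (sizes and the histogram-matching step are illustrative):

```python
# PCS pan-sharpening sketch: project MS bands onto principal components,
# replace PC1 with the (statistically matched) pan band, invert.
import numpy as np

rng = np.random.default_rng(1)
h, w, bands = 32, 32, 4
ms = rng.random((h, w, bands))   # multispectral, resampled to the pan grid
pan = rng.random((h, w))         # panchromatic band

X = ms.reshape(-1, bands)
mean = X.mean(axis=0)
Xc = X - mean
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]  # descending variance
pcs = Xc @ eigvecs

# Match the pan band to PC1's mean/std, then substitute it for PC1
p = pan.ravel()
p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
pcs[:, 0] = p

# Orthonormal eigenvectors, so the transpose inverts the projection
fused = (pcs @ eigvecs.T + mean).reshape(h, w, bands)
print(fused.shape)  # (32, 32, 4)
```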


2019 ◽  
Vol 11 (9) ◽  
pp. 1006 ◽  
Author(s):  
Quanlong Feng ◽  
Jianyu Yang ◽  
Dehai Zhu ◽  
Jiantao Liu ◽  
Hao Guo ◽  
...  

Coastal land cover classification is a significant yet challenging task in remote sensing because of the complex and fragmented nature of coastal landscapes. However, the availability of multitemporal and multisensor remote sensing data provides opportunities to improve classification accuracy. Meanwhile, the rapid development of deep learning has achieved astonishing results in computer vision tasks and has also been a popular topic in the field of remote sensing. Nevertheless, designing an effective and concise deep learning model for coastal land cover classification remains problematic. To tackle this issue, we propose a multibranch convolutional neural network (MBCNN) for the fusion of multitemporal and multisensor Sentinel data to improve coastal land cover classification accuracy. The proposed model leverages a series of deformable convolutional neural networks to extract representative features from each single-source dataset. Extracted features are aggregated through an adaptive feature fusion module to predict final land cover categories. Experimental results indicate that the proposed MBCNN performs well, with an overall accuracy of 93.78% and a Kappa coefficient of 0.9297. Including multitemporal data improves accuracy by an average of 6.85%, while multisensor data contributes a further 3.24% increase. Additionally, the feature fusion module increases accuracy by about 2% compared with the feature-stacking method. The results demonstrate that the proposed method can effectively mine and fuse multitemporal and multisource Sentinel data to improve coastal land cover classification accuracy.
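The adaptive feature fusion module is described only at a high level. One common way to realize the idea, re-weighting per-branch features instead of simply stacking them, can be sketched as follows; the function name, shapes, and softmax gating are assumptions for illustration, not the paper's code:

```python
# Adaptive fusion sketch: each branch's pooled feature vector is scaled
# by a softmax weight (learned in practice) before summation, in
# contrast to plain feature stacking (concatenation).
import numpy as np

def adaptive_fuse(branch_feats, logits):
    """Weight each branch's features by a softmax over branch logits."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, branch_feats))

rng = np.random.default_rng(2)
# Three single-source branches (e.g. S1 time series, two S2 dates)
feats = [rng.normal(size=(64,)) for _ in range(3)]
fused = adaptive_fuse(feats, np.array([0.2, 1.5, -0.3]))
stacked = np.concatenate(feats)   # the feature-stacking baseline
print(fused.shape, stacked.shape)  # (64,) (192,)
```

The fused vector keeps the per-branch dimensionality, whereas stacking triples it; the abstract's roughly 2% gain over stacking is attributed to this learned weighting.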


2021 ◽  
Author(s):  
Melanie Brandmeier ◽  
Eya Cherif

<p>Degradation of large forest areas such as the Brazilian Amazon due to logging and fires can increase the human footprint well beyond deforestation. Monitoring and quantifying such changes on a large scale has been addressed by several research groups (e.g. Souza et al. 2013) making use of freely available remote sensing data such as the Landsat archive. However, fully automatic large-scale land cover/land use mapping remains one of the great challenges in remote sensing. One problem is the availability of reliable “ground truth” labels for training supervised learning algorithms. For the Amazon area, several land cover maps with 22 classes are available from the MapBiomas project; these were derived by semi-automatic classification and verified by extensive fieldwork (Project MapBiomas). The labels cannot be considered real ground truth, as they were themselves derived from Landsat data, but they can still be used for weakly supervised training of deep-learning models that have the potential to improve predictions on the higher-resolution data now available. The term weakly supervised learning was originally coined by Zhou (2017) and refers to the attempt to construct predictive models from incomplete, inexact and/or inaccurate labels, as is often the case in remote sensing. To this end, we investigate advanced deep-learning strategies on Sentinel-1 time series and Sentinel-2 optical data to improve large-scale automatic mapping and monitoring of land cover changes in the Amazon area. Sentinel-1 data has the advantage of being resistant to the cloud cover that often hinders optical remote sensing in the tropics.</p><p>We propose new architectures that are adapted to the particularities of remote sensing data (S1 time series and multispectral S2 data) and compare their performance to state-of-the-art models.
Results using only spectral data were very promising, with overall test accuracies of 77.9% for a U-Net and 74.7% for a DeepLab implementation with a ResNet50 backbone, and F1 measures of 43.2% and 44.2% respectively. On the other hand, preliminary results for new architectures leveraging the multi-temporal aspect of SAR data have improved the quality of mapping, particularly for agricultural classes. For instance, our newly designed network AtrousDeepForestM2 has quantitative performance similar to DeepLab (F1 of 58.1% vs. 62.1%) but produces better qualitative land cover maps.</p><p>To make our approach scalable and feasible for others, we integrate the trained models into a geoprocessing tool in ArcGIS that can also be deployed in a cloud environment and offers a variety of post-processing options to the user.</p><p>Souza, J., Carlos M., et al. (2013). "Ten-Year Landsat Classification of Deforestation and Forest Degradation in the Brazilian Amazon." Remote Sensing 5(11): 5493-5513.</p><p>Zhou, Z.-H. (2017). "A brief introduction to weakly supervised learning." National Science Review 5(1): 44-53.</p><p>Project MapBiomas - Collection 4.1 of the Brazilian Land Cover & Use Map Series, accessed January 2020 through the link: https://mapbiomas.org/colecoes-mapbiomas?cama_set_language=en</p>
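One simple tactic for the weakly supervised setting this abstract describes, treating the semi-automatic MapBiomas labels as noisy rather than exact, is to down-weight low-confidence samples in the loss. The sketch below shows the idea with scikit-learn's `sample_weight`; the per-pixel confidence scores and the simple classifier are illustrative assumptions, not the authors' training procedure:

```python
# Weak-supervision sketch: noisy land-cover labels contribute to the
# loss in proportion to an (assumed) per-pixel confidence score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))           # per-pixel features (e.g. S1/S2 bands)
y = rng.integers(0, 4, size=500)         # noisy land-cover labels
conf = rng.uniform(0.3, 1.0, size=500)   # assumed label confidence

# sample_weight lets inexact labels contribute less to the fit
clf = LogisticRegression(max_iter=200).fit(X, y, sample_weight=conf)
print(clf.classes_)
```

A deep segmentation model would apply the same idea through per-pixel weights on its cross-entropy loss.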


Author(s):  
Anil B. Gavade ◽  
Vijay S. Rajpurohit

Over the last few decades, multiple advances have been made in the classification of vegetated areas through land cover and land use. However, this classification problem is one of the most complicated and contradictory problems and has received considerable attention. To tackle it, this paper proposes a new Firefly-Harmony-Search-based Deep Belief Neural Network method (FHS-DBN) for the classification of land cover and land use. Segmentation is performed using Bayesian Fuzzy Clustering, and a feature matrix is developed. The feature matrix is given to the proposed FHS-DBN method, which distinguishes land cover from land use in multispectral satellite images for analyzing the vegetated area. The FHS-DBN method is designed by training the DBN with the FHS algorithm, which combines the Firefly Algorithm (FA) and the Harmony Search (HS) algorithm. The performance of the FHS-DBN model is evaluated using three metrics: Accuracy, True Positive Rate (TPR), and False Positive Rate (FPR). From the experimental analysis, it is concluded that the proposed FHS-DBN model achieves high classification accuracies of 0.9381, 0.9488, 0.9497, and 0.9477 on the Indian Pines, Salinas, Pavia Centre and University, and Pavia University scene datasets, respectively.
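The abstract does not detail the FHS optimizer itself. For orientation, the Harmony Search half of the hybrid can be sketched on a toy objective as below; the paper additionally blends in Firefly moves and uses the optimizer to train DBN weights, which this minimal sketch does not attempt:

```python
# Minimal Harmony Search: keep a memory of candidate solutions, improvise
# new ones from memory (with pitch adjustment) or at random, and replace
# the worst stored harmony whenever the new one is better.
import numpy as np

def harmony_search(obj, dim, iters=500, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, lo=-5.0, hi=5.0, seed=4):
    rng = np.random.default_rng(seed)
    memory = rng.uniform(lo, hi, size=(hms, dim))      # harmony memory
    scores = np.array([obj(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                    # draw from memory
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:                 # pitch adjustment
                    new[d] += rng.uniform(-bw, bw)
            else:                                      # random consideration
                new[d] = rng.uniform(lo, hi)
        s = obj(new)
        worst = scores.argmax()
        if s < scores[worst]:                          # keep if it improves
            memory[worst], scores[worst] = new, s
    return memory[scores.argmin()], float(scores.min())

best, val = harmony_search(lambda x: float(np.sum(x ** 2)), dim=3)
```

In the FHS hybrid, the candidate vectors would encode DBN weights and the objective would be the classification error.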


2019 ◽  
Vol 8 (1) ◽  
pp. 28 ◽  
Author(s):  
Quanlong Feng ◽  
Dehai Zhu ◽  
Jianyu Yang ◽  
Baoguo Li

Accurate urban land-use mapping is a challenging task in the remote-sensing field. With the availability of diverse remote sensors, the synthetic use and integration of multisource data provide an opportunity to improve urban land-use classification accuracy. Deep neural networks have achieved very promising results in computer-vision tasks such as image classification and object detection. However, the problem of designing an effective deep-learning model for the fusion of multisource remote-sensing data remains open. To tackle this issue, this paper proposes a modified two-branch convolutional neural network for the adaptive fusion of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data. Specifically, the proposed model consists of an HSI branch and a LiDAR branch that share the same network structure to reduce the time cost of network design. A residual block is utilized in each branch to extract hierarchical, parallel, and multiscale features. An adaptive feature-fusion module, based on "Squeeze-and-Excitation Networks", is proposed to integrate HSI and LiDAR features in a more reasonable and natural way. Experiments indicate that the proposed two-branch network performs well, with an overall accuracy of almost 92%. Compared with single-source data, the introduction of multisource data improves accuracy by at least 8%. The adaptive fusion model also increases classification accuracy by more than 3% compared with the feature-stacking method (simple concatenation). The results demonstrate that the proposed network can effectively extract and fuse features for better urban land-use mapping accuracy.
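The squeeze-and-excitation-style fusion referenced here can be sketched in a few lines of numpy: channel-wise gates are computed from the concatenated branch descriptors through a small bottleneck and used to rescale the features before fusion. The shapes and the tiny random weight matrices below are illustrative stand-ins for learned parameters, not the paper's architecture:

```python
# SE-style adaptive fusion sketch for two branch feature vectors:
# concatenate, pass through a bottleneck MLP, gate channels with a
# sigmoid, and rescale -- versus plain concatenation (stacking).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(5)
c = 8                                  # channels per branch (assumed)
hsi_feat = rng.normal(size=(c,))       # pooled HSI-branch features
lidar_feat = rng.normal(size=(c,))     # pooled LiDAR-branch features

z = np.concatenate([hsi_feat, lidar_feat])    # joint descriptor ("squeeze")
W1 = rng.normal(size=(2 * c, 4)) * 0.1        # bottleneck (learned in practice)
W2 = rng.normal(size=(4, 2 * c)) * 0.1
s = sigmoid(np.maximum(z @ W1, 0.0) @ W2)     # channel gates ("excitation")

fused = s * z                                 # re-weighted fused features
print(fused.shape)  # (16,)
```

Plain feature stacking would use `z` directly; the gates `s` let the network emphasize whichever source is more informative per channel, which the abstract credits with the >3% gain.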

