Meta-FSEO: A Meta-Learning Fast Adaptation with Self-Supervised Embedding Optimization for Few-Shot Remote Sensing Scene Classification

2021 · Vol 13 (14) · pp. 2776
Author(s): Yong Li, Zhenfeng Shao, Xiao Huang, Bowen Cai, Song Peng

The performance of deep learning is heavily influenced by the size of the training set, whose labeling is time-consuming and laborious. Deep learning algorithms typically assume that training and prediction data are independent and identically distributed, which is rarely the case given the attributes and properties of different data sources. In remote sensing images, representations of urban land surfaces can vary across regions and seasons, demanding models that generalize rapidly to new surface conditions. In this study, we propose Meta-FSEO, a novel model for improving few-shot remote sensing scene classification in varying urban scenes. The proposed Meta-FSEO model deploys self-supervised embedding optimization for adaptive generalization to new tasks, such as classifying features in urban regions never encountered during training, thus balancing the requirements of feature classification tasks across images collected at different times and places. We also designed a loss function that weights contrastive and cross-entropy losses. The proposed Meta-FSEO demonstrates strong generalization in remote sensing scene classification across different cities. In a five-way one-shot classification experiment on the Sentinel-1/2 Multi-Spectral (SEN12MS) dataset, the accuracy reached 63.08%; in a five-way five-shot experiment on the same dataset, it reached 74.29%. These results indicate that the proposed Meta-FSEO model outperforms both a transfer learning-based algorithm and two popular meta-learning-based methods, i.e., MAML and Meta-SGD.
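The weighted combination of contrastive and cross-entropy losses can be sketched as below; the pairwise contrastive form, the `margin`, and the `alpha` weighting are illustrative assumptions, since the abstract does not give the exact formulation:

```python
import math

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class.
    return -math.log(probs[label])

def contrastive_loss(dist, same_class, margin=1.0):
    # Classic pairwise contrastive loss: pull same-class embeddings
    # together, push different-class embeddings beyond the margin.
    if same_class:
        return 0.5 * dist ** 2
    return 0.5 * max(0.0, margin - dist) ** 2

def weighted_loss(ce, con, alpha=0.5):
    # Hypothetical convex combination of the two terms; the paper's
    # actual weights are not stated in the abstract.
    return alpha * ce + (1.0 - alpha) * con
```

Embedding distance `dist` would come from the self-supervised branch; `probs` from the classification head.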

Sensors · 2021 · Vol 21 (5) · pp. 1566
Author(s): Pei Zhang, Ying Li, Dong Wang, Jiyue Wang

While a growing number of instruments generate ever more airborne and satellite images, the bottleneck in remote sensing (RS) scene classification has shifted from data limits to a lack of ground-truth samples. Many challenges remain when facing unknown environments, especially those with insufficient training data. Few-shot classification offers a different picture under the umbrella of meta-learning: extracting rich knowledge from only a few samples is possible. In this work, we propose a method named RS-SSKD for few-shot RS scene classification, from the perspective of generating powerful representations for the downstream meta-learner. Firstly, we propose a novel two-branch network that takes three pairs of original-transformed images as inputs and incorporates Class Activation Maps (CAMs) to drive the network to mine the most relevant category-specific regions. This strategy ensures that the network generates discriminative embeddings. Secondly, we apply a round of self-knowledge distillation to prevent overfitting and boost performance. Our experiments show that the proposed method surpasses current state-of-the-art approaches on two challenging RS scene datasets: NWPU-RESISC45 and RSD46-WHU. Finally, we conduct ablation experiments to investigate the effect of each component of the proposed method and analyze the training time of state-of-the-art methods and ours.
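Self-knowledge distillation typically matches the network's softened predictions against those of an earlier generation of itself. A minimal sketch of such a distillation term follows; the temperature `T` and the Hinton-style `T**2` scaling are standard distillation conventions, not details taken from the paper:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-softened softmax over a list of logits.
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def self_distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between the softened teacher (previous-generation
    # model) and student distributions, scaled by T^2 so gradients keep
    # a comparable magnitude across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

In training, this term would be added to the ordinary cross-entropy on the hard labels.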


2021 · Vol 13 (9) · pp. 1715
Author(s): Foyez Ahmed Prodhan, Jiahua Zhang, Fengmei Yao, Lamei Shi, Til Prasad Pangali Sharma, ...

Drought, a climate-related disaster impacting a variety of sectors, poses challenges for millions of people in South Asia. Accurate and complete drought information with a proper monitoring system is very important in revealing the complex nature of drought and its associated factors. In this regard, deep learning is a very promising approach for delineating the non-linear characteristics of drought factors. Therefore, this study aims to monitor drought by employing a deep learning approach with remote sensing data over South Asia from 2001 to 2016. We considered precipitation, vegetation, and soil factors as input parameters for a deep feedforward neural network (DFNN). The study evaluated agricultural drought using the soil moisture deficit index (SMDI) as the response variable during three crop phenology stages. For a better comparison of deep learning model performance, we adopted two machine learning models, distributed random forest (DRF) and gradient boosting machine (GBM). Results show that the DFNN model outperformed the other two models for SMDI prediction. Furthermore, the results indicated that the DFNN captured the drought pattern with high spatial variability across the three phenology stages. Additionally, the DFNN model showed good stability under cross-validation in the training phase, and the estimated SMDI had high coefficients of determination (R2) ranging from 0.57 to 0.90, 0.52 to 0.94, and 0.49 to 0.82 during the start of season (SOS), length of season (LOS), and end of season (EOS), respectively. A comparison of the inter-annual variability of the estimated SMDI with the in-situ SPEI (standardized precipitation evapotranspiration index) showed that the estimated SMDI closely tracked the in-situ SPEI. The DFNN model provides comprehensive drought information by producing a consistent spatial distribution of SMDI, which establishes the applicability of the DFNN model for drought monitoring.
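The SMDI response variable is commonly computed from soil-water values via a percent deficit and a recursive accumulation (the Narasimhan-and-Srinivasan-style formulation sketched below); the study's exact preprocessing is not given in the abstract, so treat this as an illustrative assumption:

```python
def soil_water_deficit(sw, median_sw, min_sw, max_sw):
    # Percent deficit (negative) or surplus (positive) of the current
    # soil-water value relative to the long-term median for that period.
    if sw <= median_sw:
        return (sw - median_sw) / (median_sw - min_sw) * 100.0
    return (sw - median_sw) / (max_sw - median_sw) * 100.0

def smdi_series(deficits):
    # Recursive accumulation: SMDI_j = 0.5 * SMDI_{j-1} + SD_j / 50,
    # so persistent deficits deepen the index over successive periods.
    smdi, out = 0.0, []
    for sd in deficits:
        smdi = 0.5 * smdi + sd / 50.0
        out.append(smdi)
    return out
```

A DFNN regressor would then be trained to predict these SMDI values from the precipitation, vegetation, and soil inputs.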


Author(s): Xu Tang, Weiquan Lin, Chao Liu, Xiao Han, Wenjing Wang, ...

2020 · Vol 12 (7) · pp. 1092
Author(s): David Browne, Michael Giering, Steven Prestwich

Scene classification is an important aspect of image/video understanding and segmentation. However, remote-sensing scene classification is a challenging image recognition task, partly due to the limited training data, which causes deep-learning Convolutional Neural Networks (CNNs) to overfit. Another difficulty is that images often have very different scales and orientations (viewing angles). Yet another is that the resulting networks may be very large, again making them prone to overfitting and unsuitable for deployment on memory- and energy-limited devices. We propose an efficient deep-learning approach to tackle these problems. We use transfer learning to compensate for the lack of data, and data augmentation to tackle varying scale and orientation. To reduce network size, we use a novel unsupervised learning approach based on k-means clustering, applied to all parts of the network: most network-reduction methods use computationally expensive supervised learning methods and apply only to the convolutional or the fully connected layers, but not both. In experiments, we set new standards in classification accuracy on four remote-sensing and two scene-recognition image datasets.
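One way k-means clustering can shrink a network is weight quantization: cluster a layer's weights and replace each weight with its nearest centroid, so only `k` distinct values (plus small assignment indices) need to be stored. This is a minimal 1-D sketch under that assumption, not the paper's actual reduction procedure:

```python
def kmeans_1d(values, k, iters=20):
    # Simple 1-D k-means: centroids initialised evenly over the range,
    # then refined by alternating assignment and mean updates.
    lo, hi = min(values), max(values)
    cents = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - cents[c]))
            clusters[nearest].append(v)
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return cents

def quantize_weights(weights, k=4):
    # Replace every weight with its nearest cluster centroid.
    cents = kmeans_1d(weights, k)
    return [min(cents, key=lambda c: abs(w - c)) for w in weights]
```

Applying this per layer (convolutional and fully connected alike) reduces storage without any labels, which is what makes the approach unsupervised.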


Author(s): M. Papadomanolaki, M. Vakalopoulou, S. Zagoruyko, K. Karantzalos

In this paper we evaluate deep-learning frameworks based on Convolutional Neural Networks for the accurate classification of multispectral remote sensing data. Several state-of-the-art models were tested on the publicly available SAT-4 and SAT-6 high-resolution satellite multispectral datasets. In particular, the benchmark included the AlexNet, AlexNet-small, and VGG models, which were trained and applied to both datasets exploiting all the available spectral information. Deep Belief Networks, Autoencoders, and other semi-supervised frameworks were also compared. The high-level features calculated by the tested models classified the different land cover classes with significantly high accuracy rates, i.e., above 99.9%. The experimental results demonstrate the great potential of advanced deep-learning frameworks for the supervised classification of high-resolution multispectral remote sensing data.


2021
Author(s): Federico Figari Tomenotti

Change detection is a well-known topic in remote sensing. The goal is to track and monitor the evolution of changes affecting the Earth's surface over time. The recently increased availability of remote sensing data for Earth observation, and of computational power, has raised interest in this field of research. In particular, the keywords "multitemporal" and "heterogeneous" play prominent roles. The former refers to the availability and comparison of two or more satellite images of the same place on the ground, in order to find changes and track the evolution of the observed surface, possibly with different time sensitivities. The latter refers to the capability of performing change detection with images coming from different sources, corresponding to different sensors, wavelengths, polarizations, acquisition geometries, etc. This thesis addresses the challenging topic of multitemporal change detection with heterogeneous remote sensing images. It proposes a novel approach, taking inspiration from recent developments in the literature. The proposed method is based on deep learning, involving convolutional autoencoders, and represents an example of unsupervised change detection. A major novelty of the work consists in including a prior information model, used to make the method unsupervised, within a well-established algorithm such as canonical correlation analysis, and in combining these with a deep learning framework to give rise to an image translation method able to compare heterogeneous images regardless of their highly different domains. The theoretical analysis is supported by experimental results comparing the proposed methodology to the state of the art of this discipline. Two different datasets were used for the experiments, and the results obtained on both show the effectiveness of the proposed method.
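Canonical correlation analysis, the classical component the thesis builds on, finds projection directions that maximally correlate two data views. Below is a minimal numpy sketch of the first canonical pair; the ridge term `reg` and this eigen-formulation are standard textbook choices, not the thesis's exact algorithm:

```python
import numpy as np

def cca_first_pair(X, Y, reg=1e-6):
    # Zero-mean the two views (n samples x p / q features).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance matrices.
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Eigenproblem Sxx^-1 Sxy Syy^-1 Syx a = rho^2 a for the X-side
    # direction; the Y-side direction follows from it.
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = np.linalg.eig(M)
    a = np.real(vecs[:, np.argmax(np.real(vals))])
    b = np.linalg.solve(Syy, Sxy.T @ a)
    return a / np.linalg.norm(a), b / np.linalg.norm(b)
```

In a heterogeneous change-detection pipeline, such aligned projections (or their deep, nonlinear analogues) let images from very different sensor domains be compared pixel by pixel.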

