Learning Sentinel-2 Spectral Dynamics for Long-Run Predictions using Residual Neural Networks

Author(s):  
Joaquim Estopinan ◽  
Guillaume Tochon ◽  
Lucas Drumetz
Author(s):  
Y. A. Lumban-Gaol ◽  
K. A. Ohori ◽  
R. Y. Peters

Abstract. Satellite-Derived Bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill gaps in data obtained from traditional echo-sounding measurements. However, it still requires numerous training data, which are not available in many areas. Furthermore, accuracy problems remain because a linear model cannot capture the non-linear relationship between reflectance and depth caused by bottom variations and noise. Convolutional Neural Networks (CNNs) offer the ability to capture both the connection between neighbouring pixels and this non-linear relationship, which makes them compelling for shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, while Lidar and Multi Beam Echo Sounder (MBES) datasets serve as depth references to train and test the model. A set of Sentinel-2 and in-situ depth sub-image pairs is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated with a 9×9 window agrees well with the reference depths, especially in areas deeper than 15 m. Adding both short-wave infrared bands to the four visible bands during training improves the overall accuracy of SDB. Applying the pre-trained model to other study areas yields similar results depending on the water conditions.
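The abstract describes a patch-based CNN that regresses depth from a small reflectance window. The following is a minimal sketch under assumptions, not the authors' code: the layer sizes are illustrative, while the 9×9 window and the six input bands (four visible plus two SWIR) follow the abstract.

```python
# Hedged sketch of a patch-based depth-regression CNN (layer widths are assumptions).
import torch
import torch.nn as nn

class DepthCNN(nn.Module):
    def __init__(self, n_bands: int = 6, window: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * window * window, 128), nn.ReLU(),
            nn.Linear(128, 1),  # single output: depth in metres
        )

    def forward(self, x):           # x: (batch, bands, window, window)
        return self.regressor(self.features(x))

# Example: a batch of 9x9 reflectance patches with 4 visible + 2 SWIR bands.
patches = torch.randn(8, 6, 9, 9)
depths = DepthCNN()(patches)        # shape (8, 1)
```

Training such a model on Sentinel-2/Lidar-MBES sub-image pairs would use a standard regression loss (e.g. mean squared error against the reference depth of the central pixel).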


Author(s):  
Christina Corbane ◽  
Vasileios Syrris ◽  
Filip Sabo ◽  
Panagiotis Politis ◽  
Michele Melchiorri ◽  
...  

Abstract Spatially consistent and up-to-date maps of human settlements are crucial for addressing policies related to urbanization and sustainability, especially in the era of an increasingly urbanized world. The availability of open and free Sentinel-2 data from the Copernicus Earth Observation program offers a new opportunity for wall-to-wall mapping of human settlements at a global scale. This paper presents a deep-learning-based framework for fully automated extraction of built-up areas at a spatial resolution of 10 m from a global composite of Sentinel-2 imagery. A multi-neuro modeling methodology building on a simple Convolutional Neural Network architecture for pixel-wise image classification of built-up areas is developed. The core features of the proposed model are an image patch of 5 × 5 pixels, adequate for describing built-up areas in Sentinel-2 imagery, and a lightweight topology with four 2D convolutional layers, two flattened layers and a total of 1,448,578 trainable parameters. Deploying the model on the global Sentinel-2 image composite provides the most detailed and complete map of built-up areas for the reference year 2018. Validation of the results against an independent reference dataset of building footprints covering 277 sites across the world establishes the reliability of the built-up layer produced by the proposed framework and the robustness of the model. The results of this study contribute to cutting-edge research in the field of automated built-up-area mapping from remote sensing data and establish a new reference layer for analysing the spatial distribution of human settlements across the rural–urban continuum.
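A lightweight pixel-wise classifier of the kind described, with four 2D convolutional layers and two dense layers operating on 5 × 5 patches, could be sketched as follows. This is an assumption-laden illustration, not the published model: the channel widths and band count are invented and will not reproduce the reported 1,448,578 parameters.

```python
# Hedged sketch of a lightweight patch classifier (channel widths are assumptions).
import torch
import torch.nn as nn

class BuiltUpCNN(nn.Module):
    def __init__(self, n_bands: int = 4, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(              # four 2D convolutional layers
            nn.Conv2d(n_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(              # two dense ("flattened") layers
            nn.Flatten(),
            nn.Linear(256 * 5 * 5, 224), nn.ReLU(),
            nn.Linear(224, n_classes),          # built-up vs. non-built-up
        )

    def forward(self, x):                       # x: (batch, bands, 5, 5)
        return self.head(self.conv(x))

model = BuiltUpCNN()
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # trainable parameters
```

The parameter count printed at the end is how one would verify a topology against a reported budget such as the 1.4 M figure in the abstract.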


2020 ◽  
Vol 12 (10) ◽  
pp. 1620 ◽  
Author(s):  
Weichun Zhang ◽  
Hongbin Liu ◽  
Wei Wu ◽  
Linqing Zhan ◽  
Jing Wei

Rice is an important agricultural crop in the Southwest Hilly Area of China, but the region has lacked efficient and accurate monitoring methods. Recently, convolutional neural networks (CNNs) have achieved considerable success in the remote sensing community. However, they have not been widely used for mapping rice paddies, and most studies lack a comparison of the classification effectiveness, efficiency and transferability of CNNs versus classic machine learning models. This study develops various machine learning classification models with remote sensing data to compare the local accuracy of the classifiers and to evaluate the transferability of the pretrained classifiers. Two types of experiments were therefore designed: local classification experiments and model transferability experiments. They were conducted using cloud-free Sentinel-2 multi-temporal data in Banan District and Zhongxian County, typical hilly areas of Southwestern China. A pure-pixel extraction algorithm was designed based on land-use vector data and a Google Earth Online image. Four CNN algorithms (one-dimensional (Conv-1D), two-dimensional (Conv-2D) and three-dimensional (Conv-3D_1 and Conv-3D_2) convolutional neural networks) were developed and compared with four widely used classifiers (random forest (RF), extreme gradient boosting (XGBoost), support vector machine (SVM) and multilayer perceptron (MLP)). Recall, precision, overall accuracy (OA) and F1 score were used to evaluate classification accuracy. The results showed that Conv-2D performed best in the local classification experiments, with an OA of 93.14% and F1 score of 0.8552 in Banan District and an OA of 92.53% and F1 score of 0.8399 in Zhongxian County. The CNN-based models, except Conv-1D, performed better than the non-CNN classifiers. Among the non-CNN classifiers, XGBoost achieved the best result in Banan District (OA of 89.73%, F1 score of 0.7742) and SVM the best result in Zhongxian County (OA of 88.57%, F1 score of 0.7538). In the model transferability experiments, almost all CNN classifiers showed low transferability, whereas the RF and XGBoost models achieved acceptable transfer F1 scores (RF = 0.6673 and 0.6469; XGBoost = 0.7171 and 0.6709, respectively).
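The four evaluation metrics named in the abstract (recall, precision, overall accuracy and F1 score) can be computed for any of the compared classifiers in a few lines. The toy labels and predictions below are illustrative only; they are not data from the study.

```python
# Hedged sketch of the evaluation step: OA, precision, recall and F1 on toy labels.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = rice paddy, 0 = other land use (toy data)
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])   # predictions from e.g. Conv-2D or XGBoost

print("OA       :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```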


2020 ◽  
Vol 143 ◽  
pp. 02015
Author(s):  
Li Zherui ◽  
Cai Huiwen

Sea ice classification is one of the important tasks of sea ice monitoring. Accurate extraction of sea ice types is of great significance for sea ice condition assessment, smooth navigation and safe marine operations. Sentinel-2 is an optical satellite mission launched by the European Space Agency; its high spatial resolution and wide-swath imaging provide powerful support for sea ice monitoring. However, traditional supervised classification methods struggle to achieve fine-grained results for small-sample features. To address this problem, this paper proposed a sea ice extraction method based on deep learning and applied it to Liaodong Bay in the Bohai Sea, China. A convolutional neural network was used to extract and classify features from the Sentinel-2 image. The results showed that the overall accuracy of the algorithm was 85.79%, a significant improvement over traditional algorithms such as the minimum distance, maximum likelihood, Mahalanobis distance and support vector machine methods. The proposed method, which combines convolutional neural networks with high-resolution multispectral data, provides a new approach for remote sensing monitoring of sea ice.
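For context on the baselines the CNN is compared against, a minimum-distance classifier assigns each pixel's spectral vector to the class whose training mean is nearest in Euclidean distance. The sketch below is a generic illustration of that baseline; the class means and pixel values are made up, not taken from the paper.

```python
# Hedged sketch of a minimum-distance baseline classifier (toy values, assumed band count).
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (n, bands); class_means: (k, bands) -> class index per pixel."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy example: three sea-ice classes described by their mean reflectance in three bands.
means = np.array([[0.60, 0.55, 0.50],   # thick ice
                  [0.35, 0.30, 0.28],   # thin ice
                  [0.05, 0.06, 0.08]])  # open water
pixels = np.array([[0.58, 0.52, 0.49], [0.07, 0.05, 0.09]])
print(minimum_distance_classify(pixels, means))   # -> [0 2]
```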

