Deep learning-based water feature mapping using Sentinel-2 satellite image

2021 ◽  
Author(s):  
Kuldeep Chaurasia ◽  
Mayank Dixit ◽  
Ayush Goyal ◽  
Uthej K. ◽  
Adhithyaram S. ◽  
...  
2021 ◽  
Vol 13 (2) ◽  
pp. 289
Author(s):  
Misganu Debella-Gilo ◽  
Arnt Kristian Gjertsen

The size and location of agricultural fields in active use, and the type of use during the growing season, are among the vital information needed for careful planning and forecasting of agricultural production at national and regional scales. In areas where such data are not readily available, an independent seasonal monitoring method is needed. Remote sensing is a widely used tool for mapping land use types, although it has limitations that can be partly circumvented by using, among others, multiple observations, careful feature selection and appropriate analysis methods. Here, we used Sentinel-2 satellite image time series (SITS) over the land area of Norway to map three agricultural land use classes: cereal crops, fodder crops (grass) and unused areas. The Multilayer Perceptron (MLP) and two variants of the Convolutional Neural Network (CNN) were implemented on SITS data at four different temporal resolutions. This enabled us to compare twelve model-dataset combinations and identify the one that yields the most accurate predictions. The CNNs were applied along the spectral and temporal dimensions instead of the conventional spatial dimensions. Rather than using existing deep learning architectures, an autotuning procedure was implemented so that the model hyperparameters were empirically optimized during training. The results obtained on held-out test data show that up to 94% overall accuracy and 90% Cohen's Kappa can be obtained when the 2D CNN is applied to the SITS data with a temporal resolution of 7 days, closely followed by the 1D CNN on the same dataset. However, the 1D CNN performed better than the 2D CNN in predicting data outside the training set. We further observed that cereal was predicted with the highest accuracy, followed by grass. Predicting unused areas proved difficult, as there is no distinct surface condition common to all of them.
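
Since the convolutions run along the temporal rather than the spatial dimension, a minimal sketch may help. The following PyTorch snippet assumes a hypothetical per-pixel input of 10 spectral bands by 26 weekly time steps and placeholder layer widths (not the autotuned values from the study); it shows a 1D CNN convolving over time with the bands as channels.

```python
# Minimal sketch of a 1D CNN over the temporal axis of a Sentinel-2 time
# series. Band count (10), sequence length (26 weekly steps), and layer
# widths are illustrative assumptions, not the paper's autotuned values.
import torch
import torch.nn as nn

class TemporalCNN(nn.Module):
    def __init__(self, n_bands=10, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            # Spectral bands act as input channels; kernels slide over time.
            nn.Conv1d(n_bands, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(128, n_classes)  # cereal / grass / unused

    def forward(self, x):  # x: (batch, n_bands, n_time_steps)
        return self.classifier(self.features(x).squeeze(-1))

logits = TemporalCNN()(torch.randn(8, 10, 26))  # 8 pixels, 26 weekly steps
```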


2021 ◽  
Author(s):  
Thorsten Seehaus ◽  
Kamal Nambiar Gopikrishnan ◽  
Veniamin Morgenshtern ◽  
Philipp Hochreuther ◽  
Matthias Braun

Screening clouds, cloud shadows, and snow is a critical pre-processing step that must be performed before any meaningful analysis of satellite image data. The state-of-the-art F-mask algorithm, which is based on multiple pixel-level threshold tests, segments the image into clear land, cloud, cloud shadow, snow, and water classes. However, we observe that the results of this algorithm are not very accurate in polar and tundra regions. The unavailability of labeled Sentinel-2 training datasets with these classes makes traditional supervised machine learning techniques difficult to apply. Experiments with large, noisy training data on standard deep learning classification tasks such as CIFAR-10 and ImageNet have shown that neural networks learn clean labels faster than noisy ones.

We present a multi-level self-learning approach that trains a model to perform semantic segmentation on Sentinel-2 L1C images. We use a large dataset with labels annotated using the F-mask algorithm for training, and a small human-labeled dataset for validation. The validation dataset contains numerous examples where the F-mask classification would have given incorrect labels. In the first step, a deep neural network with a modified U-Net architecture is trained on a dataset automatically labeled with the F-mask algorithm. The performance on the validation dataset is used to select the best model from the step, which is then used to generate more training labels from previously unseen data. In each subsequent step, a new model is trained using the labels generated by the model from the previous step. The amount of training data increases with each step, and techniques such as data augmentation and dropout improve the generalization of the trained model. We show that the final model from our approach can outperform its teacher, i.e., the F-mask algorithm.
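
As a rough illustration, the multi-level procedure reduces to the control flow below; train_unet, predict_masks, and evaluate are hypothetical stand-ins for the actual training, inference, and validation code, so this is a schematic sketch rather than the authors' implementation.

```python
# Schematic sketch of the multi-level self-learning loop. train_unet,
# predict_masks, and evaluate are hypothetical helpers; only the control
# flow mirrors the procedure described above.
def self_learning(fmask_labeled, unlabeled_pools, val_human, n_levels=3):
    # Level 1: train on scenes labeled automatically by F-mask.
    best = train_unet(fmask_labeled, augment=True, dropout=True)
    best_score = evaluate(best, val_human)  # small human-labeled set

    for level in range(1, n_levels):
        # The current best model acts as teacher on previously unseen data.
        pseudo = predict_masks(best, unlabeled_pools[level])
        # Each level trains on more data than the last.
        model = train_unet(fmask_labeled + pseudo, augment=True, dropout=True)
        score = evaluate(model, val_human)
        if score > best_score:  # keep the student only if it beats its teacher
            best, best_score = model, score
    return best
```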


2021 ◽  
Vol 13 (5) ◽  
pp. 992
Author(s):  
Dan López-Puigdollers ◽  
Gonzalo Mateo-García ◽  
Luis Gómez-Chova

The systematic monitoring of the Earth using optical satellites is limited by the presence of clouds. Accurately detecting these clouds is necessary to exploit satellite image archives in remote sensing applications. Despite many developments, cloud detection remains an unsolved problem with room for improvement, especially over bright surfaces and thin clouds. Recently, advances in cloud masking using deep learning have shown significant boosts in cloud detection accuracy. However, these works are validated in heterogeneous ways, and the comparison with operational threshold-based schemes is inconsistent among many of them. In this work, we systematically compare deep learning models trained on Landsat-8 images across different publicly available Landsat-8 and Sentinel-2 datasets. Overall, we show that deep learning models exhibit high detection accuracy when trained and tested on independent images from the same Landsat-8 dataset (intra-dataset validation), outperforming operational algorithms. However, the performance of deep learning models is similar to that of operational threshold-based ones when tested on different datasets of Landsat-8 images (inter-dataset validation) or on datasets from a different sensor with similar radiometric characteristics, such as Sentinel-2 (cross-sensor validation). The results suggest that (i) the development of cloud detection methods for new satellites can be based on deep learning models trained on data from similar sensors and (ii) there is a strong dependence of deep learning models on the dataset used for training and testing, which highlights the need for standardized datasets and procedures for benchmarking cloud detection models in the future.
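
For clarity, the three validation regimes amount to different train/test pairings; in the sketch below the dataset identifiers and helper functions are hypothetical placeholders.

```python
# Sketch of the three validation regimes; dataset identifiers and the
# train/evaluate helpers are hypothetical placeholders.
protocols = {
    # train and test on independent images from the same Landsat-8 dataset
    "intra-dataset": ("landsat8_set_A_train", "landsat8_set_A_test"),
    # train on one Landsat-8 dataset, test on a different Landsat-8 dataset
    "inter-dataset": ("landsat8_set_A_train", "landsat8_set_B"),
    # train on Landsat-8, test on radiometrically similar Sentinel-2 data
    "cross-sensor": ("landsat8_set_A_train", "sentinel2_set"),
}

for name, (train_id, test_id) in protocols.items():
    model = train_cloud_model(load_dataset(train_id))
    print(name, evaluate(model, load_dataset(test_id)))
```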


2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to demonstrate the capability of deep learning (DL) models to automatically map burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 and Landsat-8 imagery over three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in two of the three cases, those with compact burn scars, while the ML methods seem more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance with higher kappa (around 0.9) in a heterogeneous Mediterranean fire site in Greece; Fast-SCNN performs better than the others, with kappa over 0.79, in a compact boreal forest fire with varied burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet performs best among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing in the next generation of Earth observation satellites.
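
As a small example of the headline metric, the kappa scores above can be computed from a predicted and a reference burn mask with scikit-learn; the random arrays here are synthetic stand-ins for real masks.

```python
# Computing Cohen's kappa for a predicted burned-area mask against a
# reference mask. The random arrays are synthetic stand-ins; class coding
# (1 = burned, 0 = unburned) is an assumption.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(512, 512))  # model output mask
ref = rng.integers(0, 2, size=(512, 512))   # reference burn scar

kappa = cohen_kappa_score(ref.ravel(), pred.ravel())
print(f"Cohen's kappa: {kappa:.3f}")
```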


2021 ◽  
Vol 13 (11) ◽  
pp. 2220
Author(s):  
Yanbing Bai ◽  
Wenqi Wu ◽  
Zhengxin Yang ◽  
Jinze Yu ◽  
Bo Zhao ◽  
...  

Efficiently identifying permanent and temporary water in flood disasters has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, but estimating the water type in flood disaster events from post-flood imagery alone remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms for improving flood detection, yet this field has only begun to be studied, owing to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a flood inundation mapping approach driven by multi-source data fusion, leveraging the large-scale, publicly available Sen1Floods11 dataset, which consists of 4831 labelled Sentinel-1 SAR and Sentinel-2 optical images gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We use focal loss to deal with the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirm the effectiveness of the various proposed designs. In comparison experiments, the proposed method is superior to other classical models. Our model achieves a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Floods11 test set. On the Sen1Floods11 Bolivia test set, our model also achieves strong results, with mIoU of 47.88%, IoU of 76.74%, and OA of 95.59%, showing good generalization ability.
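
Since focal loss is central to handling the water/non-water imbalance, a minimal PyTorch sketch is given below; the gamma and alpha values follow the common defaults from Lin et al. (2017) rather than necessarily the paper's settings.

```python
# Minimal sketch of a binary focal loss for water / non-water logits.
# gamma and alpha follow common defaults, not necessarily the paper's.
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # Per-element binary cross-entropy, unreduced so it can be re-weighted.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # model's probability for the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # Down-weight easy examples (p_t near 1), focus training on hard ones.
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()

logits = torch.randn(16)                     # raw scores for 16 pixels
labels = torch.randint(0, 2, (16,)).float()  # 1 = water, 0 = non-water
print(binary_focal_loss(logits, labels))
```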


2021 ◽  
Vol 54 (1) ◽  
pp. 182-208
Author(s):  
Sani M. Isa ◽  
Suharjito ◽  
Gede Putera Kusuma ◽  
Tjeng Wawan Cenggoro

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maiki Higa ◽  
Shinya Tanahara ◽  
Yoshitaka Adachi ◽  
Natsumi Ishiki ◽  
Shin Nakama ◽  
...  

In this report, we propose a deep learning technique for high-accuracy estimation of the intensity class of a typhoon from a single satellite image, by incorporating meteorological domain knowledge. By using the Visual Geometry Group model VGG-16, with images preprocessed with fisheye distortion, which enhances a typhoon's eye, eyewall, and cloud distribution, we achieved much higher classification accuracy than a previous study, even with sequential-split validation. By comparing t-distributed stochastic neighbor embedding (t-SNE) plots of the VGG feature maps with the original satellite images, we also verified that the fisheye preprocessing facilitated cluster formation, suggesting that our model could successfully extract image features related to the typhoon intensity class. Moreover, gradient-weighted class activation mapping (Grad-CAM) was applied to highlight the eye and the cloud distributions surrounding the eye, which are important regions for intensity classification; the results suggest that our model qualitatively gained a viewpoint similar to that of domain experts. A series of analyses revealed that a data-driven approach using only deep learning has limitations, and that the integration of domain knowledge could bring new breakthroughs.
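
A hedged Grad-CAM sketch on a torchvision VGG-16 illustrates how such class activation maps are typically produced; the five-class intensity head, the chosen layer, and the random input are assumptions, not the authors' exact setup.

```python
# Grad-CAM sketch on torchvision VGG-16. The 5-class head, hooked layer,
# and random input are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

model = vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 5)  # assumed intensity classes
model.eval()

acts, grads = {}, {}
layer = model.features[28]  # last conv layer of VGG-16
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
score = model(image)[0].max()        # score of the predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average grads
cam = F.relu((weights * acts["v"]).sum(dim=1))       # weighted activations
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0, 0]
```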


2021 ◽  
Vol 13 (13) ◽  
pp. 7044
Author(s):  
Dawei Wen ◽  
Song Ma ◽  
Anlu Zhang ◽  
Xinli Ke

Assessment of ecosystem services supply, demand, and budgets can help to achieve sustainable urban development. The Guangdong-Hong Kong-Macao Greater Bay Area (GBA), one of the most developed megacity regions in China, has set a goal of high-quality development while fostering ecosystem services. Assessing the ecosystem services in this study area is therefore very important for guiding further development. However, the spatial pattern of ecosystem services, especially at local scales, is not well understood. Using an available 2017 land cover product together with Sentinel-1 SAR and Sentinel-2 optical images, a deep learning land cover mapping framework integrating deep change vector analysis and the ResUnet model was proposed. Based on the resulting 10 m land cover map for the year 2020, recent spatial patterns of the ecosystem services at different scales (i.e., the GBA, 11 cities, the urban-rural gradient, and pixels) were analyzed. The results showed that: (1) Forest was the primary land cover in Guangzhou, Huizhou, Shenzhen, Zhuhai, Jiangmen, Zhaoqing, and Hong Kong, and impervious surface was the main land cover in the other four cities. (2) Although ecosystem services in the GBA were sufficient to meet their demand, there was undersupply of all three general services in Macao and of the provisioning services in Zhongshan, Dongguan, Shenzhen, and Foshan. (3) Along the urban-rural gradient in the GBA, supply and demand capacity showed an increasing and a decreasing trend, respectively. In the city-level analysis, Huizhou and Zhuhai showed a fluctuating pattern, while Jiangmen, Zhaoqing, and Hong Kong presented a decreasing pattern along the gradient. (4) Inclusion of the neighborhood landscape led to increased demand scores in a small proportion of impervious areas and oversupply in a very large percentage of bare land.
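
The change vector analysis component can be illustrated with a simple per-pixel change magnitude computed between the two dates; the composites, band count, and percentile threshold below are illustrative assumptions, not the framework's actual deep-feature variant.

```python
# Sketch of the change-vector-analysis idea: pixels whose spectral vectors
# moved little between 2017 and 2020 keep their 2017 label; the rest are
# flagged for re-classification. Threshold choice is an assumption.
import numpy as np

def change_magnitude(img_t1, img_t2):
    # img_t1, img_t2: (bands, H, W) co-registered composites for two dates
    return np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=0)

img_2017 = np.random.rand(10, 256, 256)  # stand-in Sentinel-2 composites
img_2020 = np.random.rand(10, 256, 256)

mag = change_magnitude(img_2017, img_2020)
changed = mag > np.percentile(mag, 90)   # only changed pixels are re-mapped
print(f"{changed.mean():.1%} of pixels flagged for re-classification")
```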

