Deep learning with uncertainty quantification for slum mapping using satellite imagery

Author(s):  
Thomas Fisher ◽  
Harry Gibson ◽  
Gholamreza Salimi-Khorshidi ◽  
Abdelaali Hassaine ◽  
Yutong Cai ◽  
...  

Over a billion people live in slums, where poor sanitation, education, property rights and working conditions have a direct impact on current residents and future generations. A key problem in relation to slums is slum mapping: without delineations of where slum settlements are, policymakers cannot make informed decisions to benefit those most in need. Satellite images have been used in combination with machine learning models to try to fill the gap in data availability of slum locations. Deep learning has been applied to RGB images with some success, but since labelled satellite images of slums are of relatively low quality and the physical/visual manifestation of slums varies significantly within and across countries, it is important to quantify the uncertainty of predictions for reliable application in downstream tasks. Our solution is to train Monte Carlo dropout U-Net models on multispectral 13-band Sentinel-2 images, from which we can calculate pixelwise epistemic (model) and aleatoric (data) uncertainty in our predictions. We trained our model on labelled images of Mumbai and verified our epistemic and aleatoric uncertainty quantification approach using altered models trained on modified datasets. We also used SHAP values to investigate how the different features contribute to the model's predictions; this showed that certain short-wave infrared and red-edge image bands are powerful features for determining the locations of slums within images. Having created our model with uncertainty quantification, it can in future be applied to downstream tasks, and decision-makers will know where predictions have been made with low uncertainty, giving them greater confidence in its deployment.
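As a hedged illustration of the approach described above, the sketch below shows one common way to obtain pixelwise epistemic and aleatoric uncertainty from Monte Carlo dropout: dropout stays active at inference, the network is sampled several times, and the predictive entropy is decomposed into an expected (aleatoric) term and a mutual-information (epistemic) term. The `unet` model, the binary slum/non-slum output, and this entropy-based decomposition are assumptions for illustration, not the paper's published implementation.

```python
# Minimal sketch of Monte Carlo dropout inference for pixelwise uncertainty.
# `unet` is a hypothetical pre-trained U-Net containing dropout layers; `x` is
# a batch of 13-band Sentinel-2 tiles with shape (N, 13, H, W).
import torch

def mc_dropout_predict(unet, x, n_samples=30):
    unet.eval()
    # Re-enable dropout layers only, keeping e.g. batch-norm in eval mode.
    for m in unet.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        p = torch.stack([torch.sigmoid(unet(x)) for _ in range(n_samples)])
    mean_p = p.mean(dim=0)                       # (N, 1, H, W) slum probability
    eps = 1e-8
    H = lambda q: -(q * (q + eps).log() + (1 - q) * (1 - q + eps).log())
    predictive = H(mean_p)                       # total predictive uncertainty
    aleatoric = H(p).mean(dim=0)                 # expected data uncertainty
    epistemic = predictive - aleatoric           # mutual information (model term)
    return mean_p, epistemic, aleatoric
```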

2021 ◽  
Author(s):  
Florin Tatui ◽  
Georgiana Anghelin ◽  
Sorin Constantin

Shoreline, as the interface between the upper shoreface and the beach-dune system, is sensitive to all changes in both the underwater and sub-aerial parts of the beach across a wide range of temporal scales (seconds to decades), making it a good indicator of coastal health. While more traditional techniques of shoreline monitoring present some shortcomings (low temporal resolution for photointerpretation, reduced spatial extent for video-based techniques, high costs for DGPS in-situ data acquisition), freely available satellite images can provide information for large areas (tens to hundreds of km) at very good temporal scales (days).

We employed a shoreline detection workflow for the dynamic environment of the Danube Delta coast (Black Sea), focusing on an index-based approach using the Automated Water Extraction Index (AWEI). A fully automated procedure was deployed for data processing, and the waterline was estimated at sub-pixel level with an adapted image thresholding technique. For validation purposes, 5 Sentinel-2 and 5 Landsat based results were compared with both in-situ (D)GPS measurements and manually digitized shoreline positions from very high-resolution satellite images (Pleiades, 0.5 m, and SPOT 7, 1.5 m). The overall accuracy of the methodology, expressed as mean absolute error, was approximately 7.5 m for Sentinel-2 and 4.7 m for Landsat data, respectively.

More than 200 Landsat (5 and 8) and Sentinel-2 images were processed, and the corresponding satellite-derived shorelines between 1990 and 2020 were analysed for the whole Romanian Danube Delta coast (130 km). This high number of shorelines allowed us to discriminate different patterns of coastline dynamics and behaviour that could not have been captured using usual surveying techniques: the extent of accumulation areas induced by the 2005-2006 historical river floods, the impact of different high-energy storms and the subsequent beach recovery after these events, the alongshore movement of erosional processes in accordance with the dominant direction of longshore sediment transport, and multi-annual differences in both erosional and accumulation trends. Moreover, a very important result of our analysis is the zonation of the Danube Delta coast based on multi-annual trends of shoreline dynamics at finer alongshore spatial resolution than before. This has significant implications for future studies dealing with different scenarios of Danube Delta response to projected sea level rise and increased storminess.

The presented approach and resulting products offer an optimal combination of data availability, accuracy and frequency to meet the monitoring and management needs of the increasing number of stakeholders involved in coastal zone protection activities.
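To make the index-based step concrete, here is a minimal sketch (not the authors' exact workflow) of computing an AWEI variant, thresholding it, and extracting a sub-pixel waterline as an iso-contour. The band arrays, the choice of the "no shadow" AWEI formulation of Feyisa et al. (2014), and the use of Otsu's method as the adaptive threshold are assumptions.

```python
# AWEI-based waterline sketch: band arrays are hypothetical NumPy reflectance
# grids (Sentinel-2 naming: B3 = green, B8 = NIR, B11/B12 = SWIR1/SWIR2).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import find_contours

def awei_nsh(green, nir, swir1, swir2):
    # "No shadow" AWEI variant; water pixels take high positive values.
    return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

def extract_waterline(green, nir, swir1, swir2):
    awei = awei_nsh(green, nir, swir1, swir2)
    level = threshold_otsu(awei)           # scene-adaptive land/water threshold
    # find_contours interpolates linearly between pixel centres, so the
    # returned (row, col) vertices locate the waterline at sub-pixel precision.
    return find_contours(awei, level)
```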


2021 ◽  
Vol 13 (1) ◽  
pp. 157
Author(s):  
Jun Li ◽  
Zhaocong Wu ◽  
Zhongwen Hu ◽  
Zilong Li ◽  
Yisong Wang ◽  
...  

Thin clouds seriously affect the availability of optical remote sensing images, especially in visible bands. Short-wave infrared (SWIR) bands are less influenced by thin clouds, but usually have lower spatial resolution than visible (Vis) bands in high spatial resolution remote sensing images (e.g., in Sentinel-2A/B, CBERS04, ZY-1 02D and HJ-1B satellites). Most cloud removal methods do not take advantage of the spectral information available in SWIR bands, which are less affected by clouds, to restore the background information tainted by thin clouds in Vis bands. In this paper, we propose CR-MSS, a novel deep learning-based thin cloud removal method that takes the SWIR and vegetation red edge (VRE) bands as inputs in addition to visible/near infrared (Vis/NIR) bands, in order to improve cloud removal in Sentinel-2 visible bands. Contrary to some traditional and deep learning-based cloud removal methods, which use manually designed rescaling algorithms to handle bands at different resolutions, CR-MSS uses convolutional layers to automatically process bands at different resolutions. CR-MSS has two input/output branches that are designed to process Vis/NIR and VRE/SWIR bands, respectively. Firstly, Vis/NIR cloudy bands are down-sampled by a convolutional layer to low spatial resolution features, which are then concatenated with the corresponding features extracted from VRE/SWIR bands. Secondly, the concatenated features are fed into a fusion tunnel to down-sample and fuse the spectral information from Vis/NIR and VRE/SWIR bands. Thirdly, a decomposition tunnel is designed to up-sample and decompose the fused features. Finally, a transpose convolutional layer is used to up-sample the feature maps to the resolution of the input Vis/NIR bands. CR-MSS was trained on 28 real Sentinel-2A image pairs over the globe, and tested separately on eight real cloud image pairs and eight simulated cloud image pairs. The average SSIM (Structural Similarity Index Measurement) values for CR-MSS results on the Vis/NIR bands over all testing images were 0.69, 0.71, 0.77, and 0.81, respectively, on average 1.74% higher than the best baseline method. The visual results on real Sentinel-2 images demonstrate that CR-MSS can produce more realistic cloud and cloud shadow removal results than baseline methods.
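The following PyTorch sketch illustrates the two-branch layout described above (down-sample Vis/NIR, concatenate with VRE/SWIR features, fuse, decompose, up-sample); all layer widths and kernel choices are illustrative assumptions rather than the published CR-MSS configuration.

```python
# Schematic two-branch cloud removal network (illustrative, not CR-MSS itself).
# Sentinel-2 Vis/NIR = 4 bands at 10 m; VRE/SWIR = 6 bands at 20 m.
import torch
import torch.nn as nn

class TwoBranchCloudRemoval(nn.Module):
    def __init__(self, vis_nir_bands=4, vre_swir_bands=6, width=64):
        super().__init__()
        # Strided conv brings 10 m Vis/NIR features down to the 20 m VRE/SWIR grid.
        self.vis_down = nn.Conv2d(vis_nir_bands, width, 3, stride=2, padding=1)
        self.vre_in = nn.Conv2d(vre_swir_bands, width, 3, padding=1)
        self.fuse = nn.Sequential(            # "fusion tunnel"
            nn.Conv2d(2 * width, 2 * width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(2 * width, 2 * width, 3, padding=1), nn.ReLU(),
        )
        self.decompose = nn.Sequential(       # "decomposition tunnel"
            nn.ConvTranspose2d(2 * width, width, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.vre_out = nn.Conv2d(width, vre_swir_bands, 3, padding=1)
        # Transpose conv restores the 10 m Vis/NIR output resolution.
        self.vis_up = nn.ConvTranspose2d(width, vis_nir_bands, 4, stride=2, padding=1)

    def forward(self, vis_nir, vre_swir):
        f = torch.cat([self.vis_down(vis_nir), self.vre_in(vre_swir)], dim=1)
        f = self.decompose(self.fuse(f))
        return self.vis_up(f), self.vre_out(f)
```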


Author(s):  
A. Htitiou ◽  
A. Boudhar ◽  
Y. Lebrini ◽  
T. Benabdelouahab

Abstract. Remote sensing offers spatially explicit and temporally continuous observational data on various land surface parameters such as vegetation index, land surface temperature, soil moisture, leaf area index, and evapotranspiration, which can be widely leveraged for various applications at different scales and in different contexts. One of the main applications is agricultural monitoring, where a smart system based on precision agriculture requires a set of satellite images with high resolution in both time and space to capture phenological stages and fine spatial details, especially in landscapes with pronounced spatial heterogeneity and temporal variation. These requirements sometimes cannot be met by a single sensor, due to the trade-off between spatial and temporal resolutions and/or the influence of cloud cover. The data availability of the new-generation multispectral sensors on the Landsat-8 (L8) and Sentinel-2 (S2) satellites offers unprecedented options for such applications. Given this, the current study aims to demonstrate how the synergistic use of these optical sensors can efficiently support such an application. Herein, this study proposes a deep learning spatiotemporal data fusion method to meet the need for predicting a dense time series of vegetation index at fine spatial resolution. The results show that the developed method creates more accurate fused NDVI time-series data, from which phenological stages and characteristics in single-crop fields could be derived, while keeping more spatial detail in such a heterogeneous landscape.
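As a small hedged sketch of the inputs such a fusion model consumes, the snippet below computes NDVI per sensor and resamples the coarser Landsat-8 result onto the finer Sentinel-2 grid; the band arrays and the bilinear resampling are illustrative assumptions, not the study's fusion method itself.

```python
# NDVI inputs for a hypothetical spatiotemporal fusion model.
import numpy as np
from scipy.ndimage import zoom

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index, (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectance tiles: Sentinel-2 B8/B4 at 10 m, Landsat-8 B5/B4 at 30 m.
s2_nir, s2_red = np.random.rand(300, 300), np.random.rand(300, 300)
l8_nir, l8_red = np.random.rand(100, 100), np.random.rand(100, 100)

fine_ndvi = ndvi(s2_nir, s2_red)
coarse_ndvi = zoom(ndvi(l8_nir, l8_red), 3, order=1)  # 30 m -> 10 m grid
# A fusion network is then trained to predict the fine-resolution NDVI on
# dates where only the coarse observation is available.
```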


Author(s):  
Jakob Sigurdsson ◽  
Magnus O. Ulfarsson ◽  
Johannes R. Sveinsson

2021 ◽  
Author(s):  
Rostyslav-Mykola Tsenov

In recent years, many remote sensing problems have benefited from the improvements made in deep learning. In particular, deep learning semantic segmentation algorithms have provided improved frameworks for the automated production of land-use and land-cover (LULC) maps. Automation of LULC map production can significantly increase its production frequency, which greatly benefits areas such as natural resource management, wildlife habitat protection, urban expansion, damage delineation, etc. In this thesis, many different convolutional neural networks (CNNs) were examined in combination with various state-of-the-art semantic segmentation methods and extensions to improve the accuracy of predicted LULC maps. Most of the experiments were carried out using Landsat 5/7 and Landsat 8 satellite images. Additionally, unsupervised domain adaptation (UDA) architectures were explored to transfer knowledge extracted from a labelled Landsat 8 dataset to unlabelled Sentinel-2 satellite images. The performance of various CNN and extension combinations was carefully assessed, where VGGNet with an output stride of 4 and a modified U-Net architecture provided the best results. Additionally, an expanded analysis of the generated LULC maps for various sensors was provided. The contributions of this thesis are accurate automated LULC map predictions that achieved ~92.4% accuracy using deep neural networks; a model trained on a larger area, six times the size of that used in previous work, for both 8-bit Landsat 5/7 and 16-bit Landsat 8 sensors; and a network architecture to produce LULC maps for unlabelled 12-bit Sentinel-2 data with the knowledge extracted from labelled Landsat 8 data.
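For reference, the overall-accuracy figure quoted above corresponds to a simple pixelwise agreement metric; the sketch below shows one way to compute it, with hypothetical predicted and reference LULC maps and an assumed no-data value.

```python
# Pixelwise overall accuracy for a predicted LULC map (illustrative sketch).
import numpy as np

def overall_accuracy(pred, ref, nodata=255):
    """Fraction of labelled pixels whose predicted class matches the reference."""
    valid = ref != nodata                 # ignore unlabelled pixels
    return float(np.mean(pred[valid] == ref[valid]))

pred = np.random.randint(0, 9, (512, 512))   # 9 hypothetical LULC classes
ref = np.random.randint(0, 9, (512, 512))
print(f"overall accuracy: {overall_accuracy(pred, ref):.1%}")
```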


2021 ◽  
Author(s):  
Yassir Benhammou ◽  
Domingo Alcaraz-Segura ◽  
Emilio Guirado ◽  
Rohaifa Khaldi ◽  
Boujemâa Achchab ◽  
...  

Abstract. Land-Use and Land-Cover (LULC) mapping is relevant for many applications, from Earth system and climate modelling to territorial and urban planning. Global LULC products are continuously developing as remote sensing data and methods grow. However, there is still low consistency among LULC products due to low accuracy in some regions and for some LULC types. Here, we introduce Sentinel2GlobalLULC, a Sentinel-2 RGB image dataset built from the consensus of 15 global LULC maps available in Google Earth Engine. Sentinel2GlobalLULC v1.1 contains 195572 RGB images organized into 29 global LULC mapping classes. Each image is a tile of 224 × 224 pixels at 10 × 10 m spatial resolution and was built as a cloud-free composite from all Sentinel-2 images acquired between June 2015 and October 2020. Metadata includes a unique LULC type annotation per image, together with the level of consensus, reverse geo-referencing, and the global human modification index. Sentinel2GlobalLULC is optimized for state-of-the-art deep learning models, providing a new gateway towards building precise and robust global or regional LULC maps.
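As a hedged sketch of how such a cloud-free composite over one tile might be assembled in the Google Earth Engine Python API, the snippet below filters Sentinel-2 scenes to the dataset's acquisition window, masks clouds with the QA60 band, and takes a median RGB composite; the region geometry and the masking recipe are illustrative assumptions, not the dataset's exact pipeline.

```python
# Cloud-free Sentinel-2 RGB median composite over one hypothetical tile.
import ee
ee.Initialize()

def mask_s2_clouds(img):
    qa = img.select('QA60')
    # Bits 10 and 11 flag opaque and cirrus clouds, respectively.
    mask = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
    return img.updateMask(mask)

region = ee.Geometry.Rectangle([2.0, 41.0, 2.02, 41.02])  # hypothetical tile
composite = (ee.ImageCollection('COPERNICUS/S2')
             .filterBounds(region)
             .filterDate('2015-06-01', '2020-10-31')
             .map(mask_s2_clouds)
             .select(['B4', 'B3', 'B2'])   # RGB bands at 10 m
             .median()
             .clip(region))
```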


2021 ◽  
Vol 9 (1) ◽  
pp. 40
Author(s):  
Lampros Tasiopoulos ◽  
Marianthi Stefouli ◽  
Yorghos Voutos ◽  
Phivos Mylonas ◽  
Eleni Charou

Climate change could exacerbate floods on agricultural plains by increasing the frequency of extreme and adverse meteorological events. Flood extent maps can be a valuable source of information for agricultural land decision makers, risk management, and emergency planning. We propose a method that combines various types of data and processing techniques to achieve accurate flood extent maps. The application aims to estimate the percentage of agricultural land covered by floods through an automatic map estimation methodology based on freely available Sentinel-2 (S2) satellite images and machine learning techniques.
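The final bookkeeping step, reporting the share of agricultural land under water, reduces to simple mask arithmetic; the sketch below assumes a flood mask (however classified) and an agricultural-land mask on the same Sentinel-2 grid, both hypothetical.

```python
# Share of agricultural pixels covered by a flood mask (illustrative sketch).
import numpy as np

def flooded_fraction(flood_mask, agri_mask):
    """Fraction of agricultural pixels that the flood mask covers."""
    agri_pixels = agri_mask.sum()
    if agri_pixels == 0:
        return 0.0
    return float((flood_mask & agri_mask).sum() / agri_pixels)

flood_mask = np.random.rand(1000, 1000) > 0.8   # hypothetical classifier output
agri_mask = np.random.rand(1000, 1000) > 0.5    # hypothetical land-use mask
print(f"flooded agricultural land: {flooded_fraction(flood_mask, agri_mask):.1%}")
```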


2019 ◽  
Vol 11 (18) ◽  
pp. 2184 ◽  
Author(s):  
Baik ◽  
Son ◽  
Kim

On 15 November 2017, liquefaction phenomena were observed around the epicenter after a magnitude 5.4 earthquake occurred in Pohang, in southeast Korea. In this study, we attempted to detect areas of sudden water content increase by using SAR (synthetic aperture radar) and optical satellite images. We analyzed coherence changes using Sentinel-1 SAR coseismic image pairs and analyzed NDWI (normalized difference water index) changes using Landsat 8 and Sentinel-2 optical satellite images from before and after the earthquake. Coherence analysis showed no liquefaction-induced surface changes. The NDWI time series analysis models using Landsat 8 and Sentinel-2 optical images confirmed liquefaction phenomena close to the epicenter but could not detect liquefaction phenomena far from the epicenter. We proposed and evaluated the TDLI (temporal difference liquefaction index), which uses only one SWIR (short-wave infrared) band at 2200 nm that is sensitive to soil moisture content. The Sentinel-2 TDLI was most consistent with field observations where sand blows from liquefaction were confirmed. We found that Sentinel-2, with its shorter revisit period compared to that of Landsat 8 (5 days vs. 16 days), was more effective for detecting traces of short-lived liquefaction phenomena on the surface. The Sentinel-2 TDLI could help facilitate rapid investigations of and responses to liquefaction damage.
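The abstract does not spell out the TDLI formula, so the following is only a plausible sketch: a normalized temporal difference of the ~2200 nm SWIR band (Sentinel-2 B12) between pre- and post-event acquisitions, with an illustrative detection threshold.

```python
# Plausible TDLI-style index (assumption, not the paper's exact definition).
import numpy as np

def tdli(swir_pre, swir_post, eps=1e-6):
    # Wetter soil absorbs more SWIR, so a post-event reflectance drop
    # (positive index) suggests increased surface moisture from liquefaction.
    return (swir_pre - swir_post) / (swir_pre + swir_post + eps)

swir_pre = np.random.rand(500, 500)    # hypothetical pre-event B12 reflectance
swir_post = np.random.rand(500, 500)   # hypothetical post-event B12 reflectance
candidates = tdli(swir_pre, swir_post) > 0.1   # illustrative threshold
```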


2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to demonstrate the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 and Landsat-8 imagery over three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in two of the three cases with compact burned scars, while ML methods seem to be more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance with higher kappa (around 0.9) at a heterogeneous Mediterranean fire site in Greece, while Fast-SCNN performs better than the others, with kappa over 0.79, for a compact boreal forest fire of varying burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet dominates among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing on the next generation of Earth observation satellites.
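For context, the kappa scores reported above measure chance-corrected agreement between predicted and reference burned-area maps; a minimal sketch using scikit-learn, with hypothetical flattened masks, follows.

```python
# Cohen's kappa between predicted and reference burned-area masks.
import numpy as np
from sklearn.metrics import cohen_kappa_score

pred = np.random.randint(0, 2, 10000)   # 1 = burned, 0 = unburned (hypothetical)
ref = np.random.randint(0, 2, 10000)    # hypothetical reference labels
print(f"Cohen's kappa: {cohen_kappa_score(ref, pred):.3f}")
```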


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Abstract. Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. Compared to the ground truth, the deep learning and ratiometric approaches performed better than the phasor analysis approach. The deep learning approach had the highest precision of the three methods, while the ratiometric approach was the most versatile and maintained accuracy, albeit with lower precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
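As a hedged sketch of the ratiometric idea, the snippet below flags pixels whose blue-to-red ratio exceeds a threshold, since Prussian blue staining depresses the red channel relative to blue; the threshold value and input image are illustrative assumptions, not the paper's calibrated parameters.

```python
# Ratiometric detection of Prussian blue-stained pixels (illustrative sketch).
import numpy as np

def prussian_blue_mask(rgb, ratio_threshold=1.3, eps=1e-6):
    """Boolean mask of pixels whose blue/red ratio exceeds the threshold."""
    rgb = rgb.astype(float)
    ratio = rgb[..., 2] / (rgb[..., 0] + eps)   # blue channel over red channel
    return ratio > ratio_threshold

image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # hypothetical
mask = prussian_blue_mask(image)
print(f"stained area: {mask.mean():.2%} of pixels")
```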

