Enhancement of Detecting Permanent Water and Temporary Water in Flood Disasters by Fusing Sentinel-1 and Sentinel-2 Imagery Using Deep Learning Algorithms: Demonstration of Sen1Floods11 Benchmark Datasets

2021 ◽  
Vol 13 (11) ◽  
pp. 2220
Author(s):  
Yanbing Bai ◽  
Wenqi Wu ◽  
Zhengxin Yang ◽  
Jinze Yu ◽  
Bo Zhao ◽  
...  

Efficiently distinguishing permanent water from temporary water in flood disasters has mainly relied on change-detection methods applied to multi-temporal remote sensing imagery, but estimating the water type from post-flood imagery alone remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms for improving flood detection, yet this field has only begun to be explored owing to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach that leverage the large-scale, publicly available Sen1Floods11 dataset, consisting of roughly 4831 labelled Sentinel-1 SAR and Sentinel-2 optical images gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for identifying surface water, permanent water, and temporary water, with all tasks sharing the same convolutional neural network architecture. We use focal loss to deal with the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirm the effectiveness of the proposed designs. In comparison experiments, the proposed method is superior to other classical models, achieving a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Floods11 test set. On the Sen1Floods11 Bolivia test set, our model also achieves an mIoU of 47.88%, IoU of 76.74%, and OA of 95.59%, showing good generalization ability.
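The focal loss used here to handle the water/non-water imbalance follows the standard binary formulation; a minimal NumPy sketch (the α and γ values are illustrative defaults, not necessarily those used in the paper):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss (Lin et al., 2017) for water/non-water pixels.

    p : predicted probability of the positive (water) class, per pixel
    y : ground-truth label (1 = water, 0 = non-water)
    alpha, gamma : balancing and focusing hyperparameters (illustrative)
    """
    p = np.clip(p, eps, 1.0 - eps)
    # p_t is the probability the model assigned to the true class
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    # The (1 - p_t)^gamma factor down-weights easy, well-classified pixels
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confidently correct pixel contributes far less loss than a hard one
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.30]), np.array([1]))[0]
```

This down-weighting is what lets the abundant, easily classified non-water pixels stop dominating the gradient.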

2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to expound the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 and Landsat-8 imagery over three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in the two cases with compact burn scars, while ML methods seem more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance with higher kappa (around 0.9) on a heterogeneous Mediterranean fire site in Greece, while Fast-SCNN performs better than the others, with kappa over 0.79, on a compact boreal forest fire with varying burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet dominates among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing on the next generation of Earth observation satellites.
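The kappa values reported above measure agreement beyond chance between a predicted burn map and the reference map. A minimal sketch of Cohen's kappa computed from two label arrays:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa from two label arrays: agreement beyond chance."""
    labels = np.union1d(y_true, y_pred)
    n = len(y_true)
    # Build the confusion matrix
    cm = np.zeros((len(labels), len(labels)))
    for i, t in enumerate(labels):
        for j, p in enumerate(labels):
            cm[i, j] = np.sum((y_true == t) & (y_pred == p))
    po = np.trace(cm) / n                        # observed agreement
    pe = np.sum(cm.sum(0) * cm.sum(1)) / n**2    # chance agreement
    return (po - pe) / (1.0 - pe)

# Toy burned (1) / unburned (0) pixel labels
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1, 1, 0])
k = cohens_kappa(y_true, y_pred)
```

Unlike overall accuracy, kappa stays low when a classifier merely labels everything as the majority class, which matters for scenes dominated by unburned pixels.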


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Urban area mapping is an important application of remote sensing that aims at both estimating land cover and detecting change in land cover in urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) remote sensing data is the strong similarity between highly vegetated urban areas or oriented urban targets and actual vegetation, which leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), were implemented along with the deep learning model DeepLabv3+ for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. In the current work, it is shown that the pre-trained deep learning model DeepLabv3+ outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and an overall pixel accuracy of 85.65% were achieved with DeepLabv3+; Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47%, respectively.
The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
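The per-class precision reported for the urban class is the fraction of pixels predicted as urban that truly are urban (TP / (TP + FP)). A minimal sketch on a toy label map:

```python
import numpy as np

def class_precision(y_true, y_pred, cls):
    """Precision for one class: of all pixels predicted as `cls`,
    the fraction that truly belong to it (TP / (TP + FP))."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    return tp / (tp + fp)

# Toy map: 0 = vegetation, 1 = urban
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0])
p_urban = class_precision(truth, pred, 1)   # 3 TP, 1 FP
```

A high urban precision is exactly what the study targets: few vegetation pixels wrongly swept into the urban class.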


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3982
Author(s):  
Giacomo Lazzeri ◽  
William Frodella ◽  
Guglielmo Rossi ◽  
Sandro Moretti

Wildfires have affected global forests and the Mediterranean area with increasing frequency and intensity in recent years, as climate change results in reduced precipitation and higher temperatures. To assess the impact of wildfires on the environment, burned area mapping has become progressively more relevant. Burned area mapping was initially carried out via field sketches; the advent of satellite remote sensing opened new possibilities, reducing the cost and uncertainty of earlier techniques while improving safety. In the present study, an experimental methodology was adopted to test the potential of advanced remote sensing techniques, such as multispectral Sentinel-2, the PRISMA hyperspectral satellite, and UAV (unmanned aerial vehicle) remotely sensed data, for the multitemporal mapping of burned areas by soil–vegetation recovery analysis at two test sites in Portugal and Italy. In case study one, an innovative multiplatform data classification was performed by correlating Sentinel-2 RBR (relativized burn ratio) fire severity classes with the scene hyperspectral signature, using a pixel-by-pixel comparison that leads to a converging classification. In the adopted methodology, the RBR burned area analysis and vegetation recovery were tested for accordance with biophysical vegetation parameters (LAI, fCover, and fAPAR). In case study two, a UAV-sensed NDVI index was adopted for high-resolution mapping data collection. At a large scale, the Sentinel-2 RBR index proved efficient for burned area analysis, from the perspectives of both fire severity and vegetation recovery. Despite the time elapsed between the event and the acquisition, the PRISMA hyperspectral converging classification based on Sentinel-2 was able to detect and discriminate different spectral signatures corresponding to different fire severity classes.
At the slope scale, the UAV platform proved to be an effective tool for mapping and characterizing the burned area, offering a clear advantage over field GPS mapping. The results highlight that UAV platforms, if equipped with a hyperspectral sensor and used in a synergistic approach with PRISMA, would constitute a useful tool for classifying satellite-acquired data scenes, allowing for the acquisition of ground truth.
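The RBR index used for the fire severity classes relativizes the pre/post-fire change in the Normalized Burn Ratio (dNBR) by the pre-fire NBR. A sketch with illustrative reflectances (the Sentinel-2 B8A/B12 band pairing and the numbers are assumptions, not the study's exact inputs):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance
    (e.g. Sentinel-2 bands B8A and B12)."""
    return (nir - swir) / (nir + swir)

def rbr(nbr_pre, nbr_post):
    """Relativized Burn Ratio (Parks et al., 2014):
    dNBR scaled by pre-fire NBR to compare severity across vegetation densities."""
    dnbr = nbr_pre - nbr_post
    return dnbr / (nbr_pre + 1.001)

# Illustrative reflectances: healthy vegetation pre-fire, char post-fire
nbr_pre = nbr(np.array([0.45]), np.array([0.15]))
nbr_post = nbr(np.array([0.15]), np.array([0.30]))
severity = rbr(nbr_pre, nbr_post)[0]
```

The `+ 1.001` offset keeps the denominator positive even over sparsely vegetated pixels with negative pre-fire NBR.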


2018 ◽  
Vol 8 (4) ◽  
pp. 34 ◽  
Author(s):  
Vishal Saxena ◽  
Xinyu Wu ◽  
Ira Srivastava ◽  
Kehan Zhu

The ongoing revolution in deep learning is redefining the nature of computing, driven by a growing volume of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still predominates due to the flexibility of software implementations and the maturity of algorithms. However, it is increasingly desirable for cognitive computing to occur at the edge, i.e., on energy-constrained hand-held devices, which is energy-prohibitive when employing digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise, but offer lower neurosynaptic density than is needed for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that will transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-Chip (NeuSoC) architectures promise machine learning capability at chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstrations of such architectures have been limited because the performance of emerging memory devices falls short of the behavior expected from idealized memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realize large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with ‘brain-like’ energy efficiency.
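The core appeal of memristor-based analog synapses is that a crossbar array computes a vector-matrix product physically, in one step, via Ohm's and Kirchhoff's laws. A toy numerical sketch of that operation (values are illustrative; real devices add noise, nonlinearity, and conductance drift):

```python
import numpy as np

# A memristor crossbar performs I = G^T · V in the analog domain:
# G holds the programmed synaptic conductances (rows = inputs,
# columns = neurons) and V holds the input spike voltages.
G = np.array([[1.0, 0.2],    # conductances, arbitrary units
              [0.5, 0.8],
              [0.1, 0.9]])
V = np.array([0.3, 0.1, 0.0])  # input voltages (e.g. active rows)

# Each column current is the weighted sum read out by CMOS sense circuitry
I = G.T @ V
```

This single-step multiply-accumulate is what lets such arrays bypass the von Neumann bottleneck of shuttling weights between memory and processor.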


2020 ◽  
Vol 12 (12) ◽  
pp. 2013
Author(s):  
Konstantinos Topouzelis ◽  
Dimitris Papageorgiou ◽  
Alexandros Karagaitanakis ◽  
Apostolos Papakonstantinou ◽  
Manuel Arias Ballesteros

Remote sensing is a promising tool for the detection of floating marine plastics, offering extensive area coverage and frequent observations. While floating plastics are reported in high concentrations in many places around the globe, no reference dataset exists either for understanding the spectral behavior of floating plastics in a real environment or for calibrating remote sensing algorithms and validating their results. To tackle this problem, we initiated the Plastic Litter Projects (PLPs), in which large artificial plastic targets were constructed and deployed on the sea surface. The first such experiment was realised in the summer of 2018 (PLP2018) with three large 10 × 10 m targets. Here, we present the second Plastic Litter Project (PLP2019), in which smaller 5 × 5 m targets were constructed to better simulate near-real conditions and examine the limits of detection with Sentinel-2 images. The smaller targets and multiple acquisition dates allowed for several observations, with the targets connected in a modular way to create configurations of varying size, material composition, and coverage. A spectral signature for the PET (polyethylene terephthalate) targets was produced by modifying the U.S. Geological Survey PET signature using an inverse spectral unmixing calculation, and the resulting signature was used to perform matched filtering on the Sentinel-2 images. The results provide evidence that, under suitable conditions, pixels with a PET abundance fraction as low as 25% can be successfully detected, while pinpointing several factors that significantly impact detection capabilities. To the best of our knowledge, the 2018 and 2019 Plastic Litter Projects are to date the only large-scale field experiments on the remote detection of floating marine litter in a near-real environment, and they can serve as a reference for more extensive validation/calibration campaigns.
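Matched filtering scores each pixel's spectrum against a target signature relative to background statistics, so that a pure target scores 1 and pure background scores 0. A minimal sketch with a hypothetical four-band PET signature (not the modified USGS signature used in the study):

```python
import numpy as np

def matched_filter(x, target, background):
    """Matched-filter abundance score of pixel spectra against a target.

    x          : (n_pixels, n_bands) pixel spectra
    target     : (n_bands,) target signature (e.g. PET)
    background : (n_bg, n_bands) background spectra for mean/covariance
    """
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    d = target - mu
    # Normalised so a pure target pixel scores exactly 1
    w = cov_inv @ d / (d @ cov_inv @ d)
    return (x - mu) @ w

rng = np.random.default_rng(0)
bg = rng.normal(0.1, 0.01, size=(500, 4))     # water-like background
pet = np.array([0.3, 0.25, 0.4, 0.2])         # hypothetical PET signature
mixed = 0.25 * pet + 0.75 * bg.mean(axis=0)   # 25%-abundance pixel
scores = matched_filter(np.vstack([bg.mean(axis=0), mixed, pet]), pet, bg)
```

The linearity of the score is why a detection threshold maps directly onto an abundance fraction, e.g. the 25% sub-pixel coverage reported above.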


2020 ◽  
pp. 35
Author(s):  
M. Campos-Taberner ◽  
F.J. García-Haro ◽  
B. Martínez ◽  
M.A. Gilabert

The use of deep learning techniques for remote sensing applications has increased recently. These algorithms have proven successful in parameter estimation and image classification. However, little effort has been made to make them understandable, leading to their implementation as “black boxes”. This work aims to evaluate the performance and clarify the operation of a deep learning algorithm based on a bi-directional recurrent network of long short-term memory (2-BiLSTM). Land use classification in the Valencian Community based on Sentinel-2 image time series, in the framework of the common agricultural policy (CAP), is used as an example. It is verified that the accuracy of the deep learning technique is superior (98.6% overall accuracy) to that of other algorithms such as decision trees (DT), k-nearest neighbors (k-NN), neural networks (NN), support vector machines (SVM), and random forests (RF). The performance of the classifier has been studied as a function of time and of the predictors used. It is concluded that, in the study area, the most relevant information used by the network in the classification is the imagery corresponding to summer and the spectral and spatial information derived from the red and near-infrared bands. These results open the door to new studies in the field of explainable deep learning in remote sensing applications.


2021 ◽  
Vol 13 (22) ◽  
pp. 4674
Author(s):  
Yuqing Qin ◽  
Jie Su ◽  
Mingfeng Wang

The formation and distribution of melt ponds have an important influence on the Arctic climate, so it is necessary to obtain more accurate information on melt ponds on Arctic sea ice by remote sensing. Current large-scale melt pond products, especially melt pond fraction (MPF), still require verification, and very high resolution optical satellite remote sensing data are a good way to verify large-scale MPF retrievals. Unlike most MPF algorithms using very high resolution data, the LinearPolar algorithm applied to Sentinel-2 data does not assume a fixed melt pond albedo. In this paper, by selecting the best band combination, we applied this algorithm to Landsat 8 (L8) data. Sentinel-2 data, as well as the support vector machine (SVM) and iterative self-organizing data analysis technique (ISODATA) algorithms, were used as comparison and verification data. The results show that the recognition accuracy of the LinearPolar algorithm for melt ponds is higher than that of previous algorithms. The overall accuracy and kappa coefficient achieved with the LinearPolar algorithm on L8 and Sentinel-2A (S2), the SVM algorithm, and the ISODATA algorithm are 95.38% and 0.88, 94.73% and 0.86, and 92.40% and 0.80, respectively, much higher than those of the principal component analysis (PCA) and Markus algorithms. The mean MPF (10.0%) obtained from 80 L8 cases with the LinearPolar algorithm is much closer to that from Sentinel-2 (10.9%) than to the Markus (5.0%) or PCA (4.2%) results, with a mean MPF difference of only 0.9%, and the correlation coefficient of the two MPFs is as high as 0.95. The overall relative error of the LinearPolar algorithm is 53.5% and 46.4% lower than that of the Markus and PCA algorithms, respectively, and the root mean square error (RMSE) is 30.9% and 27.4% lower, respectively.
In cases without obvious melt ponds, the relative error is reduced more than in cases with obvious melt ponds, because the LinearPolar algorithm can identify 100% of dark melt ponds and relatively small melt ponds, and the latter contribute more to the reduction in the relative error of MPF retrieval. With a wider range and longer time series, MPF retrievals from Landsat data are more efficient than those from Sentinel-2 for verifying large-scale MPF products or for long-term monitoring of a fixed area.
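MPF is simply the fraction of sea-ice pixels classified as melt pond, and the comparison metrics above (RMSE, relative error) are standard. A minimal sketch of both computations on toy data (class codes and the per-case MPF values are invented for illustration):

```python
import numpy as np

def melt_pond_fraction(labels, pond_class=1, ice_classes=(0, 1)):
    """MPF: pond pixels as a fraction of all sea-ice pixels
    (bare ice + ponds); open-water pixels are excluded."""
    ice = np.isin(labels, ice_classes)
    return np.sum(labels[ice] == pond_class) / np.sum(ice)

# Toy classified scene: 0 = bare ice, 1 = melt pond, 2 = open water
mpf = melt_pond_fraction(np.array([1, 1, 0, 0, 0, 2]))  # 2 ponds / 5 ice px

# Hypothetical per-case MPFs from two sensors (fractions, not %)
mpf_l8 = np.array([0.08, 0.12, 0.10, 0.09])
mpf_s2 = np.array([0.09, 0.13, 0.11, 0.10])
rmse = np.sqrt(np.mean((mpf_l8 - mpf_s2) ** 2))
rel_err = np.mean(np.abs(mpf_l8 - mpf_s2) / mpf_s2)
```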


Drones ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 21 ◽  
Author(s):  
Francisco Rodríguez-Puerta ◽  
Rafael Alonso Ponce ◽  
Fernando Pérez-Rodríguez ◽  
Beatriz Águeda ◽  
Saray Martín-García ◽  
...  

Controlling vegetation fuels around human settlements is a crucial strategy for reducing fire severity in forests, buildings, and infrastructure, as well as protecting human lives. Each country has its own regulations in this respect, but they all have in common that reducing the fuel load in turn reduces the intensity and severity of fire. The use of Unmanned Aerial Vehicle (UAV)-acquired data combined with other passive and active remote sensing data offers the greatest performance for planning Wildland-Urban Interface (WUI) fuelbreaks through machine learning algorithms. Nine remote sensing data sources (active and passive) and four supervised classification algorithms (Random Forest, linear and radial Support Vector Machine, and Artificial Neural Networks) were tested to classify five fuel-area types. We used very high-density Light Detection and Ranging (LiDAR) data acquired by UAV (154 returns·m−2 and a 5 cm pixel ortho-mosaic), multispectral data from the Pleiades-1B and Sentinel-2 satellites, and low-density LiDAR data acquired by Airborne Laser Scanning (ALS) (0.5 returns·m−2, 25 cm pixel ortho-mosaic). Through the Variable Selection Using Random Forests (VSURF) procedure, a pre-selection of final variables was carried out to train the model. The four algorithms were compared, and it was concluded that the differences among them in overall accuracy (OA) on the training datasets were negligible. Although the highest accuracy in the training step was obtained with SVML (OA = 94.46%) and in testing with ANN (OA = 91.91%), Random Forest was considered the most reliable algorithm, since it produced more consistent predictions due to the smaller difference between training and testing performance. Using a combination of Sentinel-2 and the two LiDAR datasets (UAV and ALS), Random Forest obtained an OA of 90.66% on the training and 91.80% on the testing datasets. The differences in accuracy between the data sources used are much greater than those between algorithms.
LiDAR growth metrics calculated from point clouds acquired on different dates, together with multispectral information from different seasons of the year, are the most important variables in the classification. Our results support the essential role of UAVs in fuelbreak planning and management and, thus, in the prevention of forest fires.
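The reliability criterion above, a small gap between training and testing accuracy, can be checked directly. A sketch using scikit-learn on synthetic stand-in features (the real study used VSURF-selected LiDAR and spectral variables, which are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for fuel-type features (e.g. LiDAR metrics + bands);
# a simple separable target so the sketch is self-contained.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)

oa_train = rf.score(X_tr, y_tr)
oa_test = rf.score(X_te, y_te)
gap = oa_train - oa_test   # small gap -> consistent, reliable predictions
```

Comparing `gap` across candidate algorithms mirrors the paper's argument for preferring Random Forest despite SVML's slightly higher training OA.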


2020 ◽  
Vol 12 (4) ◽  
pp. 716 ◽  
Author(s):  
Yelong Zhao ◽  
Qian Shen ◽  
Qian Wang ◽  
Fan Yang ◽  
Shenglei Wang ◽  
...  

As polluted water bodies are often small in area and widely distributed, field screening is difficult; remote-sensing-based screening, by contrast, has the advantages of being rapid, large-scale, and dynamic. Polluted water bodies often show anomalous water colours, such as black, grey, and red. Therefore, the large-scale recognition of suspected polluted water bodies from high-resolution remote-sensing images via water colour can improve screening efficiency and narrow the screening scope. However, few studies have been conducted on such water bodies. The hue angle of a water body is a parameter used to describe colour in the International Commission on Illumination (CIE) colour space. Based on the measured data, a water body with a hue angle greater than 230.958° is defined as a water colour anomaly, which is recognised in Sentinel-2 imagery through the threshold set in this study. The hue angle of the water body was extracted from the Sentinel-2 image, and the accuracy of the hue angle calculated from the in situ remote-sensing reflectance Rrs(λ) was evaluated; the root mean square error (RMSE) and mean relative error (MRE) were 4.397° and 1.744%, respectively, demonstrating that the method is feasible. The hue angle was calculated for a water colour anomaly and a general water body in Qiqihar: a water body was regarded as a water colour anomaly when the hue angle was >230.958° and as a general water body when the hue angle was ≤230.958°. High-quality Sentinel-2 images of Qiqihar taken from May 2016 to August 2019 were chosen; the position of the water body remained unchanged, there were no commission or omission errors, and the hue angle of the water colour anomaly changed distinctly, indicating that the method is stable. The method proposed is only suitable for optically deep water, not for optically shallow water.
When the method was applied to the Xiong’an New Area, the results showed good recognition accuracy, demonstrating its good universality. In this study, taking Qiqihar as an example, a surface survey experiment was conducted from October 14 to 15, 2018, and measured data were obtained at six general and four anomalous water sample points, including water quality terms such as Rrs(λ), transparency, water colour, water temperature, and turbidity.
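The hue angle can be sketched as the angle of a chromaticity point relative to the white point in the CIE diagram. A toy example with hypothetical chromaticities (the study derives chromaticity from Sentinel-2 band reflectances, a conversion step omitted here; the angle convention shown is one common choice, an assumption):

```python
import math

def hue_angle(x, y, white=(1/3, 1/3)):
    """Hue angle in degrees, measured counter-clockwise from the
    positive x-axis, of CIE chromaticity (x, y) relative to the
    equal-energy white point."""
    alpha = math.degrees(math.atan2(y - white[1], x - white[0]))
    return alpha % 360.0

# Hypothetical chromaticities: a greenish general water body vs. a
# grey-black anomalous one shifted below and left of the white point
general = hue_angle(0.32, 0.38)
anomaly = hue_angle(0.31, 0.30)
flag = anomaly > 230.958   # the study's anomaly threshold
```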


2019 ◽  
Vol 7 (9) ◽  
pp. 316 ◽  
Author(s):  
Francesco Immordino ◽  
Mattia Barsanti ◽  
Elena Candigliota ◽  
Silvia Cocito ◽  
Ivana Delbono ◽  
...  

Sustainable and ecosystem-based marine spatial planning is a priority for Pacific Island countries whose economies are based on marine resources. The urgency of managing coral reef systems and associated coastal environments, threatened by the effects of climate change, requires detailed habitat mapping of the present status and future monitoring of changes over time. Here, we present a remote sensing study using freely available Sentinel-2 imagery for large-scale mapping of the most sensitive and highest-value habitats (corals, seagrasses, mangroves) of the Republic of Palau (Micronesia, Pacific Ocean), carried out without any sea-truth validation. Remote sensing ‘supervised’ and ‘unsupervised’ classification methods applied to 2017 Sentinel-2 imagery at 10 m resolution, together with comparisons against freely available ancillary data on web platforms and the scientific literature, were used to map mangrove, coral, and seagrass communities in the Palau Archipelago. This paper addresses the challenge of multispectral benthic mapping using commercial software for the preprocessing steps (ERDAS ATCOR) and for benthic classification (ENVI) on the basis of satellite image analysis. The accuracy of the methods was tested by comparing the results with reference NOAA (National Oceanic and Atmospheric Administration, Silver Spring, MD, USA) habitat maps derived from Ikonos and Quickbird imagery interpretation and sea-truth validations. Results showed that the proposed approach allowed a good overall classification of marine habitats, namely good concordance of mangrove cover around the Palau Archipelago with the previous literature and good identification of coastal habitats in two sites (barrier reef and coastal reef) with an accuracy of 39.8–56.8%, suitable for survey and monitoring of the most sensitive habitats on remote tropical islands.
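The ‘unsupervised’ classification step can be illustrated with a minimal k-means clusterer on pixel spectra (a toy stand-in: the study used ENVI's classifiers, and a real ISODATA implementation additionally splits and merges clusters between iterations):

```python
import numpy as np

def kmeans(X, k, iters=10):
    """Minimal unsupervised pixel classifier (k-means)."""
    # Greedy farthest-point initialisation keeps the sketch deterministic
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute means
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Two well-separated toy "habitat" spectra (e.g. bright sand vs. seagrass)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.6, 0.01, (50, 3)),   # bright sand
               rng.normal(0.1, 0.01, (50, 3))])  # dark seagrass
labels = kmeans(X, 2)
```

In a mapping workflow the resulting clusters are then assigned habitat names by comparison with ancillary maps or literature, exactly the step the study performed against NOAA reference maps.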

