Flood Extent Mapping: An Integrated Method Using Deep Learning and Region Growing Using UAV Optical Data

Author(s): Leila Hashemi-Beni, Asmamaw A. Gebrehiwot
2021, Vol 13 (7), pp. 1342
Author(s): Luca Pulvirenti, Giuseppe Squicciarino, Elisabetta Fiori, Luca Ferraris, Silvia Puca

An automated tool for pre-operational mapping of floods and inland waters using Sentinel-1 data is presented. It is denoted by the acronym AUTOWADE (AUTOmatic Water Areas DEtector). The tool provides the end user (the Italian Department of Civil Protection) with continuous, near-real-time (NRT) monitoring of the extent of inland water surfaces (floodwater and permanent water). It implements the following operations: downloading of Sentinel-1 products; preprocessing of the products and storage of the resulting geocoded and calibrated data; generation of intermediate products, such as the exclusion mask; application of a floodwater/permanent-water mapping algorithm; generation of the output layer, i.e., a map of floodwater/permanent water; and delivery of the output layer to the end user. The open floodwater/permanent-water mapping algorithm implemented in AUTOWADE is based on a new approach, denoted buffer-from-edge (BFE), which combines several techniques: clustering, edge filtering, automatic thresholding and region growing. AUTOWADE also copes with the typical presence of gaps in flood maps caused by undetected flooded vegetation. A second algorithm implemented in the tool, based on fuzzy logic, attempts to partially fill these gaps by analyzing vegetated areas adjacent to open water. The BFE approach has been validated offline using maps produced by the Copernicus Emergency Management Service. Validation gave good results, with an F1-score larger than 0.87 and a kappa coefficient larger than 0.80. The algorithm for detecting flooded vegetation has been visually compared with optical data and aerial photos; its capability to fill some of the gaps present in flood maps has been confirmed.
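The core of the BFE chain, thresholding followed by region growing over low-backscatter pixels, can be illustrated with a minimal NumPy sketch (the image values, seed location, and threshold are toy assumptions; this is not the AUTOWADE implementation):

```python
import numpy as np
from collections import deque

def region_grow(img, seeds, threshold):
    """Grow a water region from seed pixels: accept 4-connected
    neighbours whose backscatter stays below `threshold`."""
    grown = np.zeros(img.shape, dtype=bool)
    q = deque(seeds)
    for s in seeds:
        grown[s] = img[s] < threshold
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not grown[nr, nc] and img[nr, nc] < threshold):
                grown[nr, nc] = True
                q.append((nr, nc))
    return grown

# Toy backscatter scene (dB): low values = smooth open water.
img = np.full((6, 6), -8.0)
img[2:5, 1:4] = -19.0          # dark flooded patch
seeds = [(3, 2)]               # seed inside the dark patch
mask = region_grow(img, seeds, threshold=-15.0)
print(int(mask.sum()))         # 9 pixels grown
```

In the real tool the seeds come from the buffer around detected water edges and the threshold is derived automatically, rather than being fixed by hand as here.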


Landslides, 2021
Author(s): Sansar Raj Meena, Omid Ghorbanzadeh, Cees J. van Westen, Thimmaiah Gudiyangada Nachappa, Thomas Blaschke, et al.

Abstract. Rainfall-induced landslide inventories can be compiled using remote sensing and topographical data, gathered using either traditional or semi-automatic supervised methods. In this study, we used PlanetScope imagery and deep learning convolutional neural networks (CNNs) to map the 2018 rainfall-induced landslides in the Kodagu district of Karnataka state in the Western Ghats of India. We used fourfold cross-validation (CV) to select the training and testing data, to reduce random effects in the model results. Topographic slope data were used as auxiliary information to increase the performance of the model. The resulting landslide inventory map, created using the slope data together with the spectral information, has fewer false positives, which helps to distinguish landslide areas from similar-looking features such as barren lands and riverbeds. However, while including the slope data did not increase the true positives, the overall accuracy was higher compared to using only spectral information to train the model. The mean accuracy of correctly classified landslide values was 65.5% when using only optical data, increasing to 78% with the use of slope data. The methodology presented in this research can be applied in other landslide-prone regions, and the results can be used to support hazard mitigation in landslide-prone regions.
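Using slope as auxiliary information amounts to stacking a normalized slope raster as an extra input channel alongside the spectral bands before feeding patches to the CNN. A minimal sketch (the array shapes, band count, and normalization are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Hypothetical 3-band optical patch (H, W, bands), values in [0, 1].
optical = np.random.rand(64, 64, 3).astype(np.float32)

# Slope raster in degrees, resampled to the same grid.
slope = np.random.rand(64, 64).astype(np.float32) * 60.0

# Normalise slope to [0, 1] so no channel dominates training,
# then stack it as an extra input channel for the CNN.
slope_norm = slope / 90.0
x = np.concatenate([optical, slope_norm[..., None]], axis=-1)
print(x.shape)  # (64, 64, 4)
```

The network itself is unchanged except for the input depth; only the data preparation differs between the spectral-only and spectral-plus-slope experiments.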


2019, Vol 11 (2), pp. 196
Author(s): Omid Ghorbanzadeh, Thomas Blaschke, Khalil Gholamnia, Sansar Meena, Dirk Tiede, et al.

There is a growing demand for detailed and accurate landslide maps and inventories around the globe, particularly in hazard-prone regions such as the Himalayas. Most standard mapping methods require expert knowledge, supervision and fieldwork. In this study, we use optical data from the RapidEye satellite and topographic factors to analyze the potential of machine learning methods, i.e., artificial neural networks (ANN), support vector machines (SVM) and random forest (RF), and different deep-learning convolutional neural networks (CNNs) for landslide detection. We use two training zones and one test zone to independently evaluate the performance of different methods in the highly landslide-prone Rasuwa district in Nepal. Twenty different maps are created using ANN, SVM, RF and different CNN instantiations, and are compared against the results of extensive fieldwork through a mean intersection-over-union (mIOU) and other common metrics. This accuracy assessment yields a best result of 78.26% mIOU for a small-window-size CNN that uses spectral information only. The additional information from a 5 m digital elevation model helps to discriminate between human settlements and landslides but does not improve the overall classification accuracy. CNNs do not automatically outperform ANN, SVM and RF, although this is sometimes claimed. Rather, the performance of CNNs strongly depends on their design, i.e., layer depth, input window sizes and training strategies. Here, we conclude that the CNN method is still in its infancy, as most researchers will either use predefined parameters in solutions like Google TensorFlow or will apply different settings in a trial-and-error manner. Nevertheless, deep learning can improve landslide mapping in the future if the effects of the different designs are better understood, enough training samples exist, and the effects of augmentation strategies to artificially increase the number of existing samples are better explored.
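The mIOU used for the accuracy assessment can be computed as below (a common definition of mean intersection-over-union averaged over classes; the toy label maps are illustrative, not the study's data):

```python
import numpy as np

def mean_iou(pred, truth, n_classes=2):
    """Mean intersection-over-union across classes:
    IoU_c = |pred_c AND truth_c| / |pred_c OR truth_c|, averaged over c."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred  = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1]])
truth = np.array([[0, 0, 0, 1],
                  [0, 1, 1, 1]])
print(round(mean_iou(pred, truth), 4))     # 0.775
```

Because each false positive inflates the union for its class, mIOU penalizes over-detection more sharply than plain pixel accuracy, which is why it is preferred for sparse targets like landslides.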


2018, Vol 7 (10), pp. 389
Author(s): Wei He, Naoto Yokoya

In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep-learning-based methods. Two models, i.e., optical image simulation directly from SAR data and from multi-temporal SAR-optical data, are proposed to test the possibilities. The deep-learning-based methods we chose to realize the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using Sentinel-1 and -2 datasets. The experiments demonstrate that the model with multi-temporal SAR-optical data can successfully simulate the optical image, whereas the state-of-the-art model with only SAR data as input fails. The optical image simulation results indicate the possibility of SAR-optical information blending for subsequent applications such as large-scale cloud removal and optical data temporal super-resolution. We also investigate the sensitivity of the proposed models to the training samples, and reveal possible future directions.


2011, Vol 15 (11), pp. 3475-3494
Author(s): D. O'Grady, M. Leblanc, D. Gillieson

Abstract. Envisat ASAR Global Monitoring Mode (GM) data are used to produce maps of the extent of the flooding in Pakistan, which are made available to the rapid response effort within 24 h of acquisition. The high temporal frequency of the data, and their independence from cloud-free skies, make GM data a viable tool for mapping flood waters during those periods where optical satellite data are unavailable, which may be crucial to rapid response disaster planning, where thousands of lives are affected. Image differencing techniques are used, with pre-flood baseline image backscatter values being deducted from target values, to eliminate regions with a permanent flood-like radar response due to volume scattering and attenuation, and to highlight the low response caused by specular reflection from open flood water. The effect of local incidence angle on the received signal is mitigated by ensuring that the deducted image is acquired from the same orbit track as the target image. Poor separability of the water class from land in areas beyond the river channels is tackled using a region-growing algorithm which seeks threshold-conformance from seed pixels at the centre of the river channels. The resultant mapped extents are tested against MODIS SWIR data where available, with encouraging results.
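The image differencing step can be sketched as follows (toy backscatter values and an illustrative threshold; not the operational code). Subtracting the same-orbit baseline removes pixels that are permanently dark, so only newly darkened pixels survive:

```python
import numpy as np

# Backscatter (dB) from the same orbit track: pre-flood baseline
# and flood-date target (toy values).
baseline = np.array([[-8.0, -8.5, -20.0],
                     [-9.0, -8.0, -21.0]])
target   = np.array([[-8.2, -19.5, -20.5],
                     [-18.8, -8.1, -20.9]])

# Differencing: permanently dark pixels (last column) difference to
# near zero; only pixels that *became* dark relative to the baseline
# drop strongly and are flagged as open flood water.
diff = target - baseline
flood = diff < -8.0            # illustrative change threshold in dB
print(flood.astype(int))       # [[0 1 0], [1 0 0]]
```

Note the permanently low-backscatter column is excluded even though it is as dark as the flooded pixels, which is exactly the motivation for differencing over simple thresholding.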


2021, Vol 7 (2), pp. 22
Author(s): Erena Siyoum Biratu, Friedhelm Schwenker, Taye Girma Debelee, Samuel Rahimeto Kebede, Worku Gachena Negera, et al.

A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. A brain tumor is a mass of tissue that grows out of control of the normal forces that regulate growth inside the brain; it appears when one type of cell changes from its normal characteristics and grows and multiplies abnormally. This unusual growth of cells within the brain or skull, which can be cancerous or non-cancerous, has been a cause of death among adults in developed countries and children in developing countries such as Ethiopia. Studies have shown that the region growing algorithm initializes the seed point either manually or semi-manually, which in turn affects the segmentation result. In this paper, we therefore propose an enhanced region-growing algorithm with automatic seed point initialization. The proposed approach's performance was compared with state-of-the-art deep learning algorithms on a common dataset, BRATS2015. In the proposed approach, we first applied a thresholding technique to strip the skull from each input brain image. After the skull is stripped, the brain image is divided into 8 blocks. Then, for each block, we computed the mean intensity, and the five blocks with the highest mean intensities were selected out of the eight. Next, each of the five maximum-mean-intensity blocks was used to initialize a seed point for the region growing algorithm separately, yielding five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated using the proposed approach were evaluated using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc) against the ground truth (GT), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning algorithms and region-based segmentation algorithms in terms of DSS.
Our proposed approach was validated in three different experimental setups. In the first setup, 15 randomly selected brain images were used for testing, and a DSS value of 0.89 was achieved. In the second and third setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value over the three experimental setups was 0.86.
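The block-based seed initialization and the DSS evaluation can be sketched as follows (a simplified 2-D illustration with a hypothetical 2×4 block layout and toy intensities; not the authors' implementation):

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity score between two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def select_seed_blocks(brain, n_rows=2, n_cols=4, k=5):
    """Split a skull-stripped brain slice into 8 blocks and return the
    centres of the k blocks with the highest mean intensity, to serve
    as candidate region-growing seed points."""
    h, w = brain.shape
    bh, bw = h // n_rows, w // n_cols
    blocks = []
    for i in range(n_rows):
        for j in range(n_cols):
            patch = brain[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            centre = (i*bh + bh // 2, j*bw + bw // 2)
            blocks.append((patch.mean(), centre))
    blocks.sort(key=lambda t: t[0], reverse=True)
    return [c for _, c in blocks[:k]]

brain = np.zeros((8, 16))
brain[0:4, 0:4] = 200.0        # bright (tumour-like) region
seeds = select_seed_blocks(brain)
print(len(seeds), seeds[0])    # 5 seeds; brightest block's centre first

seg = brain > 100.0
print(round(dice(seg, seg), 2))  # 1.0 on identical masks
```

Each candidate seed would then drive one region-growing pass, and the ROI scoring highest against the ground truth (by DSS as above, IoU, or accuracy) is kept as the final segmentation.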


2021, Vol 13 (4), pp. 592
Author(s): Yanling Han, Yekun Liu, Zhonghua Hong, Yun Zhang, Shuhu Yang, et al.

Sea ice is one of the typical causes of marine disasters, and sea ice image classification is an important component of sea ice detection. Optical data contain rich spectral information, but they make it difficult to distinguish objects of the same class with different spectra from objects of different classes with the same spectrum. Synthetic aperture radar (SAR) data contain rich texture information, but they are usually single-source, and single-source data limit further improvements in the accuracy of remote sensing sea ice classification. In this paper, we propose a method for sea ice image classification based on deep learning and heterogeneous data fusion. Utilizing the advantages of convolutional neural networks (CNNs) for deep feature extraction, we designed a deep learning network structure for SAR and optical images and achieved sea ice image classification through feature extraction and feature-level fusion of the heterogeneous data. For the SAR images, an improved spatial pyramid pooling (SPP) network was used, and texture information on sea ice at different scales was extracted at depth. For the optical data, multi-level feature information on sea ice, such as spatial and spectral information on different types of sea ice, was extracted through a path aggregation network (PANet), which enables low-level features to be fully utilized despite the gradual feature abstraction of the convolutional neural network. To verify the effectiveness of the method, two sets of heterogeneous Sentinel satellite data were used for sea ice classification in the Hudson Bay area. The experimental results show that, compared with typical image classification methods and other heterogeneous data fusion methods, the method proposed in this paper fully integrates multi-scale and multi-level texture and spectral information from heterogeneous data and achieves a better classification effect (96.61%, 95.69%).
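At its core, feature-level fusion means concatenating the per-sample feature vectors from the two branches before the final classifier layers. A minimal sketch (the feature dimensions and random values are hypothetical; the real network fuses learned SPP and PANet features):

```python
import numpy as np

# Hypothetical per-sample feature vectors produced by the two branches:
# multi-scale SAR texture features and multi-level optical features.
sar_feats = np.random.rand(100, 32).astype(np.float32)   # SPP branch
opt_feats = np.random.rand(100, 64).astype(np.float32)   # PANet branch

# Feature-level fusion: concatenate the heterogeneous features per
# sample, so the classifier sees texture and spectral evidence jointly.
fused = np.concatenate([sar_feats, opt_feats], axis=1)
print(fused.shape)  # (100, 96)
```

Fusing at the feature level, rather than averaging the two branches' class predictions, lets the classifier learn cross-modal interactions between SAR texture and optical spectra.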

