Mapping landslides from EO data using deep-learning methods

Author(s):  
Nikhil Prakash ◽  
Andrea Manconi ◽  
Simon Loew

Landslide hazard has always been a significant source of economic losses and fatalities in mountainous regions. Knowledge of the spatial extent of past and present landslide activity, compiled in the form of a landslide inventory map, is essential for effective risk management. High-resolution data acquired by Earth observation (EO) satellites are often used to map landslides by identifying morphological expressions that can be associated with past and/or recent deformation. This is a slow and difficult process, as it requires extensive manual effort, and as a result such maps are not readily available for all landslide-affected regions. Fully automated methods are required to exploit the exponentially increasing amount of EO data available for landslide hazard assessments. In this context, conventional methods like pixel-based and object-based machine learning strategies have been studied extensively in the last decade. Recent advances in convolutional neural networks (CNNs), a type of deep-learning method, have outperformed these conventional learning methods in similar image interpretation tasks. In this work, we present a deep-learning based method for semantic segmentation of landslides from EO images. We present results from a study area south of Portland in Oregon, USA. The landslide inventory used for training and ground truth was extracted from the Statewide Landslide Information Database of Oregon (SLIDO). We were able to achieve a probability of detection (POD) greater than 0.70. This method can also be extended for rapid mapping of landslides after a major triggering event (such as an earthquake or an extreme meteorological event) has occurred.

This work is done in the framework of the European Commission's Horizon 2020 project "BETTER". More information is available on the website https://www.ec-better.eu/.
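The probability of detection reported here can be illustrated with a short sketch; this is a generic formulation over binary landslide masks, and the mask values below are illustrative, not data from the study:

```python
import numpy as np

def probability_of_detection(pred: np.ndarray, truth: np.ndarray) -> float:
    """POD = true positives / (true positives + false negatives).

    Both inputs are boolean masks of the same shape, where True marks
    pixels labelled as landslide.
    """
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn)

# Illustrative example: 4 true landslide pixels, 3 detected -> POD = 0.75
truth = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)
pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
print(probability_of_detection(pred, truth))  # 0.75
```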

Author(s):  
T. Bibi ◽  
K. Azahari Razak ◽  
A. Abdul Rahman ◽  
A. Latif

Landslides are an inescapable natural disaster, resulting in massive social, environmental, and economic impacts all over the world. The tropical, mountainous landscape found throughout Malaysia, especially in East Malaysia on the island of Borneo, is highly susceptible to landslides because of heavy rainfall and tectonic disturbances. The purpose of landslide hazard mapping is to identify hazardous regions for the execution of mitigation plans that can reduce the loss of life and property from future landslide incidences. Currently, Malaysian research bodies, e.g. academic institutions and government agencies, are trying to develop a landslide hazard and risk database for susceptible areas to support prevention, mitigation, and evacuation planning. However, there is a lack of attention to landslide inventory mapping as an elementary input of landslide susceptibility, hazard, and risk mapping.

Emerging techniques based on remote sensing technologies (satellite, terrestrial, and airborne) are promising means to accelerate the production of landslide maps, shrinking the time and resources essential for their compilation and orderly updates. The aim of the study is to provide a better perception regarding the use of virtual mapping of landslides with the help of LiDAR technology. The focus of the study is spatio-temporal detection and virtual mapping of a landslide inventory via visualization and interpretation of very high-resolution (VHR) data in the forested terrain of the Mesilau river, Kundasang. To cope with the challenges of virtual inventory mapping in forested terrain, high-resolution LiDAR derivatives are used. This study indicates that airborne LiDAR technology can be an effective tool for mapping landslide inventories in complex climatic and geological conditions, and a quick way of mapping regional hazards in the tropics.
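As an illustration of the kind of LiDAR derivative used in such visual interpretation, below is a minimal hillshade computation from a gridded DEM; the cell size and illumination angles are illustrative defaults, not values from the study:

```python
import numpy as np

def hillshade(dem: np.ndarray, cellsize: float = 1.0,
              azimuth_deg: float = 315.0, altitude_deg: float = 45.0) -> np.ndarray:
    """Standard hillshade of a gridded DEM, scaled to 0..1.

    dem: 2-D array of elevations; cellsize: grid spacing in the same units.
    """
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    # Elevation gradients; np.gradient returns d/drow, d/dcol.
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    # Aspect convention follows a common GIS formulation.
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```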


2020 ◽  
Vol 12 (3) ◽  
pp. 346 ◽  
Author(s):  
Nikhil Prakash ◽  
Andrea Manconi ◽  
Simon Loew

Mapping landslides using automated methods is a challenging task that is still largely done manually. Today, the availability of high-resolution EO data products is increasing exponentially, and one of the targets is to exploit this data source for the rapid generation of landslide inventories. Conventional methods like pixel-based and object-based machine learning strategies have been studied extensively in the last decade. In addition, recent advances in convolutional neural networks (CNNs), a type of deep-learning method, have been widely successful in extracting information from images and have outperformed other conventional learning methods. In the last few years, there have been only a few attempts to adapt CNNs for landslide mapping. In this study, we introduce a modified U-Net model for semantic segmentation of landslides at a regional scale from EO data, using ResNet34 blocks for feature extraction, and compare it with conventional pixel-based and object-based methods. The experiment was done in Douglas County, a study area south of Portland in Oregon, USA, and a landslide inventory extracted from SLIDO (Statewide Landslide Information Database of Oregon) was considered as the ground truth. Landslide mapping is an imbalanced learning problem with very limited availability of training data. Our network was trained on a combination of focal Tversky loss and cross-entropy loss functions using augmented image tiles sampled from a selected training area. The deep-learning method was observed to have a better performance than the conventional methods, with an MCC (Matthews correlation coefficient) score of 0.495 and a POD (probability of detection) rate of 0.72.
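The training objective described above can be sketched in PyTorch along the following lines; this is a generic combination of a focal Tversky term with binary cross-entropy, and the alpha, beta, gamma, and weighting values are illustrative defaults rather than the settings used in the paper:

```python
import torch
import torch.nn.functional as F

def focal_tversky_ce_loss(logits, target, alpha=0.7, beta=0.3,
                          gamma=0.75, weight=0.5, eps=1e-7):
    """Combined loss for imbalanced binary segmentation.

    logits: raw network output, shape (N, 1, H, W)
    target: binary ground truth, same shape, values in {0, 1}
    """
    prob = torch.sigmoid(logits)
    # Tversky index: penalize false negatives (alpha) more than
    # false positives (beta) to counter class imbalance.
    tp = (prob * target).sum()
    fn = ((1 - prob) * target).sum()
    fp = (prob * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    focal_tversky = (1 - tversky) ** gamma
    ce = F.binary_cross_entropy_with_logits(logits, target.float())
    return weight * focal_tversky + (1 - weight) * ce
```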


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
...  

2020 ◽  
Vol 26 ◽  
Author(s):  
Xiaoping Min ◽  
Fengqing Lu ◽  
Chunyan Li

Enhancer-promoter interactions (EPIs) in the human genome are of great significance to transcriptional regulation, which tightly controls gene expression. Identification of EPIs can help us better decipher gene regulation and understand disease mechanisms. However, experimental methods to identify EPIs are constrained by funding, time, and manpower, while computational methods using DNA sequences and genomic features are viable alternatives. Deep learning methods have shown promise in classification tasks and have been utilized to identify EPIs. In this survey, we specifically focus on sequence-based deep learning methods and conduct a comprehensive review of the relevant literature. We first briefly introduce existing sequence-based frameworks for EPI prediction and their technical details. After that, we elaborate on the datasets, pre-processing methods, and evaluation strategies. Finally, we discuss the challenges these methods are confronted with and suggest several future opportunities.
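As a rough illustration of what a sequence-based EPI predictor looks like, here is a minimal two-branch CNN sketch in PyTorch; the layer sizes, kernel widths, and one-hot encoding convention are illustrative assumptions, not a model from any of the surveyed papers:

```python
import torch
import torch.nn as nn

class EPIClassifier(nn.Module):
    """Generic two-branch CNN over one-hot DNA sequences.

    Enhancer and promoter sequences are encoded as (batch, 4, length)
    tensors (one channel per nucleotide A/C/G/T); the branch outputs
    are pooled, concatenated, and classified.
    """
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(4, 64, kernel_size=11, padding=5), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(64, 128, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),  # -> (batch, 128, 1)
            )
        self.enhancer, self.promoter = branch(), branch()
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, enh, prom):
        z = torch.cat([self.enhancer(enh), self.promoter(prom)], dim=1)
        return self.head(z)  # logit for "interacting" vs. "not"
```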


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3719
Author(s):  
Aoxin Ni ◽  
Arian Azarang ◽  
Nasser Kehtarnavaz

The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the utilization of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, a comparison of the deep learning methods whose codes are publicly available is conducted in this paper. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results obtained show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error value of 2.57 beats per minute and a root mean square error value of 7.56 beats per minute.
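The two error metrics can be reproduced with a few lines of NumPy; the heart rate values below are illustrative, not results from the UBFC evaluation:

```python
import numpy as np

def hr_errors(predicted_bpm, reference_bpm):
    """Mean absolute error and root mean square error in beats per minute."""
    pred = np.asarray(predicted_bpm, dtype=float)
    ref = np.asarray(reference_bpm, dtype=float)
    mae = np.mean(np.abs(pred - ref))
    rmse = np.sqrt(np.mean((pred - ref) ** 2))
    return mae, rmse

# Illustrative values only.
mae, rmse = hr_errors([72.1, 80.4, 65.0], [70.0, 82.0, 66.5])
print(f"MAE = {mae:.2f} bpm, RMSE = {rmse:.2f} bpm")
```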


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as the modified U-Net model, were applied to the extracted input features. The models' performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more dimensionality yields promising results, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with the linear combination of focal and dice loss also performed well, with an accuracy of 0.91 and 0.93 for Mancos and Marcellus, respectively. In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
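The feature-based Random Forest pipeline can be sketched roughly as follows; the specific filters, their parameters, and the function and variable names are illustrative assumptions rather than the exact feature set used in the study:

```python
import numpy as np
from scipy import ndimage
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image: np.ndarray) -> np.ndarray:
    """Stack simple filter responses as per-pixel features, shape (H*W, n_features)."""
    feats = [
        image,                                   # raw attenuation
        filters.gaussian(image, sigma=2),        # smoothed intensity
        filters.sobel(image),                    # edge strength
        ndimage.median_filter(image, size=3),    # noise-robust intensity
    ]
    return np.stack([f.ravel() for f in feats], axis=1)

# train_img / train_labels: a CT slice and its per-pixel phase labels
# (illustrative names; labels would come from manual annotation).
def train_and_segment(train_img, train_labels, test_img):
    rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    rf.fit(pixel_features(train_img), train_labels.ravel())
    return rf.predict(pixel_features(test_img)).reshape(test_img.shape)
```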


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hamper the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of the neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and then treat image voxels as particle-like clusters. We then reconstruct the Fermi–Dirac distribution as a correction function for the normalization of the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for the algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm can save at least 38% of the computational time cost on a low-cost hardware architecture. Even though global histogram equalization has the lowest computational time among the employed correction functions, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
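A minimal sketch of a Fermi–Dirac-style intensity correction follows, assuming the standard form of the distribution with voxel intensity in the role of energy; the pivot and temperature choices (and the sign convention) are illustrative, not the parameterization derived in the paper:

```python
import numpy as np

def fermi_dirac_correction(voxels: np.ndarray, pivot: float = None,
                           temperature: float = None) -> np.ndarray:
    """Map voxel intensities through a Fermi-Dirac distribution,
    f(I) = 1 / (exp((I - pivot) / temperature) + 1),
    squashing intensities into (0, 1) around the pivot.
    """
    if pivot is None:
        pivot = float(np.median(voxels))           # illustrative choice
    if temperature is None:
        temperature = float(np.std(voxels)) or 1.0  # illustrative choice
    return 1.0 / (np.exp((voxels - pivot) / temperature) + 1.0)
```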

