A Machine Learning based Approach for Wildfire Susceptibility Mapping. The Case Study of Liguria Region in Italy

Author(s):  
Marj Tonini ◽  
Mirko D'Andrea ◽  
Guido Biondi ◽  
Silvia Degli Esposti ◽  
Andrea Trucchia ◽  
...  

Wildfire susceptibility maps display the probability of wildfire occurrence, ranked from low to high, under a given environmental context. Current studies in this field often rely on expert knowledge, sometimes combined with statistical models that assess cause-effect correlations. Machine learning (ML) algorithms can perform very well and generalize better, thanks to their capability of learning from data and making predictions on it. Italy is highly affected by wildfires due to the high heterogeneity of its territory and to predisposing meteorological conditions. The main objective of the present study is to elaborate a wildfire susceptibility map for the Liguria region (Italy) by applying Random Forest, an ensemble ML algorithm based on decision trees. Susceptibility was assessed by evaluating the probability of an area burning in the future, considering where wildfires occurred in the past and which geo-environmental factors favor their spread. Different models were compared, including or excluding the neighboring vegetation and using an increasing number of folds for the spatial cross-validation. Susceptibility maps for the two fire seasons were finally elaborated and validated, and the results are critically discussed, highlighting the capacity of the proposed approach to assess the efficiency of fire-fighting activities.
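The random-forest workflow summarized above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the factor names, value ranges, and labels are synthetic placeholders rather than the actual Liguria dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic pixel-level predisposing factors (illustrative names only):
# altitude, slope, distance to road, vegetation class code.
n_pixels = 2000
X = np.column_stack([
    rng.uniform(0, 1800, n_pixels),   # altitude [m]
    rng.uniform(0, 45, n_pixels),     # slope [deg]
    rng.uniform(0, 5000, n_pixels),   # distance to road [m]
    rng.integers(0, 6, n_pixels),     # vegetation class code
])
# Label: 1 if the pixel falls inside a mapped past burned area, else 0
# (random here; in the study it comes from the fire-perimeter dataset).
y = (rng.random(n_pixels) < 0.3).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Susceptibility = predicted probability of the "burned" class per pixel,
# which can then be binned into low/medium/high classes for mapping.
susceptibility = model.predict_proba(X)[:, 1]
print(susceptibility[:5])
```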

2020 ◽  
Author(s):  
Paolo Fiorucci ◽  
Mirko D'Andrea ◽  
Andrea Trucchia ◽  
Marj Tonini

<p>Risk and susceptibility analyses for natural hazards are of great importance for civil protection, land use planning and risk reduction programs. Susceptibility maps are based on the assumption that future events are expected to occur under conditions similar to the observed ones. Each unit area is assessed in terms of relative spatial likelihood, evaluating its potential to experience a particular hazard in the future based solely on intrinsic local characteristics. This concept is well consolidated in risk assessment research, especially for landslides. Nevertheless, there is a need for new, robust quantitative methods for elaborating susceptibility maps and for applying this tool to the study of other natural hazards. In the present work, this task is pursued for the specific case of wildfires in Italy. The two main approaches for such studies are physically based models and data-driven methods. Here, the latter approach is pursued, using machine learning techniques to learn from and make predictions on the available information (i.e., the observed burned area and the predisposing factors). Italy is severely affected by wildfires due to the high topographic and vegetation heterogeneity of its territory and to its meteorological conditions. The main objective of the present study is the elaboration of a wildfire susceptibility map for the Liguria region (Italy) using Random Forest, an ensemble ML algorithm based on decision trees. The quantitative evaluation of susceptibility considers two aspects: the location of past wildfire occurrences, in terms of burned area, and the related anthropogenic and geo-environmental predisposing factors that may favor fire spread. Different implementations of the model were performed and compared. In particular, the effect of a pixel's neighboring land cover (including the type of vegetation and non-burnable area) on the output susceptibility map was investigated. To assess model performance, spatial cross-validation was carried out, testing different numbers of folds. Susceptibility maps for the two fire seasons (summer and winter) were finally computed and validated. The resulting maps show high-susceptibility zones closer to the coast in summer and in the interior of the region in winter. These zones matched the testing burned areas well, confirming the overall good performance of the proposed method.</p><p><strong>REFERENCE</strong></p><p>Tonini M., D'Andrea M., Biondi G., Degli Esposti S., Fiorucci P., A machine learning based approach for wildfire susceptibility mapping. The case study of Liguria region in Italy. <em>Geosciences</em> (2020, submitted)</p>
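One way to encode a pixel's neighboring land cover as extra predictors, as investigated above, is to compute the fraction of each land-cover class in a window around the pixel. The raster, class codes, and window radius below are illustrative assumptions, not the study's actual encoding.

```python
import numpy as np

# Toy categorical land-cover raster (codes 0-3; say 0 = non-burnable).
rng = np.random.default_rng(1)
cover = rng.integers(0, 4, size=(50, 50))

def neighborhood_fractions(raster, row, col, radius=2, n_classes=4):
    """Fraction of each land-cover class in the (2*radius+1)^2 window
    around a pixel; these fractions can be appended to the pixel's
    feature vector as 'neighboring land cover' predictors."""
    r0, r1 = max(0, row - radius), min(raster.shape[0], row + radius + 1)
    c0, c1 = max(0, col - radius), min(raster.shape[1], col + radius + 1)
    window = raster[r0:r1, c0:c1]
    counts = np.bincount(window.ravel(), minlength=n_classes)
    return counts / counts.sum()

fracs = neighborhood_fractions(cover, 25, 25)
print(fracs)  # one fraction per class, summing to 1
```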


Geosciences ◽  
2020 ◽  
Vol 10 (3) ◽  
pp. 105 ◽  
Author(s):  
Marj Tonini ◽  
Mirko D’Andrea ◽  
Guido Biondi ◽  
Silvia Degli Esposti ◽  
Andrea Trucchia ◽  
...  

Wildfire susceptibility maps display the spatial probability of an area to burn in the future, based solely on the intrinsic local properties of a site. Current studies in this field often rely on statistical models, frequently refined by expert knowledge for data retrieval and processing. In the last few years, machine learning algorithms have proven successful in this domain, thanks to their capability of learning from data by modeling hidden relationships. In the present study, the authors introduce an approach based on random forests to elaborate a wildfire susceptibility map for the Liguria region in Italy. This region is highly affected by wildfires due to its dense and heterogeneous vegetation, with more than 70% of its surface covered by forests, and due to favorable climatic conditions. Susceptibility was assessed by considering the dataset of mapped fire perimeters, spanning a 21-year period (1997–2017), and different geo-environmental predisposing factors (i.e., land cover, vegetation type, road network, altitude, and derivatives). One main objective was to compare different models in order to evaluate the effect of: (i) including or excluding the neighboring vegetation type as an additional predisposing factor and (ii) using an increasing number of folds in the spatial cross-validation procedure. Susceptibility maps for the two fire seasons were finally elaborated and validated. Results highlighted the capacity of the proposed approach to identify areas that could be affected by wildfires in the near future, as well as its usefulness in assessing the efficiency of fire-fighting activities.
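Spatial cross-validation with an increasing number of folds, as described in objective (ii), can be approximated with grouped folds, where each group is a spatial block so that nearby pixels never end up in both training and test sets. The data, block assignment, and fold counts below are assumed placeholders for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(2)
n = 1500
X = rng.normal(size=(n, 5))          # stand-ins for the predisposing factors
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
blocks = rng.integers(0, 20, n)      # spatial block id of each pixel

model = RandomForestClassifier(n_estimators=100, random_state=0)
# Grouped folds keep whole spatial blocks out of training, giving a less
# optimistic performance estimate than purely random folds.
for k in (2, 5, 10):
    cv = GroupKFold(n_splits=k)
    scores = cross_val_score(model, X, y, cv=cv, groups=blocks,
                             scoring="roc_auc")
    print(k, round(scores.mean(), 3))
```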




Author(s):  
M. Sh. Tehrany ◽  
S. Jones

This paper explores the influence of the extent and format of flood inventory data on the final susceptibility map. The extreme 2011 Brisbane flood event was used as the case study. Logistic regression (LR) was selected for the modelling, as it is a well-known algorithm in natural hazard modelling due to its interpretability, rapid processing time and accurate measurement approach. The LR model was applied using both polygon and point formats of the inventory data. Random samples of 1000, 700, 500, 300, 100 and 50 points were selected, and susceptibility mapping was undertaken using each group of random points. The resultant maps were assessed visually and statistically using the area under the curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon data and by 1000, 700, 500, 300, 100 and 50 random points were 63%, 76%, 88%, 80%, 74%, 71% and 65%, respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
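The point-sample experiment above, training an LR model on inventories of different sizes and scoring each map by AUC, can be sketched as follows. The synthetic conditioning factors and labels are assumptions for illustration; they are not the Brisbane dataset, so the AUC trend will differ from the paper's figures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def sample(n):
    """Synthetic flood-conditioning factors and binary flood labels."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.7, size=n) > 0).astype(int)
    return X, y

# Fixed validation set, as the inventory size of the training sample varies.
X_test, y_test = sample(500)

for n_points in (50, 100, 300, 500, 700):
    X_tr, y_tr = sample(n_points)
    proba = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, proba)
    print(n_points, round(auc, 2))
```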


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2346
Author(s):  
Rob Argent ◽  
Antonio Bevilacqua ◽  
Alison Keogh ◽  
Ailish Daly ◽  
Brian Caulfield

Machine learning models are being utilized to provide wearable sensor-based exercise biofeedback to patients undertaking physical therapy. However, most systems are validated at a technical level using lab-based cross-validation approaches. These results do not necessarily reflect the performance levels that patients and clinicians can expect in the real-world environment. This study aimed to conduct a thorough evaluation of an example wearable exercise biofeedback system from laboratory testing through to clinical validation in the target setting, illustrating the importance of context when validating such systems. Each of the various components of the system was evaluated independently, and then in combination as the system is designed to be deployed. The results show a reduction in overall system accuracy from lab-based cross-validation (>94%), through testing on healthy participants (n = 10) in the target setting (>75%), to test data collected from the clinical cohort (n = 11) (>59%). This study illustrates that reliance on lab-based validation approaches may mislead key stakeholders in the inertial sensor-based exercise biofeedback sector, makes recommendations for clinicians, developers and researchers, and discusses factors that may influence system performance at each stage of evaluation.
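The gap between lab-based cross-validation and deployment-setting performance described above can be reproduced on synthetic data: the "clinical" set below is deliberately distribution-shifted and label-noisy relative to the "lab" set. All data and the size of the shift are assumptions purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Lab data: clean labels, the distribution the model is developed on.
X_lab = rng.normal(size=(400, 6))
y_lab = (X_lab[:, 0] > 0).astype(int)

# "Clinical" data: shifted features and noisier labels, standing in for
# the harder conditions of the deployment setting.
X_clinic = rng.normal(loc=0.8, size=(200, 6))
y_clinic = (X_clinic[:, 0] + rng.normal(scale=1.0, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)

# Lab-based cross-validation accuracy (the optimistic estimate).
cv_acc = cross_val_score(model, X_lab, y_lab, cv=5, scoring="accuracy").mean()

# Accuracy on data from the deployment setting (the realistic estimate).
clinic_acc = model.fit(X_lab, y_lab).score(X_clinic, y_clinic)
print(round(cv_acc, 2), round(clinic_acc, 2))
```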


2013 ◽  
Vol 1 (2) ◽  
pp. 1001-1050 ◽  
Author(s):  
H. Petschko ◽  
A. Brenning ◽  
R. Bell ◽  
J. Goetz ◽  
T. Glade

Abstract. Landslide susceptibility maps are helpful tools to identify areas that might be prone to future landslide occurrence. As more and more national and provincial authorities demand that these maps be computed and implemented in spatial planning strategies, the quality of the landslide susceptibility map, and of the model applied to compute it, is of high interest. In this study we focus on the analysis of model performance by repeated k-fold cross-validation with spatial and random subsampling. Furthermore, we analyze the implications of uncertainties expressed by confidence intervals of model predictions. The cross-validation performance assessment reflects the variability of performance estimates, in contrast to single hold-out validation approaches that produce only a single estimate. The analysis of the confidence intervals shows that in 85% of the study area, the 95% confidence limits fall within the same susceptibility class. However, there are cases where confidence intervals overlap with all classes, from the lowest to the highest susceptibility to landsliding. Locations whose confidence intervals intersect with more than one susceptibility class are of high interest because this uncertainty may affect spatial planning processes that are based on the susceptibility level.
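The key point above, that repeated k-fold cross-validation yields a distribution of performance estimates rather than the single number of a hold-out split, can be sketched as follows. The data and model below are assumed placeholders (random subsampling only; a spatial variant would group folds by location as in the study).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 4))    # stand-ins for landslide predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=600) > 0).astype(int)

# 5-fold CV repeated 20 times -> 100 AUC estimates, whose spread quantifies
# the variability that a single hold-out estimate would hide.
cv = RepeatedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(LogisticRegression(), X, y, cv=cv, scoring="roc_auc")
print(len(scores), round(scores.mean(), 3), round(scores.std(), 3))
```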


2018 ◽  
Author(s):  
Mahendra Awale ◽  
Jean-Louis Reymond

<div>Here we report PPB2 as a target prediction tool assigning targets to a query molecule based on ChEMBL data. PPB2 computes ligand similarities using molecular fingerprints encoding composition (MQN), molecular shape and pharmacophores (Xfp), and substructures (ECfp4), and features an unprecedented combination of nearest neighbor (NN) searches and Naïve Bayes (NB) machine learning, together with simple NN searches, NB and Deep Neural Network (DNN) machine learning models as further options. Although NN(ECfp4) gives the best results in terms of recall in a 10-fold cross-validation study, combining NN searches with NB machine learning provides superior precision statistics, as well as better results in a case study predicting off-targets of a recently reported TRPV6 calcium channel inhibitor, illustrating the value of this combined approach. PPB2 is available to assess possible off-targets of small molecule drug-like compounds by public access at ppb2.gdb.tools.</div>
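The nearest-neighbor step described above, ranking library ligands by fingerprint similarity to a query and reading off the targets of the closest ones, can be sketched as follows. The random bit-vector "fingerprints" and target ids are assumptions standing in for real ChEMBL fingerprints, and the Naïve Bayes re-scoring stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Toy fingerprint library with a known target per ligand.
library = rng.integers(0, 2, size=(100, 64)).astype(bool)
targets = rng.integers(0, 5, size=100)

# Query: a slightly perturbed copy of ligand 0, so ligand 0 should rank first.
query = library[0].copy()
query[:3] = ~query[:3]

# Nearest-neighbor search: rank the library by similarity to the query and
# take the targets of the top-k neighbors as target predictions.
sims = np.array([tanimoto(query, fp) for fp in library])
top_k = np.argsort(sims)[::-1][:5]
print(targets[top_k])
```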


2016 ◽  
Vol 47 (3) ◽  
pp. 1539 ◽  
Author(s):  
P. Tsangaratos ◽  
D. Rozos

In this paper, two semi-quantitative approaches from the domain of multi-criteria decision analysis, Rock Engineering Systems (RES) and the Analytic Hierarchy Process (AHP), are implemented for weighting and ranking landslide-related factors in an objective manner. Through the use of GIS, these approaches provide a highly accurate landslide susceptibility map. For this purpose, and in order to automate the process, the Expert Knowledge for Landslide Assessment Tool (EKLATool) was developed as an extension tightly integrated into the ArcMap environment, using ArcObjects and Visual Basic script code. The EKLATool was implemented in an area of Xanthi Prefecture, Greece, where a spatial database of landslide incidence was available.
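The AHP weighting step mentioned above derives factor weights from a pairwise comparison matrix via its principal eigenvector. The three factors and the comparison values below are a hypothetical example, not the study's actual matrix.

```python
import numpy as np

# Illustrative pairwise comparison matrix for three landslide factors
# (say slope, lithology, land use); entries follow Saaty's 1-9 scale and
# satisfy the reciprocal property a[j, i] = 1 / a[i, j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP factor weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print(np.round(weights, 3))  # the first factor receives the largest weight
```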


2018 ◽  
Vol 8 (11) ◽  
pp. 2165 ◽  
Author(s):  
Wahyu Caesarendra ◽  
Bobby Pappachan ◽  
Tomi Wijaya ◽  
Daryl Lee ◽  
Tegoeh Tjahjowidodo ◽  
...  

The number of studies on the Internet of Things (IoT) has grown significantly in the past decade, and the IoT has been applied in various fields. Although the term may sound specific to computer science, the IoT has been widely applied in engineering, especially in industrial applications such as manufacturing processes. The number of published papers on the IoT has also increased significantly, addressing various applications. The particular application of the IoT in these industries has brought in a new term, the so-called Industrial IoT (IIoT). This paper concisely reviews the IoT from the perspective of industrial applications, focusing in particular on the major pillars required to build an IoT application, i.e., architecture and cloud computing. This enables readers to understand the concept of the IIoT and to identify a starting point. A case study of the Amazon Web Services Machine Learning (AML) platform for chamfer length prediction in deburring processes is presented, along with an experimental setup of the deburring process and the steps that must be taken to apply AML in practice.
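The chamfer-length prediction at the core of the case study is, at heart, a regression from process parameters to a measured dimension. The sketch below illustrates this locally with a linear model; the parameter names, value ranges, and coefficients are hypothetical, and the paper's actual pipeline runs on the cloud-hosted AML platform rather than locally.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Hypothetical deburring process parameters and synthetic chamfer lengths.
n = 200
X = np.column_stack([
    rng.uniform(50, 200, n),      # feed rate [mm/min]
    rng.uniform(1000, 5000, n),   # spindle speed [rpm]
    rng.uniform(0, 1, n),         # tool wear index
])
chamfer = (0.002 * X[:, 0] + 1e-5 * X[:, 1] + 0.1 * X[:, 2]
           + rng.normal(scale=0.01, size=n))   # [mm], with measurement noise

# Fit the regressor and predict chamfer length for new parameter settings.
model = LinearRegression().fit(X, chamfer)
pred = model.predict(X[:5])
print(np.round(pred, 3))
```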

