Quantification of Continuous Flood Hazard using Random Forest Classification and Flood Insurance Claims at Large Spatial Scales: A Pilot Study in Southeast Texas

2020 ◽  
Author(s):  
William Mobley ◽  
Antonia Sebastian ◽  
Russell Blessing ◽  
Wesley E. Highfield ◽  
Laura Stearns ◽  
...  

Abstract. Pre-disaster planning and mitigation necessitate detailed spatial information about flood hazards and their associated risks. In the U.S., the FEMA Special Flood Hazard Area (SFHA) provides important information about areas subject to flooding during the 1 % riverine or coastal event. The binary nature of flood hazard maps obscures the distribution of property risk inside of the SFHA and the residual risk outside of the SFHA, which can undermine mitigation efforts. Machine-learning techniques provide an alternative approach to estimating flood hazards across large spatial scales at low computational expense. This study presents a pilot study for the Texas Gulf Coast Region using Random Forest Classification to predict flood probability across a 30,523 km2 area. Using a record of National Flood Insurance Program (NFIP) claims dating back to 1976 and high-resolution geospatial data, we generate a continuous flood hazard map for twelve USGS HUC-8 watersheds. Results indicate that the Random Forest model predicts flooding with a high sensitivity (AUC 0.895), especially compared to the existing FEMA regulatory floodplain. Our model identifies 649,000 structures with at least a 1 % annual chance of flooding, roughly three times more than are currently identified by FEMA as flood prone.

2021 ◽  
Vol 21 (2) ◽  
pp. 807-822
Author(s):  
William Mobley ◽  
Antonia Sebastian ◽  
Russell Blessing ◽  
Wesley E. Highfield ◽  
Laura Stearns ◽  
...  

Abstract. Pre-disaster planning and mitigation necessitate detailed spatial information about flood hazards and their associated risks. In the US, the Federal Emergency Management Agency (FEMA) Special Flood Hazard Area (SFHA) provides important information about areas subject to flooding during the 1 % riverine or coastal event. The binary nature of flood hazard maps obscures the distribution of property risk inside of the SFHA and the residual risk outside of the SFHA, which can undermine mitigation efforts. Machine learning techniques provide an alternative approach to estimating flood hazards across large spatial scales at low computational expense. This study presents a pilot study for the Texas Gulf Coast region using random forest classification to predict flood probability across a 30 523 km2 area. Using a record of National Flood Insurance Program (NFIP) claims dating back to 1976 and high-resolution geospatial data, we generate a continuous flood hazard map for 12 US Geological Survey (USGS) eight-digit hydrologic unit code (HUC) watersheds. Results indicate that the random forest model predicts flooding with a high sensitivity (area under the curve, AUC: 0.895), especially compared to the existing FEMA regulatory floodplain. Our model identifies 649 000 structures with at least a 1 % annual chance of flooding, roughly 3 times more than are currently identified by FEMA as flood-prone.
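
As a rough illustration of the modeling approach described in this abstract, the sketch below trains a random forest classifier on point data with geospatial covariates and summarizes skill with AUC. The file name, feature columns, and hyperparameters are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of a random-forest flood-probability classifier in the spirit of
# the study above. Feature names, the input file, and the train/test split are
# illustrative assumptions, not the authors' actual workflow.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per point, geospatial covariates plus a binary
# "flooded" label derived from NFIP claim locations vs. sampled non-claim points.
points = pd.read_csv("flood_points.csv")  # assumed file
features = ["elevation", "slope", "impervious_pct", "dist_to_stream_m",
            "soil_drainage_class", "annual_rainfall_mm"]  # assumed covariates

X_train, X_test, y_train, y_test = train_test_split(
    points[features], points["flooded"], test_size=0.3,
    stratify=points["flooded"], random_state=42)

rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5,
                            class_weight="balanced", n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)

# A continuous hazard surface comes from predicted probabilities, and model
# skill can be summarized with AUC as in the paper.
probs = rf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```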


2017 ◽  
Author(s):  
Adam Luke ◽  
Brett F. Sanders ◽  
Kristen Goodrich ◽  
David L. Feldman ◽  
Danielle Boudreau ◽  
...  

Abstract. Flood hazard mapping in the United States (US) is deeply tied to the National Flood Insurance Program (NFIP). Consequently, publicly available flood maps provide essential information for insurance purposes, but do not necessarily provide relevant information for non-insurance aspects of flood risk management (FRM) such as public education and emergency planning. Recent calls for flood hazard maps that support a wider variety of FRM tasks highlight the need to deepen our understanding about the factors that make flood maps useful and understandable for local end-users. In this study, social scientists and engineers explore opportunities for improving the utility and relevance of flood hazard maps through the co-production of maps responsive to end-users' FRM needs. Specifically, two-dimensional flood modeling produced a set of baseline hazard maps for stakeholders of the Tijuana River Valley, US, and Los Laureles Canyon in Tijuana, Mexico. Focus groups with natural resource managers, city planners, emergency managers, academia, non-profit, and community leaders refined the baseline hazard maps by triggering additional modeling scenarios and map revisions. Several important end-user preferences emerged, such as (1) legends that frame flood intensity both qualitatively and quantitatively, and (2) flood scenario descriptions that report flood magnitude in terms of rainfall, streamflow, and its relation to an historic event. Regarding desired hazard map content, end-users' requests revealed general consistency with mapping needs reported in European studies and guidelines published in Australia. However, requested map content that is not commonly produced included: (1) standing water depths following the flood, (2) the erosive potential of flowing water, and (3) pluvial flood hazards, or flooding caused directly by rainfall. We conclude that the relevance and utility of commonly produced flood hazard maps can be most improved by illustrating pluvial flood hazards and by using concrete reference points to describe flooding scenarios rather than exceedance probabilities or frequencies.


Author(s):  
Tomislav Hengl ◽  
Madlene Nussbaum ◽  
Marvin N Wright ◽  
Gerard B.M. Heuvelink

Random forest and similar machine learning techniques are already used to generate spatial predictions, but the spatial location of points (geography) is often ignored in the modeling process. Spatial autocorrelation, especially if still present in the cross-validation residuals, indicates that the predictions may be biased, and this is suboptimal. This paper presents a random forest for spatial predictions framework (RFsp) in which buffer distances from observation points are used as explanatory variables, thus incorporating geographical proximity effects into the prediction process. The RFsp framework is illustrated with examples that use textbook datasets and apply spatial and spatio-temporal prediction to numeric, binary, categorical, multivariate and spatio-temporal variables. Performance of the RFsp framework is compared with state-of-the-art kriging techniques using 5-fold cross-validation with refitting. The results show that RFsp can obtain predictions as accurate and unbiased as different versions of kriging. Advantages of using RFsp over kriging are that it needs no rigid statistical assumptions about the distribution and stationarity of the target variable, it is more flexible towards incorporating, combining and extending covariates of different types, and it possibly yields more informative maps characterizing the prediction error. RFsp appears to be especially attractive for building multivariate spatial prediction models that can be used as "knowledge engines" in various geoscience fields. Some disadvantages of RFsp are the exponentially growing computational intensity as calibration data and covariates increase, and the high sensitivity of predictions to input data quality. For many datasets, especially those with a smaller number of points and covariates and close-to-linear relationships, model-based geostatistics can still lead to more accurate predictions than RFsp.
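
A minimal Python sketch of the core RFsp idea follows: distances from every observation location are used as covariates for a random forest. The original framework is implemented in R; the synthetic data and model settings here are assumptions made for illustration.

```python
# Minimal sketch of the RFsp buffer-distance idea: distances from each location
# to every observation point serve as extra covariates for a random forest.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
obs_xy = rng.uniform(0, 1000, size=(150, 2))                 # observation coordinates
obs_z = np.sin(obs_xy[:, 0] / 200) + 0.1 * rng.standard_normal(150)  # target variable

# Buffer-distance covariates: distance from each training point to every observation.
dist_covariates = cdist(obs_xy, obs_xy)                      # shape (n_obs, n_obs)

rf = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(dist_covariates, obs_z)

# Predict on a grid by computing the same buffer distances to the observations.
grid_x, grid_y = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid_xy = np.column_stack([grid_x.ravel(), grid_y.ravel()])
grid_dist = cdist(grid_xy, obs_xy)                           # shape (n_grid, n_obs)
pred_surface = rf.predict(grid_dist).reshape(grid_x.shape)
```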


2018 ◽  
Vol 18 (4) ◽  
pp. 1097-1120 ◽  
Author(s):  
Adam Luke ◽  
Brett F. Sanders ◽  
Kristen A. Goodrich ◽  
David L. Feldman ◽  
Danielle Boudreau ◽  
...  

Abstract. Flood hazard mapping in the United States (US) is deeply tied to the National Flood Insurance Program (NFIP). Consequently, publicly available flood maps provide essential information for insurance purposes, but they do not necessarily provide relevant information for non-insurance aspects of flood risk management (FRM) such as public education and emergency planning. Recent calls for flood hazard maps that support a wider variety of FRM tasks highlight the need to deepen our understanding about the factors that make flood maps useful and understandable for local end users. In this study, social scientists and engineers explore opportunities for improving the utility and relevance of flood hazard maps through the co-production of maps responsive to end users' FRM needs. Specifically, two-dimensional flood modeling produced a set of baseline hazard maps for stakeholders of the Tijuana River valley, US, and Los Laureles Canyon in Tijuana, Mexico. Focus groups with natural resource managers, city planners, emergency managers, academia, non-profit, and community leaders refined the baseline hazard maps by triggering additional modeling scenarios and map revisions. Several important end user preferences emerged, such as (1) legends that frame flood intensity both qualitatively and quantitatively, and (2) flood scenario descriptions that report flood magnitude in terms of rainfall, streamflow, and its relation to an historic event. Regarding desired hazard map content, end users' requests revealed general consistency with mapping needs reported in European studies and guidelines published in Australia. However, requested map content that is not commonly produced included (1) standing water depths following the flood, (2) the erosive potential of flowing water, and (3) pluvial flood hazards, or flooding caused directly by rainfall. We conclude that the relevance and utility of commonly produced flood hazard maps can be most improved by illustrating pluvial flood hazards and by using concrete reference points to describe flooding scenarios rather than exceedance probabilities or frequencies.


2019 ◽  
Vol 2 (1) ◽  
pp. 41-52
Author(s):  
Nitin Mundhe

Floods are natural hazards of very high frequency that cause environmental, social, economic, and human losses. Floods in the city occur mainly due to human activities such as the blockage of natural drainage and the haphazard construction of roads and buildings, combined with high rainfall intensity. Detailed maps showing flood-vulnerable areas are helpful in the management of flood hazards. The present research therefore focused on identifying flood vulnerability zones in Pune City using a multi-criteria decision-making approach in a Geographical Information System (GIS) with inputs from remotely sensed imagery. Other input data considered for preparing base maps were census details, city maps, and fieldwork. Pune City was classified into four flood vulnerability classes essential for flood risk management. About 5 per cent of the area, in localities including Wakdewadi, parts of Shivajinagar, Sangamwadi, Aundh, and Baner, shows high vulnerability to floods.
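
The multi-criteria overlay described above can be sketched as a weighted linear combination of normalized criteria rasters followed by reclassification into vulnerability classes. The layers, weights, and quartile class breaks below are illustrative assumptions, not the study's actual criteria.

```python
# Minimal sketch of a GIS-style multi-criteria overlay for flood vulnerability,
# assuming the criteria rasters are already aligned NumPy arrays scaled to 0-1.
# Layers, weights, and class breaks are illustrative, not those of the study.
import numpy as np

rng = np.random.default_rng(1)
shape = (200, 200)
criteria = {
    "rainfall_intensity": rng.random(shape),
    "drainage_blockage":  rng.random(shape),
    "built_up_density":   rng.random(shape),
    "low_elevation":      rng.random(shape),
}
weights = {"rainfall_intensity": 0.35, "drainage_blockage": 0.25,
           "built_up_density": 0.25, "low_elevation": 0.15}

# Weighted linear combination produces a continuous vulnerability index.
index = sum(weights[k] * criteria[k] for k in criteria)

# Reclassify into four vulnerability classes (low to high) by quartiles.
breaks = np.quantile(index, [0.25, 0.5, 0.75])
classes = np.digitize(index, breaks)  # values 0..3
print("Share of cells in the highest class:", (classes == 3).mean())
```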


2021 ◽  
Vol 14 (3) ◽  
pp. 1-21
Author(s):  
Roy Abitbol ◽  
Ilan Shimshoni ◽  
Jonathan Ben-Dov

The task of assembling fragments in a puzzle-like manner into a composite picture plays a significant role in the field of archaeology as it supports researchers in their attempt to reconstruct historic artifacts. In this article, we propose a method for matching and assembling pairs of ancient papyrus fragments containing mostly unknown scriptures. Papyrus paper is manufactured from papyrus plants and therefore portrays typical thread patterns resulting from the plant’s stems. The proposed algorithm is founded on the hypothesis that these thread patterns contain unique local attributes such that nearby fragments show similar patterns reflecting the continuations of the threads. We posit that these patterns can be exploited using image processing and machine learning techniques to identify matching fragments. The algorithm and system which we present support the quick and automated classification of matching pairs of papyrus fragments as well as the geometric alignment of the pairs against each other. The algorithm consists of a series of steps and is based on deep-learning and machine learning methods. The first step is to deconstruct the problem of matching fragments into a smaller problem of finding thread continuation matches in local edge areas (squares) between pairs of fragments. This phase is solved using a convolutional neural network ingesting raw images of the edge areas and producing local matching scores. The result of this stage yields very high recall but low precision. Thus, we use these scores to decide on matches between entire fragment pairs via an elaborate voting mechanism. We enhance this voting with geometric alignment techniques from which we extract additional spatial information. Finally, we feed all the data collected from these steps into a Random Forest classifier to produce a higher-order classifier capable of predicting whether a pair of fragments is a match. Our algorithm was trained on a batch of fragments excavated from the Dead Sea caves and dated to circa the 1st century BCE. The algorithm shows excellent results on a validation set of similar origin and condition. We then ran the algorithm against a real-life set of fragments for which we have no prior knowledge or labeling of matches. This test batch is considered extremely challenging due to its poor condition and the small size of its fragments. Numerous researchers have sought matches within this batch with very little success. The algorithm's performance on this batch was sub-optimal, returning a relatively large ratio of false positives. However, it proved quite useful, eliminating 98% of the possible matches and thus reducing the amount of work needed for manual inspection. Indeed, experts who reviewed the results identified some of the positive matches as potentially true and referred them for further investigation.
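
The final classification stage described above can be sketched as a random forest trained on per-pair summary features (local match votes, mean CNN scores, alignment consistency). The feature names and synthetic labels below are assumptions standing in for the authors' actual data.

```python
# Minimal sketch of the final stage: aggregate local CNN match scores per
# fragment pair into summary features and train a random forest to decide
# whether the pair is a true match. All features and labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_pairs = 1000

# Hypothetical per-pair features: votes from local edge-square matches, the mean
# local CNN score, and a geometric-alignment consistency score.
X = np.column_stack([
    rng.poisson(3, n_pairs),    # number of high-scoring local matches (votes)
    rng.random(n_pairs),        # mean local CNN matching score
    rng.random(n_pairs),        # geometric alignment consistency
])
y = rng.random(n_pairs) < 0.1   # sparse positives, as in real match data

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=2)
clf.fit(X, y)
pair_match_prob = clf.predict_proba(X)[:, 1]  # ranked candidates for expert review
```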


2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Lydia Moussa ◽  
Shalom Benrimoj ◽  
Katarzyna Musial ◽  
Simon Kocbek ◽  
Victoria Garcia-Cardenas

Abstract Background Implementation research has delved into barriers to implementing change and interventions for the implementation of innovation in practice. There remains a gap, however, that fails to connect implementation barriers to the most effective implementation strategies and provide a more tailored approach during implementation. This study aimed to explore barriers to the implementation of professional services in community pharmacies and to predict the effectiveness of facilitation strategies to overcome implementation barriers using machine learning techniques. Methods Six change facilitators facilitated a 2-year change programme aimed at implementing professional services across community pharmacies in Australia. A mixed methods approach was used in which barriers were identified by change facilitators during the implementation study. Change facilitators trialled and recorded tailored facilitation strategies delivered to overcome identified barriers. Barriers were coded according to implementation factors derived from the Consolidated Framework for Implementation Research and the Theoretical Domains Framework. Tailored facilitation strategies were coded into 16 facilitation categories. To predict the effectiveness of these strategies, data mining with random forest was used to provide the highest level of accuracy. A predictive resolution percentage was established for each implementation strategy in relation to the barriers that were resolved by that particular strategy. Results During the 2-year programme, 1131 barriers and facilitation strategies were recorded by change facilitators. The most frequently identified barriers were a ‘lack of ability to plan for change’, ‘lack of internal supporters for the change’, ‘lack of knowledge and experience’, ‘lack of monitoring and feedback’, ‘lack of individual alignment with the change’, ‘undefined change objectives’, ‘lack of objective feedback’ and ‘lack of time’. The random forest algorithm used was able to provide 96.9% prediction accuracy. The strategy category with the highest predicted resolution rate across the largest number of implementation barriers was ‘to empower stakeholders to develop objectives and solve problems’. Conclusions Results from this study have provided a better understanding of implementation barriers in community pharmacy and how data-driven approaches can be used to predict the effectiveness of facilitation strategies to overcome implementation barriers. Tailored facilitation strategies such as these can increase the rate of real-time implementation of innovations in healthcare, leading to an industry that can confidently and efficiently adapt to continuous change.
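
A minimal sketch of the prediction step described above: encode barrier and strategy categories, train a random forest to predict whether a barrier was resolved, and estimate accuracy by cross-validation. The file name, column names, and encoding are assumptions for illustration, not the study's actual data model.

```python
# Minimal sketch of predicting whether a facilitation strategy resolves a given
# barrier with a random forest. Columns, encodings, and the input file are
# illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

records = pd.read_csv("facilitation_log.csv")   # assumed file: one row per barrier
X = pd.get_dummies(records[["barrier_code", "strategy_category", "pharmacy_id"]])
y = records["resolved"]                          # 1 if the barrier was resolved

rf = RandomForestClassifier(n_estimators=400, random_state=0)
print("Cross-validated accuracy:", cross_val_score(rf, X, y, cv=5).mean())

# A per-strategy "predictive resolution percentage" could then be derived by
# averaging the fitted model's predicted probabilities within each strategy category.
```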


Molecules ◽  
2021 ◽  
Vol 26 (6) ◽  
pp. 1537
Author(s):  
Aneta Saletnik ◽  
Bogdan Saletnik ◽  
Czesław Puchalski

Raman spectroscopy is one of the main analytical techniques used in optical metrology. It is a vibrational, marker-free technique that provides insight into the structure and composition of tissues and cells at the molecular level. Raman spectroscopy is an outstanding material identification technique. It provides spatial information on vibrations in complex biological samples, which renders it a very accurate tool for the analysis of highly complex plant tissues. Raman spectra can be used as a fingerprint tool for a very wide range of compounds. Raman spectroscopy enables all the polymers that build the cell walls of plants to be tracked simultaneously; it facilitates the analysis of both the molecular composition and the molecular structure of cell walls. Due to its high sensitivity to even minute structural changes, this method is used for comparative tests. The introduction of new and improved Raman techniques by scientists, as well as the constant technological development of the apparatus, has resulted in an increased importance of Raman spectroscopy in the discovery and characterization of tissues and the processes taking place in them.


2017 ◽  
Vol 107 (10) ◽  
pp. 1187-1198 ◽  
Author(s):  
L. Wen ◽  
C. R. Bowen ◽  
G. L. Hartman

Dispersal of urediniospores by wind is the primary means of spread for Phakopsora pachyrhizi, the cause of soybean rust. Our research focused on the short-distance movement of urediniospores from within the soybean canopy and up to 61 m from field-grown rust-infected soybean plants. Environmental variables were used to develop and compare models including the least absolute shrinkage and selection operator regression, zero-inflated Poisson/regular Poisson regression, random forest, and neural network to describe deposition of urediniospores collected in passive and active traps. All four models identified distance of trap from source, humidity, temperature, wind direction, and wind speed as the five most important variables influencing short-distance movement of urediniospores. The random forest model provided the best predictions, explaining 76.1 and 86.8% of the total variation in the passive- and active-trap datasets, respectively. The prediction accuracies, based on the correlation coefficient (r) between predicted and true values, were 0.83 (P < 0.0001) and 0.94 (P < 0.0001) for the passive- and active-trap datasets, respectively. Overall, multiple machine learning techniques identified the most important variables for making the most accurate predictions of short-distance movement of P. pachyrhizi urediniospores.
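
The random forest component of the model comparison above can be sketched as follows: fit a regressor on distance and weather variables, then report variance explained (R²) and the Pearson correlation between predicted and observed deposition. The synthetic data and variable ranges are illustrative assumptions.

```python
# Minimal sketch of the random-forest part of the comparison: predict spore
# deposition from distance and weather variables, then report R2 and Pearson r.
# Variable names, ranges, and the synthetic response are illustrative.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 800
X = np.column_stack([
    rng.uniform(0, 61, n),       # distance of trap from source (m)
    rng.uniform(40, 100, n),     # relative humidity (%)
    rng.uniform(10, 35, n),      # temperature (C)
    rng.uniform(0, 360, n),      # wind direction (deg)
    rng.uniform(0, 10, n),       # wind speed (m/s)
])
# Synthetic deposition: decays with distance, increases slightly with wind speed.
y = 50 * np.exp(-X[:, 0] / 20) + 0.5 * X[:, 4] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
rf = RandomForestRegressor(n_estimators=500, random_state=3).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("variance explained (R2):", r2_score(y_te, pred))
print("Pearson r:", pearsonr(y_te, pred)[0])
```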


2017 ◽  
Vol 114 (37) ◽  
pp. 9785-9790 ◽  
Author(s):  
Hamed R. Moftakhari ◽  
Gianfausto Salvadori ◽  
Amir AghaKouchak ◽  
Brett F. Sanders ◽  
Richard A. Matthew

Sea level rise (SLR), a well-documented and urgent aspect of anthropogenic global warming, threatens population and assets located in low-lying coastal regions all around the world. Common flood hazard assessment practices typically account for one driver at a time (e.g., either fluvial flooding only or ocean flooding only), whereas coastal cities vulnerable to SLR are at risk for flooding from multiple drivers (e.g., extreme coastal high tide, storm surge, and river flow). Here, we propose a bivariate flood hazard assessment approach that accounts for compound flooding from river flow and coastal water level, and we show that a univariate approach may not appropriately characterize the flood hazard if there are compounding effects. Using copulas and bivariate dependence analysis, we also quantify the increases in failure probabilities for 2030 and 2050 caused by SLR under representative concentration pathways 4.5 and 8.5. Additionally, the increase in failure probability is shown to be strongly affected by compounding effects. The proposed failure probability method offers an innovative tool for assessing compounding flood hazards in a warming climate.
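
A minimal sketch of a bivariate exceedance calculation in the spirit of this approach: fit a Gaussian copula to two flood drivers via normal scores and compare the joint exceedance probability with what independence would imply. The data, copula family, and thresholds are illustrative assumptions; the paper's analysis selects and fits copulas formally.

```python
# Minimal sketch of compound-flood exceedance with a Gaussian copula fitted to
# river discharge and coastal water level. Synthetic data; the copula family and
# design thresholds are illustrative, not the authors' fitted models.
import numpy as np
from scipy.stats import norm, multivariate_normal, rankdata

rng = np.random.default_rng(4)
n = 5000
# Synthetic, positively dependent drivers: discharge (m3/s) and water level (m).
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
discharge = np.exp(4 + 0.5 * z[:, 0])
water_level = 1.0 + 0.3 * z[:, 1]

# Empirical marginals -> normal scores, then the copula correlation.
u = rankdata(discharge) / (n + 1)
v = rankdata(water_level) / (n + 1)
rho = np.corrcoef(norm.ppf(u), norm.ppf(v))[0, 1]

# Joint exceedance P(Q > q AND H > h) at example design thresholds.
q_thr = np.quantile(discharge, 0.99)
h_thr = np.quantile(water_level, 0.99)
zq = norm.ppf((discharge <= q_thr).mean())
zh = norm.ppf((water_level <= h_thr).mean())
biv = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
p_joint = 1 - norm.cdf(zq) - norm.cdf(zh) + biv.cdf([zq, zh])
p_independent = (1 - norm.cdf(zq)) * (1 - norm.cdf(zh))
print("joint exceedance:", p_joint, "vs. independence assumption:", p_independent)
```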

