Risk Management and Analytics in Wildfire Response

2019 ◽  
Vol 5 (4) ◽  
pp. 226-239 ◽  
Author(s):  
Matthew P. Thompson ◽  
Yu Wei ◽  
David E. Calkin ◽  
Christopher D. O’Connor ◽  
Christopher J. Dunn ◽  
...  

Abstract Purpose of Review The objectives of this paper are to briefly review basic risk management and analytics concepts, describe their nexus in relation to wildfire response, demonstrate real-world application of analytics to support response decisions and organizational learning, and outline an analytics strategy for the future. Recent Findings Analytics can improve decision-making and organizational performance across a variety of areas from sports to business to real-time emergency response. A lack of robust descriptive analytics on wildfire incident response effectiveness is a bottleneck for developing operationally relevant and empirically credible predictive and prescriptive analytics to inform and guide strategic response decisions. Capitalizing on technology such as automated resource tracking and machine learning algorithms can help bridge gaps between monitoring, learning, and data-driven decision-making. Summary By investing in better collection, documentation, archiving, and analysis of operational data on response effectiveness, fire management organizations can promote systematic learning and provide a better evidence base to support response decisions. We describe an analytics management framework that can provide structure to help deploy analytics within organizations, and provide real-world examples of advanced fire analytics applied in the USA. To fully capitalize on the potential of analytics, organizations may need to catalyze cultural shifts that cultivate stronger appreciation for data-driven decision processes, and develop informed skeptics that effectively balance both judgment and analysis in decision-making.

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alan Brnabic ◽  
Lisa M. Hess

Abstract Background Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making. Methods This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented and studies meeting eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist. Results A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages and approaches were used across the identified studies. The most common methods included decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm; only eight applied multiple machine learning algorithms to the data. Seven items on the Luo checklist were not met by more than 50% of published studies. Conclusions A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. To ensure that decisions for patient care are made with the highest quality evidence, multiple machine learning approaches should be used, the model selection strategy should be clearly defined, and both internal and external validation should be performed. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
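The review's recommendations (use multiple algorithms, define the model selection strategy, and validate both internally and externally) can be sketched with scikit-learn. This is an illustrative example on synthetic data, not code from any reviewed study; the held-out split merely stands in for a true external cohort, which in practice would come from a different site or time period.

```python
# Illustrative sketch: an ensemble of multiple machine learning algorithms
# with internal validation, using synthetic data in place of a real-world
# clinical database.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold out a split to stand in for an external validation cohort.
X_train, X_ext, y_train, y_ext = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Ensemble combining the two methods the review found most common
# (decision trees and forests) with a regression-based learner.
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("logit", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities across members
)

# Internal validation: 5-fold cross-validation on the development data.
internal_scores = cross_val_score(ensemble, X_train, y_train, cv=5)

# "External" validation: performance on data never used for development.
ensemble.fit(X_train, y_train)
external_score = ensemble.score(X_ext, y_ext)
print(f"internal CV accuracy: {internal_scores.mean():.2f}")
print(f"external accuracy:    {external_score:.2f}")
```

In genuine external validation the second dataset would come from an entirely separate source rather than a random split of the same data.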


2017 ◽  
Vol 26 (7) ◽  
pp. 551 ◽  
Author(s):  
Christopher J. Dunn ◽  
David E. Calkin ◽  
Matthew P. Thompson

Wildfire’s economic, ecological and social impacts are on the rise, fostering the realisation that business-as-usual fire management in the United States is not sustainable. Current response strategies may be inefficient and may contribute to unnecessary responder exposure to hazardous conditions, but significant knowledge gaps constrain clear and comprehensive descriptions of how changes in response strategies and tactics may improve outcomes. As such, we convened a special session at an international wildfire conference to synthesise ongoing research focused on obtaining a better understanding of wildfire response decisions and actions. This special issue provides a collection of research that builds on those discussions. Four papers focus on strategic planning and decision making, three on the use and effectiveness of suppression resources, and two on the allocation and movement of suppression resources. Here we summarise some of the key findings from these papers in the context of risk-informed decision making. This collection illustrates the value of a risk management framework for improving wildfire response safety and effectiveness, for enhancing fire management decision making and for ushering in a new fire management paradigm.


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


2021 ◽  
Author(s):  
Ali Nadernezhad ◽  
Jürgen Groll

With the continuous growth of extrusion bioprinting techniques, ink formulations based on rheology modifiers are becoming increasingly popular, as they enable 3D printing of non-printable biologically-favored materials. However, benchmarking and characterization of such systems are inherently complicated due to the variety of rheology modifiers and differences in mechanisms of inducing printability. This study seeks to explain induced printability in such formulations by applying machine learning algorithms that describe the underlying basis for classifying a formulation as printable. For this purpose, a library of rheological data and printability scores was produced for 180 different formulations of hyaluronic acid solutions with varying molecular weights and concentrations, combined with three rheology modifiers. A feature-screening methodology was applied to identify and separate the impactful features, which consisted of physically interpretable and easily measurable properties of the formulations. In the final step, all relevant features influencing the model’s output were analyzed with advanced yet explainable statistical methods. The outcome provides a guideline for designing new formulations based on data-driven correlations from multiple systems.
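As a rough sketch of such a workflow, with invented feature names and synthetic data since the paper's actual library is not reproduced here, one could screen physically interpretable rheological features with a model-agnostic importance measure:

```python
# Hypothetical sketch: classify printability from measurable rheological
# features, then rank which features drive the decision. Feature names
# and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 180  # the study produced 180 formulations
features = ["zero_shear_viscosity", "yield_stress",
            "shear_thinning_index", "recovery_ratio"]  # assumed names
X = rng.normal(size=(n, len(features)))
# Synthetic printability label driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much accuracy drops when one feature is
# shuffled, a simple model-agnostic way to separate impactful features.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:24s} {score:.3f}")
```

The study's own analysis used its measured rheological library; the point here is only the pattern of screening features and then explaining the classifier's decisions.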


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 520-520 ◽  
Author(s):  
André Pfob ◽  
Babak Mehrara ◽  
Jonas Nelson ◽  
Edwin G. Wilkins ◽  
Andrea Pusic ◽  
...  

520 Background: Post-surgical satisfaction with breasts is a key outcome for women undergoing cancer-related mastectomy and reconstruction. Current decision making relies on group-level evidence, which may not offer the optimal choice of treatment for individuals. We developed and validated machine learning algorithms to predict individual post-surgical breast satisfaction, aiming to facilitate individualized, data-driven decision making in breast cancer. Methods: We collected clinical, perioperative, and patient-reported data from 3058 women who underwent breast reconstruction due to breast cancer across 11 sites in North America. We trained and evaluated four algorithms (regularized regression, support vector machine, neural network, regression tree) to predict significant changes in satisfaction with breasts at 2-year follow-up using the validated BREAST-Q measure. Accuracy and area under the receiver operating characteristic curve (AUC) were used to determine algorithm performance in the test sample. Results: Machine learning algorithms were able to accurately predict changes in women’s satisfaction with breasts (see table). Baseline satisfaction with breasts was the most informative predictor of outcome, followed by radiation during or after reconstruction, nipple-sparing and mixed mastectomy, implant-based reconstruction, chemotherapy, unilateral mastectomy, lower psychological well-being, and obesity. Conclusions: We reveal the crucial role of patient-reported outcomes in determining post-operative outcomes and show that machine learning algorithms are suitable for identifying individuals who might benefit from treatment decisions other than those suggested by group-level evidence. We provide a web-based tool for individuals considering mastectomy and reconstruction: importdemo.com . Clinical trial information: NCT01723423 . [Table: see text]
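The four algorithm families named above can be compared on accuracy and AUC roughly as follows. This is a sketch on synthetic data only; the BREAST-Q cohort and the authors' hyperparameters are not available here, so every setting below is an assumption:

```python
# Illustrative comparison of the four algorithm families from the abstract,
# evaluated by accuracy and AUC on synthetic data (not the BREAST-Q cohort).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1)

models = {
    "regularized regression": LogisticRegression(penalty="l2", max_iter=1000),
    "support vector machine": SVC(probability=True, random_state=1),
    "neural network": MLPClassifier(max_iter=1000, random_state=1),
    "regression tree": DecisionTreeClassifier(max_depth=4, random_state=1),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # positive-class scores for AUC
    results[name] = (accuracy_score(y_te, model.predict(X_te)),
                     roc_auc_score(y_te, proba))
    print(f"{name:24s} acc={results[name][0]:.2f} auc={results[name][1]:.2f}")
```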


Author(s):  
Peter Kokol ◽  
Jan Jurman ◽  
Tajda Bogovič ◽  
Tadej Završnik ◽  
Jernej Završnik ◽  
...  

Cardiovascular diseases are one of the leading global causes of death. Following the positive experiences with machine learning in medicine, we performed a study in which we assessed how machine learning can support decision making regarding coronary artery disease. While a plethora of studies have reported high accuracy rates of machine learning algorithms (MLAs) in medical applications, the majority used cleansed medical databases without the presence of “real world noise.” In contrast, the aim of our study was to perform machine learning on the routinely collected Anonymous Cardiovascular Database (ACD), extracted directly from the hospital information system of the University Medical Centre Maribor. Many previous studies used tens of different machine learning approaches with substantially varying results regarding accuracy (ACU), hence they were not usable as a basis for validating the results of our study. Thus, we decided to perform our study in two phases. During the first phase we trained the different MLAs on the comparable University of California Irvine (UCI) Heart Disease Dataset. The aim of this phase was first to define “standard” ACU values and second to reduce the set of all MLAs to the most appropriate candidates to be used on the ACD during the second phase. Seven MLAs were selected, and the standard ACU for the 2-class diagnosis was 0.85. Surprisingly, the same MLAs achieved ACUs around 0.96 on the ACD. A general comparison of both databases revealed that the performance of different machine learning algorithms differs significantly. Accuracy on the ACD reached the highest levels using decision trees and neural networks, while linear regression and AdaBoost performed best on the UCI database. This might indicate that decision-tree-based algorithms and neural networks are better at coping with real-world, not “noise free,” clinical data and could successfully support decision making concerned with coronary diseases.
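The two-phase design, benchmarking on the public UCI dataset and then rerunning candidates on the local ACD, can be outlined as below. Synthetic arrays stand in for both datasets, logistic regression stands in for the linear model, and the algorithm list is only a representative subset of the methods the study actually compared:

```python
# Sketch of the two-phase protocol: benchmark several algorithms on a
# public dataset to establish "standard" accuracies and shortlist
# candidates, then rerun the same algorithms on the local database.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-ins: 13 features as in the UCI Heart Disease data; the "local"
# set gets extra label noise to mimic routinely collected records.
X_uci, y_uci = make_classification(n_samples=300, n_features=13,
                                   random_state=2)
X_acd, y_acd = make_classification(n_samples=1000, n_features=13,
                                   flip_y=0.05, random_state=3)

algorithms = {
    "decision tree": DecisionTreeClassifier(random_state=2),
    "neural network": MLPClassifier(max_iter=1000, random_state=2),
    "logistic regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=2),
}

# Phase 1: "standard" cross-validated accuracies on the public benchmark.
phase1 = {name: cross_val_score(a, X_uci, y_uci, cv=5).mean()
          for name, a in algorithms.items()}

# Phase 2: the same algorithms on the local, noisier data.
phase2 = {name: cross_val_score(a, X_acd, y_acd, cv=5).mean()
          for name, a in algorithms.items()}

for name in algorithms:
    print(f"{name:20s} benchmark={phase1[name]:.2f} local={phase2[name]:.2f}")
```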


2016 ◽  
Vol 5 (4) ◽  
pp. 34 ◽  
Author(s):  
Jack Weiner ◽  
Mohan Tanniru ◽  
Jiban Khuntia ◽  
David Bobryk ◽  
Mehul Naik ◽  
...  

Background: Regulatory and competitive pressures and the need for cross-organizational data sharing are demanding that hospital leaders create a data-driven decision making culture to improve performance. Using an innovation assimilation strategy framework, this paper describes how a hospital used its implementation of a Real Time Dashboard System (rtDashboard) to improve performance, change its organizational culture and put itself on a path towards digital leadership (DL). Objective: Implement an rtDashboard system that can support a data-driven decision making culture for performance improvement while engaging business and information technology (IT) leaders in DL practice. Results: The rtDashboard contributed significantly to monitoring hospital performance and influenced change in unit-level decision making that was aligned with hospital goals. The rtDashboard implementation not only provided substantial performance improvement and quality benchmarking, but also changed the responsibility and accountability culture and helped the hospital put DL principles into practice to support future innovations. Conclusions: DL through rtDashboard is a demonstration of how a hospital can seek and strive for excellence. As much as dashboards are pivotal to organizational performance monitoring at the senior leadership level, the process used to diffuse them to every operational unit in support of a data-driven decision making culture showcases how hospital executives and IT leaders can work together to continually align and re-align their strategies to reach organizational goals – the core of DL practice.


2017 ◽  
Vol 8 (1) ◽  
pp. 117-146
Author(s):  
Silvano Tagliagambe

In 2008 Chris Anderson wrote a provocative piece titled The End of Theory. The idea was that we no longer need to abstract and hypothesise; we simply need to let machines lead us to the patterns, trends, and relationships in social, economic, political, and environmental data. According to Anderson, the new availability of huge amounts of data offers a whole new way of understanding the world: correlation supersedes causation, and science can advance even without coherent models and unified theories. But numbers, contrary to Anderson’s assertion, do not in fact speak for themselves. From a neuroscience standpoint, every choice we make is a reflection of an often unstated set of assumptions and hypotheses about what we want and expect from the data: no assertion, no prediction, no decision making is possible without an a priori opinion, without a project. Data-driven science essentially refers to the application of mathematics and technology to data to extract insights for problems that are very clearly defined. In the real world, however, not all problems are of this kind. To help solve them, one needs to understand and appreciate the context. The problem of landscape becomes, for this reason, critical and decisive. It requires an interdisciplinary approach drawing on several different competencies and skills.

