Quantifying location error to define uncertainty in volcanic mass flow hazard simulations

2021
Vol 21 (8)
pp. 2447-2460
Author(s):
Stuart R. Mead
Jonathan Procter
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
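The fuzzy-set comparison described in this abstract can be made concrete with a few lines of code. The following is a minimal sketch, not the authors' implementation: it assumes binary inundation grids, a linear distance-decay membership function, and SciPy's Euclidean distance transform; all names and values are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuzzify(binary_map, length_scale, cell_size=1.0):
    """Assign each cell a membership in [0, 1] that decays linearly
    with distance from the nearest simulated (True) cell."""
    # Distance (in map units) from every cell to the nearest True cell.
    dist = distance_transform_edt(~binary_map) * cell_size
    return np.clip(1.0 - dist / length_scale, 0.0, 1.0)

def fuzzy_scores(simulated, observed, length_scale, cell_size=1.0):
    """Fuzzy analogues of hit rate and false-alarm rate: an observed cell
    counts as (partially) hit if a simulated cell lies nearby."""
    membership = fuzzify(simulated, length_scale, cell_size)
    hits = membership[observed].sum() / observed.sum()
    false_alarms = membership[~observed].sum() / (~observed).sum()
    return hits, false_alarms

# Toy example: a simulated footprint shifted one cell from the observation.
obs = np.zeros((50, 50), dtype=bool); obs[20:30, 20:30] = True
sim = np.zeros_like(obs);             sim[21:31, 21:31] = True
for L in (0.5, 2.0, 5.0):  # length-scale parameter in cell widths
    print(L, fuzzy_scores(sim, obs, L))
```

Running the toy example shows the trade-off the abstract describes: as the length scale grows, the fuzzy hit rate improves while the fuzzy false-alarm rate also rises.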


Author(s):  
V. L. Blinov
I. S. Zubkov
Yu. M. Brodov
B. E. Murmanskij

THE PURPOSE. To study the performance of air intake systems as part of gas turbine units; to assess the possibility of modelling different performance factors of air intake systems with numerical simulation methods; and to develop recommendations for setting up the grid and the numerical models used to study air intake system performance and to assess the technical condition of its elements. METHODS. The main method used throughout the study is computational fluid dynamics, applied with CAE systems. RESULTS. Recommendations for setting up the numerical model were developed. Factors such as grid model parameters, roughness scale, and the pressure drop across air intake system elements were investigated. A method for simulating heat exchanger performance was developed to model the rise in air temperature. CONCLUSION. Analysis of air intake system performance has become a topical research subject because of the stringent requirements gas turbines place on the air supplied to their annulus. Most of this research concerns the analysis of dangerous operating regimes (e.g. icing of annulus elements) or the assessment of the technical condition of air intake systems and their influence on the gas turbine as a whole. The developed numerical simulation method yields adequate results with low demands on computational resources. It also allows the heat exchanger performance to be modelled, and the influence of heat exchanger defects on the performance of the air intake system as a whole to be studied.
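As a rough illustration of how a heat exchanger's effect on intake air temperature can be represented without resolving its internal geometry, the sketch below applies the standard effectiveness-NTU relation for a counter-flow exchanger. It is a generic surrogate under an assumed conductance and assumed inlet conditions, not the method developed in the paper.

```python
import numpy as np

def counterflow_effectiveness(ntu, c_ratio):
    """Effectiveness of a counter-flow heat exchanger (standard e-NTU relation)."""
    if np.isclose(c_ratio, 1.0):
        return ntu / (1.0 + ntu)
    e = np.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

def air_outlet_temperature(t_air_in, t_hot_in, m_air, m_hot,
                           cp_air=1005.0, cp_hot=4180.0, ua=5.0e4):
    """Intake air temperature after an anti-icing heat exchanger.

    m_* are mass flow rates [kg/s], cp_* specific heats [J/(kg K)] and
    ua the overall conductance [W/K]; all numbers here are assumptions.
    """
    c_air, c_hot = m_air * cp_air, m_hot * cp_hot
    c_min, c_max = min(c_air, c_hot), max(c_air, c_hot)
    eff = counterflow_effectiveness(ua / c_min, c_min / c_max)
    q = eff * c_min * (t_hot_in - t_air_in)  # heat duty [W]
    return t_air_in + q / c_air              # air heated by q on the cold side

# Example: 100 kg/s of -5 degC intake air heated by an 80 degC water circuit.
print(air_outlet_temperature(t_air_in=-5.0, t_hot_in=80.0, m_air=100.0, m_hot=20.0))
```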


2016
Vol 16 (10)
pp. 2195-2210
Author(s):
Luis A. Bastidas
James Knighton
Shaun W. Kline

Abstract. Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
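Of the three wind distributions compared, the Holland (1980) profile is the most widely used; a minimal implementation of its gradient wind speed, including the Coriolis correction, is sketched below. The storm parameters in the example are assumed for illustration, not fitted to Hurricane Bob.

```python
import numpy as np

def holland_wind_speed(r, p_centre, p_ambient, r_max, b, lat, rho_air=1.15):
    """Gradient wind speed [m/s] at radius r [m] from the storm centre,
    following Holland (1980). b is the shape parameter (typically 1-2.5)."""
    f = 2.0 * 7.292e-5 * np.sin(np.radians(lat))  # Coriolis parameter [1/s]
    dp = p_ambient - p_centre                     # pressure deficit [Pa]
    x = (r_max / r) ** b
    # Gradient wind balance solved for the tangential wind speed.
    return np.sqrt((b * dp / rho_air) * x * np.exp(-x)
                   + (r * f / 2.0) ** 2) - r * f / 2.0

# Example: radial wind profile for an assumed mid-latitude storm.
r = np.linspace(5e3, 300e3, 60)
v = holland_wind_speed(r, p_centre=96200.0, p_ambient=101300.0,
                       r_max=40e3, b=1.4, lat=40.0)
print(f"peak wind {v.max():.1f} m/s at r = {r[v.argmax()] / 1e3:.0f} km")
```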


2019
Vol 23 (10)
pp. 4011-4032
Author(s):
Rosanna A. Lane
Gemma Coxon
Jim E. Freer
Thorsten Wagener
Penny J. Johnes
...

Abstract. Benchmarking model performance across large samples of catchments is useful to guide model selection and future model development. Given uncertainties in the observational data we use to drive and evaluate hydrological models, and uncertainties in the structure and parameterisation of models we use to produce hydrological simulations and predictions, it is essential that model evaluation is undertaken within an uncertainty analysis framework. Here, we benchmark the capability of several lumped hydrological models across Great Britain by focusing on daily flow and peak flow simulation. Four hydrological model structures from the Framework for Understanding Structural Errors (FUSE) were applied to over 1000 catchments in England, Wales and Scotland. Model performance was then evaluated using standard performance metrics for daily flows and novel performance metrics for peak flows considering parameter uncertainty. Our results show that lumped hydrological models were able to produce adequate simulations across most of Great Britain, with each model producing simulations exceeding a 0.5 Nash–Sutcliffe efficiency for at least 80 % of catchments. All four models showed a similar spatial pattern of performance, producing better simulations in the wetter catchments to the west and poor model performance in central Scotland and south-eastern England. Poor model performance was often linked to the catchment water balance, with models unable to capture the catchment hydrology where the water balance did not close. Overall, performance was similar between model structures, but different models performed better for different catchment characteristics and metrics, as well as for assessing daily or peak flows, leading to the ensemble of model structures outperforming any single structure, thus demonstrating the value of using multi-model structures across a large sample of different catchment behaviours. This research evaluates what conceptual lumped models can achieve as a performance benchmark and provides interesting insights into where and why these simple models may fail. The large number of river catchments included in this study makes it an appropriate benchmark for any future developments of a national model of Great Britain.
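The daily-flow benchmark rests on the Nash–Sutcliffe efficiency (NSE) and an "adequate if NSE exceeds 0.5" criterion. The sketch below shows how such a benchmark might be computed across a sample of catchments while retaining parameter uncertainty (one simulated series per sampled parameter set). It is an illustrative stand-in with hypothetical data structures, not the FUSE evaluation code.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean flow."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

def benchmark(catchments, threshold=0.5):
    """Fraction of catchments whose best parameter set exceeds the threshold.

    `catchments` maps a catchment id to (obs_series, list_of_sim_series),
    one simulated series per sampled parameter set (parameter uncertainty).
    """
    best = {cid: max(nse(obs, sim) for sim in sims)
            for cid, (obs, sims) in catchments.items()}
    adequate = sum(score > threshold for score in best.values())
    return best, adequate / len(best)

# Toy example with synthetic flows for two catchments.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.5, size=365)
catchments = {"c1": (obs, [obs + rng.normal(0, 0.5, 365) for _ in range(10)]),
              "c2": (obs, [rng.permutation(obs) for _ in range(10)])}
scores, frac = benchmark(catchments)
print(scores, f"{frac:.0%} of catchments adequate")
```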


2018
Vol 10 (1)
Author(s):
Daniel Wade
Andrew Wilson
Abraham Reddy
Raj Bharadwaj

Data science techniques such as machine learning are rapidly becoming available to engineers building models from system data, such as aircraft operations data. These techniques require validation for use in fielded systems providing recommendations to operators or maintainers. The methods for validating and testing machine learned algorithms generally focus on model performance metrics such as accuracy or F1-score. Many aviation datasets are highly imbalanced, which can invalidate some underlying assumptions of machine learning models. Two simulations are performed to show how some common performance metrics respond to imbalanced populations. The results show that each performance metric responds differently to a sample depending on the imbalance ratio between two classes. The results indicate that traditional methods for repairing underlying imbalance in the sample may not provide the rigorous validation necessary in safety critical applications. The two simulations indicate that authorities must be cautious when mandating metrics for model acceptance criteria because they can significantly influence the model parameters.
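The core effect, namely metrics shifting with the imbalance ratio even though the classifier itself is unchanged, is straightforward to reproduce. The sketch below holds the true-positive and false-positive rates fixed and sweeps the positive-class fraction; the rates are assumed values, and this is a generic illustration rather than the paper's simulations.

```python
import numpy as np

def metrics_at_imbalance(tpr, fpr, pos_fraction, n=100_000):
    """Accuracy, precision and F1 for a classifier with fixed TPR/FPR
    applied to a population with the given positive-class fraction."""
    pos = n * pos_fraction
    neg = n - pos
    tp, fp = tpr * pos, fpr * neg          # expected confusion-matrix counts
    tn = (1 - fpr) * neg
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return accuracy, precision, f1

# Same classifier (TPR=0.9, FPR=0.1), increasingly rare positive class.
for pos_fraction in (0.5, 0.1, 0.01, 0.001):
    acc, prec, f1 = metrics_at_imbalance(0.9, 0.1, pos_fraction)
    print(f"pos={pos_fraction:>6}: acc={acc:.3f} precision={prec:.3f} f1={f1:.3f}")
```

With these assumed rates, accuracy stays at 0.90 regardless of imbalance while precision and F1 collapse as positives become rare, which is why a single mandated metric can mislead acceptance decisions.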


2009
Vol 13 (3)
pp. 395-409
Author(s):
M. Herbst
H. V. Gupta
M. C. Casper

Abstract. Hydrological model evaluation and identification essentially involves extracting and processing information from model time series. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by the distributed conceptual watershed model NASIM. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.
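A compressed sketch of this pipeline (extract a few signature indices from each Monte-Carlo run, then train a small self-organizing map on them) is given below. It is a self-contained NumPy illustration with made-up signatures and synthetic flows, not the authors' NASIM/SOM code.

```python
import numpy as np

def signatures(flow):
    """Three illustrative signature indices for one simulated flow series."""
    fdc = np.sort(flow)[::-1]                        # flow duration curve
    slope = np.log(fdc[int(0.33 * fdc.size)]) - np.log(fdc[int(0.66 * fdc.size)])
    return np.array([flow.mean(),                    # overall water yield
                     slope,                          # mid-segment FDC slope
                     fdc[: fdc.size // 20].mean()])  # high-flow magnitude

def train_som(data, grid=(8, 8), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: move the winning node (and its grid neighbours,
    Gaussian-weighted) towards each randomly drawn sample."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=grid + (data.shape[1],))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters                              # decay schedules
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        d = ((weights - x) ** 2).sum(axis=-1)
        wy, wx = np.unravel_index(d.argmin(), grid)   # best-matching unit
        h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

# Monte-Carlo runs stand in for watershed-model output (synthetic flows here).
rng = np.random.default_rng(1)
runs = rng.gamma(rng.uniform(1, 4, (500, 1)), 2.0, size=(500, 365))
data = np.array([signatures(r) for r in runs])
data = (data - data.mean(0)) / data.std(0)            # standardize signatures
som = train_som(data)
print(som.shape)  # (8, 8, 3): a 2-D map of the signature space
```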


2020
Author(s):
Martin Mergili
Shiva P. Pudasaini

Complex cascades of landslide processes in changing high-mountain areas have the potential to result in disasters with major loss of life and disruption of critical infrastructure. Simulation tools have been developed to anticipate and, consequently, more effectively manage landslide hazards and risks. However, the detailed prediction of future events remains a major challenge, particularly for complex cascading events. In previous years, we have successfully back-calculated a set of well-documented historic landslide cascades with the mass flow simulation tool r.avaflow, deriving sets of optimized parameters. In the present contribution, we use the findings from these back-calculations to propose two approaches for predictive simulations with an updated version of r.avaflow, based on the multi-phase mass flow model of Pudasaini and Mergili (2019):

(i) Using the minima and maxima of the parameter sets summarized from the back-calculations to simulate areas of certain impact and areas of possible impact, together with ranges of possible travel times and kinetic energies. The limitation of this method is that parameters often depend on the process magnitude and have to be spatially differentiated for zones of similar topography and process type, meaning that the process type has to be prescribed.

(ii) Deducing from the guiding parameter set a function that relates the key model parameters (particularly the friction parameters) to a suitable dynamic flow parameter (we suggest the kinetic energy). This approach has the advantage that the definition of zones becomes obsolete. However, much more research is necessary to constrain the proposed function.

We apply both approaches to the well-documented 2002 Kolka-Karmadon event in the Russian Caucasus, where an initial fall of ice and rock entrained almost an entire glacier, triggering a high-energy ice-rock avalanche followed by a distal mud flow. Both approaches, (i) and (ii), yield empirically mostly adequate results in terms of impact areas, volumes, hydrographs, and flow velocities, leading to the preliminary conclusion that they represent a major step forward in our ability to predict high-mountain process chains. However, some aspects are not fully reproduced by (i), whereas others are not fully reproduced by (ii), calling for further research.

Pudasaini, S. P. and Mergili, M. (2019): Journal of Geophysical Research – Earth Surface, doi:10.1029/2019JF005204
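Approach (i) can be illustrated schematically: given impact footprints simulated with parameter sets spanning the back-calculated range, the area of certain impact is their intersection and the area of possible impact their union. The sketch below is a generic NumPy illustration, not r.avaflow itself; the footprints are hypothetical.

```python
import numpy as np

def impact_zones(footprints):
    """Combine binary impact rasters from runs spanning the back-calculated
    parameter range (e.g. minimum- and maximum-friction runs).

    Returns (certain, possible): cells hit in every run vs. in any run.
    """
    stack = np.stack([np.asarray(f, dtype=bool) for f in footprints])
    certain = stack.all(axis=0)    # impacted under all parameter sets
    possible = stack.any(axis=0)   # impacted under at least one set
    return certain, possible

# Hypothetical footprints from a low-friction and a high-friction run.
low_friction = np.zeros((100, 100), dtype=bool);  low_friction[10:90, 40:60] = True
high_friction = np.zeros_like(low_friction);      high_friction[10:60, 45:55] = True
certain, possible = impact_zones([low_friction, high_friction])
print(certain.sum(), possible.sum())  # certain area <= possible area
```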

