Thermochemical ablation modeling forward uncertainty analysis—Part I: Numerical methods and effect of model parameters

2017 ◽  
Vol 118 ◽  
pp. 497-509 ◽  
Author(s):  
Alessandro Turchi ◽  
Pietro M. Congedo ◽  
Thierry E. Magin


1996 ◽
Vol 33 (2) ◽  
pp. 79-90 ◽  
Author(s):  
Jian Hua Lei ◽  
Wolfgang Schilling

Physically-based urban rainfall-runoff models are mostly applied without parameter calibration. Given preliminary estimates of the uncertainty of the model parameters, the associated model output uncertainty can be calculated. Monte-Carlo simulation followed by multi-linear regression is used for this analysis. The calculated model output uncertainty can be compared to the uncertainty estimated by comparing model output and observed data. Based on this comparison, systematic or spurious errors can be detected in the observation data, the validity of the model structure can be confirmed, and the most sensitive parameters can be identified. If the calculated model output uncertainty is unacceptably large, the most sensitive parameters should be calibrated to reduce it. Observation data for which systematic and/or spurious errors have been detected should be discarded from the calibration data. This procedure is referred to as preliminary uncertainty analysis; it is illustrated with an example. The HYSTEM program is applied to predict the runoff volume from an experimental catchment with a total area of 68 ha and an impervious area of 20 ha. Based on the preliminary uncertainty analysis, for 7 of 10 events the measured runoff volume is within the calculated uncertainty range, i.e. less than or equal to the calculated model predictive uncertainty. The remaining 3 events most likely include systematic or spurious errors in the observation data (either in the rainfall or the runoff measurements). These events are then discarded from further analysis. After calibrating the model, its predictive uncertainty is estimated.
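As a minimal sketch of this workflow (Monte-Carlo propagation followed by multi-linear regression), the snippet below pushes hypothetical parameter ranges through a toy runoff model. The model, parameter names, and ranges are illustrative stand-ins, not the actual HYSTEM model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a rainfall-runoff model: runoff volume as a
# function of three uncertain parameters (not the actual HYSTEM model).
def runoff_volume(impervious_frac, depression_storage_mm, runoff_coeff, rain_mm=25.0):
    effective_rain = max(rain_mm - depression_storage_mm, 0.0)
    return 680000.0 * impervious_frac * runoff_coeff * effective_rain / 1000.0  # m^3

# Monte-Carlo propagation: sample parameters from preliminary uncertainty ranges.
n = 5000
params = np.column_stack([
    rng.uniform(0.25, 0.35, n),   # impervious fraction
    rng.uniform(0.5, 2.5, n),     # depression storage [mm]
    rng.uniform(0.7, 1.0, n),     # runoff coefficient
])
volumes = np.array([runoff_volume(*p) for p in params])

# Multi-linear regression of output on inputs to rank parameter sensitivity.
X = np.column_stack([np.ones(n), params])
coef, *_ = np.linalg.lstsq(X, volumes, rcond=None)
std_coef = coef[1:] * params.std(axis=0) / volumes.std()  # standardised coefficients
print("uncertainty range (5%-95%):", np.percentile(volumes, [5, 95]))
print("standardised sensitivities:", std_coef)
```

The standardised regression coefficients rank the parameters by their contribution to the output variance, which is how the most sensitive parameters would be singled out for calibration.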


Author(s):  
Saroj Kumar Jha ◽  
Sundar Rajan Krishnan ◽  
Kalyan Kumar Srinivasan

This paper presents simulated ignition delay (ID) results for diesel ignition in a pilot-ignited, partially premixed, low-temperature natural gas (NG) combustion engine. Lean premixed low-temperature NG combustion was achieved using small pilot diesel sprays (2–3% of total fuel energy) injected over a range of injection timings (BOIs ∼ 20°–60° BTDC). Modeling IDs at advanced BOIs (50°–60° BTDC) presented unique challenges. In this study, a single-component droplet evaporation model was used in conjunction with a modified version of the Shell autoignition (SAI) model to obtain ID predictions of the pilot diesel over the range of BOIs (20°–60° BTDC). A detailed uncertainty analysis of several model parameters revealed that Aq and Eq, which affect chain initiation reactions, were the most important parameters (among a few others) for predicting IDs at very lean equivalence ratios. The ID model was validated (within ±10 percent error) against experimentally measured IDs from a single-cylinder engine at 1700 rpm, BMEP = 6 bar, and an intake manifold temperature (Tin) of 75°C. For BOIs close to TDC (e.g., 20° BTDC), the contribution of diesel evaporation times (Δθevap) and droplet diameters to predicted IDs was more significant than at advanced BOIs (e.g., 60° BTDC). Increasing Tin (the most sensitive experimental input variable affecting predicted IDs) led to a reduction in both the physical and chemical components of ID. Hot EGR led to shorter predicted and measured IDs over the range of BOIs, except at 20° BTDC. In general, the thermal effects of hot EGR were found to be more pronounced than either dilution or chemical effects for most BOIs. Finally, the uncertainty analysis results also indicated that ID predictions were most sensitive to the model parameters AP3, Aq, Af1, and Eq, which affect chain initiation and propagation reactions and also contributed the most to the overall uncertainties in IDs.
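To illustrate why chain-initiation parameters such as Aq and Eq dominate the predicted ID, the sketch below perturbs a generic Arrhenius-type initiation rate one parameter at a time. The rate expression and all numerical values are hypothetical, not the modified SAI model used in the paper:

```python
import numpy as np

R = 8.314  # J/(mol K)

# Simplified Arrhenius-type ignition-delay proxy (illustrative only; the paper
# uses a modified Shell autoignition model, not this expression).
def ignition_delay(Aq, Eq, T):
    # Chain-initiation rate ~ Aq * exp(-Eq / (R*T)); ID taken as its inverse.
    return 1.0 / (Aq * np.exp(-Eq / (R * T)))

# One-at-a-time uncertainty sweep around nominal values (hypothetical numbers).
Aq0, Eq0, T = 5.0e11, 1.8e5, 900.0  # 1/s, J/mol, K
base = ignition_delay(Aq0, Eq0, T)
for name, Aq, Eq in [("Aq +10%", 1.1 * Aq0, Eq0), ("Eq +10%", Aq0, 1.1 * Eq0)]:
    delta = (ignition_delay(Aq, Eq, T) - base) / base
    print(f"{name}: relative change in ID = {delta:+.1%}")
```

Because Eq sits in the exponent, a 10% change in it shifts the ID by orders of magnitude more than the same relative change in the pre-exponential Aq, consistent with such parameters dominating the overall ID uncertainty.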


2021 ◽  
Author(s):  
Asfandiyar Bigeldiyev ◽  
Assem Batu ◽  
Aidynbek Berdibekov ◽  
Dmitry Kovyazin ◽  
Dmitry Sidorov ◽  
...  

The current work shows the application of a multiple-realization approach to produce a strategic development plan for one of the mines in the Karaganda coal basin. The presented workflow uses a comprehensive reservoir simulator for history matching of coal pillars on a detailed 3D grid, and applies sensitivity and uncertainty analyses to produce a probabilistic forecast. The suggested workflow differs significantly from the standard approaches previously implemented in the Karaganda Basin. First, a dynamic model has been constructed based on an integrated algorithm of petrophysical interpretation and a full cycle of geological modeling. Second, for the first time in the region, dynamic modeling has been performed via a combination of history matching to the observed degassing data and multiple-realization uncertainty analysis. Third, the resulting model parameters, with their defined ranges of uncertainty, have been incorporated into the forecasting of degassing efficiency in the mine using different well completion technologies. From the hydrodynamic modeling point of view, the coal seam gas (CSG) reservoir is represented as a dual-porosity medium: a coal matrix containing adsorbed gas and a network of natural fractures (cleats) that are initially saturated with water. This approach has allowed a proper description of the dynamic processes occurring in CSG reservoirs. Gas production from coal is governed by gas diffusion in the coal micropores, the degree of fracture intensity, and the fracture permeability. By tuning these parameters within reasonable ranges, we have been able to history match our model to the observed data. Moreover, the uncertainty analysis has produced a range of output parameters (P10, P50, and P90) consistent with what was historically observed. A full cycle of CSG dynamic modeling, including history matching and sensitivity and uncertainty analyses, has been performed to create a robust model with predictive power. Based on the obtained results, different optimization technologies have been simulated for fast and efficient degassing through a multiple-realization probabilistic approach. The coal reservoir presented in this work is characterized by very low effective permeability, and the final degassing efficiency depends on the well-reservoir contact surface. Decreasing the well spacing led to a proportional increase in gas recovery, a behavior very similar to that of unconventional reservoirs. Therefore, vertical and horizontal wells with hydraulic fractures were concluded to be the most efficient way to develop coal seams with low effective permeability in a secondary medium.
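A minimal sketch of the probabilistic forecasting step: many realizations of a toy decline proxy are drawn from uncertain fracture-permeability and diffusion inputs, and percentiles of cumulative recovery are reported. The proxy model and all numbers are hypothetical, not the dual-porosity simulator used in the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for one realization of cumulative gas recovery,
# driven by uncertain fracture permeability and a diffusion factor.
def simulate_recovery(frac_perm_md, diffusion, days=3650):
    # Toy exponential-decline proxy, not an actual dual-porosity simulator.
    rate0 = 120.0 * frac_perm_md            # initial rate, m^3/day
    decline = 0.002 / diffusion             # faster diffusion -> slower decline
    t = np.arange(days)
    return np.trapz(rate0 * np.exp(-decline * t), t)

# Multiple-realization uncertainty analysis: sample the uncertain inputs.
n = 500
recoveries = np.array([
    simulate_recovery(rng.lognormal(0.0, 0.5), rng.uniform(0.5, 2.0))
    for _ in range(n)
])

# Probabilistic forecast (plain percentiles; exceedance conventions vary).
p10, p50, p90 = np.percentile(recoveries, [10, 50, 90])
print(f"P10={p10:.3e}  P50={p50:.3e}  P90={p90:.3e} m^3")
```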


Author(s):  
Anna S. Astrakova ◽  
Dmitry Yu. Kushnir ◽  
Nikolay N. Velker ◽  
Gleb V. Dyatlov ◽  
...  

We propose an approach to the inversion of induction LWD measurements based on the calculation of synthetic signals by artificial neural networks (ANNs) specially trained on a database. The database for ANN training is generated by means of the proprietary 2D solver Pie2d. Validation of the proposed approach and estimation of the computation time are performed for the problem of reconstructing a three-layer model with a wall. We also carry out an uncertainty analysis of the reconstructed model parameters for two tool configurations.
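The following is a minimal sketch of the surrogate-based inversion idea, assuming a cheap stand-in forward model in place of the proprietary Pie2d solver; the parameterization, network size, and signals are all illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical forward model standing in for the 2D solver (Pie2d is
# proprietary): two model parameters -> two synthetic "tool signals".
def forward(m):
    return np.array([np.sin(m[0]) + 0.5 * m[1], np.cos(m[1]) * m[0]])

# Build a training database by sampling the parameter space.
M = rng.uniform([0.0, 0.0], [3.0, 3.0], size=(2000, 2))
S = np.array([forward(m) for m in M])

# Train the ANN surrogate that replaces the expensive solver during inversion.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(M, S)

# Inversion: find parameters whose predicted signals match the observed ones.
observed = forward(np.array([1.2, 0.7]))
misfit = lambda m: np.sum((ann.predict(m.reshape(1, -1))[0] - observed) ** 2)
result = minimize(misfit, x0=np.array([1.5, 1.5]), method="Nelder-Mead")
print("recovered parameters:", result.x)
```

Because each surrogate evaluation is orders of magnitude cheaper than a 2D solve, the misfit minimization (and any repeated inversions for uncertainty analysis) becomes tractable.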


2012 ◽  
Vol 11 (1) ◽  
pp. 49-64
Author(s):  
P. D. Devika ◽  
P. A. Dinesh ◽  
G. Padmavathi ◽  
Rama Krishna Prasad

Mathematical modeling of chemical reactors is of immense interest and of enormous use in the chemical industries. Detailed modeling of heterogeneous catalytic systems is challenging because of the unknown nature of new catalytic materials and the transient behavior of such catalytic systems. The solution of a mathematical model can be used to understand the physical system of interest. In addition, the solution can be used to predict unknown values that would otherwise have to be obtained by conducting actual experiments. Such solutions of mathematical models involving ordinary/partial, linear/non-linear, differential/algebraic equations can be determined using suitable analytical or numerical methods. The present work involves the development of mathematical methods and models to improve the understanding of the relationships between the model parameters and to decrease the number of laboratory experiments. In view of this, detailed modeling of heterogeneous catalytic chemical reactor systems has been considered for the present study.
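As a minimal sketch of solving such a model numerically, the snippet below integrates a Langmuir-Hinshelwood-type rate law for a single reaction A → B with scipy; the rate form and constants are hypothetical, not those of the reactor systems studied here:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative heterogeneous catalytic reaction A -> B with a
# Langmuir-Hinshelwood-type rate (hypothetical parameter values).
k, K_A = 0.8, 2.0   # rate constant [1/s], adsorption constant [m^3/mol]

def rates(t, y):
    cA, cB = y
    r = k * K_A * cA / (1.0 + K_A * cA)  # surface-reaction-limited rate
    return [-r, r]

# Integrate the ODE system for the transient concentrations.
sol = solve_ivp(rates, t_span=(0.0, 10.0), y0=[1.0, 0.0], dense_output=True)
print("final concentrations [A, B]:", sol.y[:, -1])
```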


1997 ◽  
Vol 36 (5) ◽  
pp. 141-148 ◽  
Author(s):  
A. Mailhot ◽  
É. Gaume ◽  
J.-P. Villeneuve

The Storm Water Management Model's quality module is calibrated for a section of Québec City's sewer system using data collected during five rain events. It is shown that even for this simple model, calibration can fail: a similarly good fit between recorded data and simulation results can be obtained with quite different sets of model parameters, leading to great uncertainty in the calibrated parameter values. In order to further investigate the impacts of data scarcity and data uncertainty on calibration, we used a new methodology based on the Metropolis Monte Carlo algorithm. This analysis shows that, for a large amount of calibration data generated by the model itself, small data uncertainties are necessary to significantly decrease the calibrated parameter uncertainties. This also confirms the usefulness of the Metropolis algorithm as a tool for uncertainty analysis in the context of model calibration.
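A minimal Metropolis random-walk sketch for this kind of calibration, with a toy linear model, synthetic observations, and an assumed Gaussian data uncertainty (all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy water-quality model and synthetic "observed" data (both hypothetical).
def model(theta, rain):
    return theta[0] * rain + theta[1]

rain = np.linspace(1.0, 10.0, 20)
observed = model([0.6, 1.5], rain) + rng.normal(0, 0.3, rain.size)
sigma = 0.3                                    # assumed data uncertainty

def log_post(theta):
    if not (0 < theta[0] < 2 and 0 < theta[1] < 5):
        return -np.inf                         # flat prior with bounds
    resid = observed - model(theta, rain)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis random walk over the parameter space.
theta = np.array([1.0, 2.0])
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0, 0.05, 2)  # symmetric proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    chain.append(theta)
chain = np.array(chain[5000:])                 # drop burn-in
print("posterior mean:", chain.mean(axis=0), "posterior std:", chain.std(axis=0))
```

Shrinking `sigma` narrows the posterior spread of the chain, mirroring the finding that small data uncertainties are needed to significantly reduce calibrated parameter uncertainty.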


2021 ◽  
pp. 0272989X2110098
Author(s):  
Fan Yang ◽  
Ana Duarte ◽  
Simon Walker ◽  
Susan Griffin

Cost-effectiveness analysis, routinely used in health care to inform funding decisions, can be extended to consider the impact on health inequality. Distributional cost-effectiveness analysis (DCEA) incorporates socioeconomic differences in model parameters to capture how an intervention would affect both overall population health and differences in health between population groups. In DCEA, uncertainty analysis can consider the decision uncertainty around both impacts (i.e., the probability that an intervention will increase overall health and the probability that it will reduce inequality). Using an illustrative example assessing smoking cessation interventions (2 active interventions and a "no-intervention" arm), we demonstrate how uncertainty analysis can be conducted in DCEA to inform policy recommendations. We perform value of information (VOI) analysis and analysis of covariance (ANCOVA) to identify what additional evidence would add most value to the level of confidence in the DCEA results. The analyses were conducted for both national and local authority-level decisions to explore whether conclusions about decision uncertainty based on national-level estimates could inform local policy. For the comparisons between the active interventions and "no intervention," there was no uncertainty that providing a smoking cessation intervention would increase overall health but also increase inequality. However, there was uncertainty in the direction of both impacts when comparing the 2 active interventions. VOI and ANCOVA show that uncertainty in socioeconomic differences in intervention effectiveness and uptake contributes most to the uncertainty in the DCEA results. This suggests the potential value of collecting additional evidence on intervention-related inequalities for this evaluation. We also found different levels of decision uncertainty between settings, implying that different types and levels of additional evidence are required for decisions in different localities.
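A minimal sketch of how the two decision-uncertainty probabilities could be estimated from probabilistic sensitivity-analysis (PSA) output; the distributions and all numbers are hypothetical, not the DCEA model from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical PSA samples for one intervention vs. comparator:
# total health gain and change in an inequality index per simulation draw.
n = 10000
net_health_gain = rng.normal(120.0, 80.0, n)   # QALYs gained (illustrative)
inequality_change = rng.normal(0.5, 1.2, n)    # >0 means inequality widens

# Decision uncertainty around each impact, and around both jointly.
p_health = np.mean(net_health_gain > 0)
p_equity = np.mean(inequality_change < 0)
p_win_win = np.mean((net_health_gain > 0) & (inequality_change < 0))
print(f"P(increases overall health) = {p_health:.2f}")
print(f"P(reduces inequality)       = {p_equity:.2f}")
print(f"P(both)                     = {p_win_win:.2f}")
```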


2019 ◽  
Author(s):  
Sam Coveney ◽  
Richard H. Clayton

Cardiac cell models reconstruct the action potential and calcium dynamics of cardiac myocytes, and are becoming widely used research tools. These models are highly detailed, with many parameters in the equations that describe current flow through ion channels, pumps, and exchangers in the cell membrane, and so it is difficult to link changes in model inputs to model behaviours. The aim of the present study was to undertake sensitivity and uncertainty analysis of two models of the human atrial action potential. We used Gaussian processes to emulate the way that 11 features of the action potential and calcium transient produced by each model depended on a set of inputs. The emulators were trained by maximising likelihood conditional on a set of design data, obtained from 300 model evaluations. For each model evaluation, the set of inputs was obtained from uniform distributions centred on the default values for each parameter, using Latin hypercube sampling. First-order and total-effect sensitivity indices were calculated for each combination of input and output. First-order indices were well correlated with the square root of sensitivity indices obtained by partial least squares regression of the design data. The sensitivity indices highlighted a difference in the balance of inward and outward currents during the plateau phase of the action potential in each model, with the consequence that changes to one parameter can have opposite effects in the two models. Overall, the interactions among inputs were not as important as the first-order effects, indicating that model parameters tend to have independent effects on the model outputs. This study has shown that Gaussian process emulators are an effective tool for sensitivity and uncertainty analysis of cardiac cell models.

Author summary: The time course of the cardiac action potential is determined by the balance of inward and outward currents across the cell membrane, and these in turn depend on the dynamic behaviour of ion channels, pumps and exchangers in the cell membrane. Cardiac cell models reconstruct the action potential by representing transmembrane current as a set of stiff and nonlinear ordinary differential equations. These models capture biophysical detail, but are complex and have large numbers of parameters, so cause and effect relationships are difficult to identify. In recent years there has been increasing interest in uncertainty and variability in computational models, and a number of tools have been developed. In this study we have used one of these tools, Gaussian process emulators, to compare and contrast two models of the human atrial action potential. We obtained sensitivity indices based on the proportion of variance in a model output that is accounted for by variance in each of the model parameters. These sensitivity indices highlighted the model parameters that had the most influence on the model outputs, and provided a means to make a quantitative comparison between the models.
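A minimal sketch of the emulator-plus-sensitivity workflow: a Gaussian process is trained on a Latin hypercube design of a toy feature function, and crude first-order indices are estimated from the emulator by conditioning on binned inputs. The feature function, input count, and index estimator are simplifications, not the paper's models or exact method:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

# Hypothetical stand-in for one action-potential feature (e.g. a duration
# measure) as a function of three scaled conductances; not an atrial model.
def feature(x):
    return 300.0 - 80.0 * x[:, 0] + 40.0 * x[:, 1] ** 2 + 10.0 * x[:, 0] * x[:, 2]

# Design data: Latin hypercube sample of the inputs, as in the paper.
design = qmc.LatinHypercube(d=3, seed=5).random(300)
y = feature(design)

# Train the Gaussian process emulator on the design data.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0] * 3), normalize_y=True)
gp.fit(design, y)

# Crude first-order indices from the emulator:
# S_i ~ Var( E[Y | x_i] ) / Var(Y), estimated by conditioning on bins.
X = rng.uniform(size=(20000, 3))
Y = gp.predict(X)
for i in range(3):
    bins = np.digitize(X[:, i], np.linspace(0, 1, 21))
    cond_means = [Y[bins == b].mean() for b in range(1, 21)]
    print(f"S{i + 1} ~ {np.var(cond_means) / np.var(Y):.2f}")
```

Once trained, the emulator is cheap enough to evaluate tens of thousands of times, which is what makes variance-based sensitivity estimates affordable for expensive cell models.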


Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 260
Author(s):  
Gabriella Bretti

Differential models, numerical methods and computer simulations play a fundamental role in the applied sciences. Since most differential models inspired by real-world applications have no analytical solutions, the development of numerical methods and efficient simulation algorithms plays a key role in computing the solutions to many relevant problems. Moreover, since the model parameters in mathematical models have interesting scientific interpretations and their values are often unknown, estimation techniques need to be developed for parameter identification against measured data of the observed phenomena. In this respect, this Special Issue collects some important developments in different areas of application.
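As a minimal sketch of parameter identification against measured data, the snippet below fits an unknown decay constant in a hypothetical exponential model with scipy's curve_fit; the model and data are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

# Hypothetical model with unknown parameters c0 (initial value) and k (rate).
def model(t, c0, k):
    return c0 * np.exp(-k * t)

# Synthetic "measured" data: true parameters plus observation noise.
t = np.linspace(0.0, 5.0, 30)
measured = model(t, 2.0, 0.8) + rng.normal(0.0, 0.05, t.size)

# Parameter identification by nonlinear least squares.
popt, pcov = curve_fit(model, t, measured, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on the estimates
print("estimated c0, k:", popt, "+/-", perr)
```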

