Multilevel and Quasi Monte Carlo methods for the calculation of the Expected Value of Partial Perfect Information

Author(s):  
Wei Fang ◽  
Zhenru Wang ◽  
Mike B Giles ◽  
Christopher H Jackson ◽  
Nicky J Welton ◽  
...  

The expected value of partial perfect information (EVPPI) provides an upper bound on the value of collecting further evidence on a set of inputs to a cost-effectiveness decision model. Standard Monte Carlo (MC) estimation of EVPPI is computationally expensive as it requires nested simulation. Alternatives based on regression approximations to the model have been developed, but are not practicable when the number of uncertain parameters of interest is large and when parameter estimates are highly correlated. The error associated with the regression approximation is difficult to determine, while MC allows the bias and precision to be controlled. In this paper, we explore the potential of quasi-Monte Carlo (QMC) and multilevel Monte Carlo (MLMC) estimation to reduce the computational cost of estimating EVPPI by reducing the variance compared with MC, while preserving accuracy. We develop methods to apply QMC and MLMC to EVPPI, addressing particular challenges that arise where Markov chain Monte Carlo (MCMC) has been used to estimate input parameter distributions. We illustrate the methods using two examples: a simplified decision tree model for treatments for depression, and a complex Markov model for treatments to prevent stroke in atrial fibrillation, both of which use MCMC inputs. We compare the performance of QMC and MLMC with MC and the approximation techniques of generalised additive model (GAM) regression, Gaussian process (GP) regression, and integrated nested Laplace approximations (INLA-GP). We found QMC and MLMC to offer substantial computational savings when parameter sets are large and correlated, and when the EVPPI is large. We also found GP and INLA-GP to be biased in those situations, while GAM cannot estimate EVPPI for large parameter sets.
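
As a point of reference for the standard approach the paper improves on, below is a minimal sketch of nested (two-level) Monte Carlo estimation of EVPPI in Python. The net-benefit function, parameter distributions, and sample sizes are illustrative placeholders, not the authors' decision models.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(d, theta, psi):
    # Placeholder net benefit of decision d given the parameter of interest
    # theta and the remaining parameters psi (illustrative only).
    return (d + 1) * theta - 0.5 * d * psi

def evppi_nested_mc(n_outer=1000, n_inner=1000, decisions=(0, 1)):
    """Nested MC estimate of EVPPI for theta:
    E_theta[ max_d E_{psi|theta} NB(d,theta,psi) ] - max_d E[ NB(d,theta,psi) ]."""
    inner_means = np.zeros((n_outer, len(decisions)))
    for i in range(n_outer):
        theta = rng.normal(1.0, 0.2)               # outer draw of the parameter of interest
        psi = rng.normal(1.0, 0.5, size=n_inner)   # inner draws of the remaining parameters
        for j, d in enumerate(decisions):
            inner_means[i, j] = net_benefit(d, theta, psi).mean()
    value_partial_perfect_info = inner_means.max(axis=1).mean()
    value_current_info = inner_means.mean(axis=0).max()
    return value_partial_perfect_info - value_current_info

print(evppi_nested_mc())
```

The MLMC and QMC estimators developed in the paper reduce the cost of this nested scheme by coupling inner estimates across levels of accuracy and by replacing pseudo-random draws with low-discrepancy point sets, respectively.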



2017 ◽  
Vol 4 (8) ◽  
pp. 170203 ◽  
Author(s):  
D. Crevillén-García ◽  
H. Power

In this study, we apply four Monte Carlo simulation methods, namely Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo, to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field using a direct Karhunen–Loève decomposition; the random terms in this expansion supply the stochastic coefficients in the governing equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
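
To make the input model concrete, here is a minimal sketch of sampling a log-Gaussian conductivity field via a truncated Karhunen–Loève expansion on a 1-D grid. The exponential covariance kernel, correlation length, and truncation level are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def kl_log_gaussian_field(n_points=200, n_terms=20, corr_len=0.1, sigma=1.0, rng=None):
    """Sample K(x) = exp(Z(x)), with Z a zero-mean Gaussian field approximated
    by a truncated (discrete) Karhunen-Loeve expansion."""
    rng = rng or np.random.default_rng()
    x = np.linspace(0.0, 1.0, n_points)
    # Exponential covariance kernel on the grid (illustrative choice).
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Discrete KL decomposition: eigenpairs of the covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(cov)
    idx = np.argsort(eigvals)[::-1][:n_terms]
    lam, phi = eigvals[idx], eigvecs[:, idx]
    xi = rng.standard_normal(n_terms)      # independent N(0,1) KL coefficients
    z = phi @ (np.sqrt(lam) * xi)          # truncated expansion of Z(x)
    return x, np.exp(z)                    # log-Gaussian conductivity K(x)

x, K = kl_log_gaussian_field()
print(K[:5])
```

In a multilevel hierarchy, coarser levels would typically use fewer grid points or fewer KL terms, and a QMC variant would replace the pseudo-random draw of xi with low-discrepancy points mapped through the inverse normal CDF.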


2012 ◽  
Vol 12 (4) ◽  
pp. 1051-1069 ◽  
Author(s):  
Juarez dos Santos Azevedo ◽  
Saulo Pomponet Oliveira

Quasi-Monte Carlo methods and stochastic collocation methods based on sparse grids have become popular for solving stochastic partial differential equations. These methods use deterministic points for multi-dimensional integration or interpolation without suffering from the curse of dimensionality. It is not evident which method is best, especially for random models of physical phenomena. We numerically study the error of quasi-Monte Carlo and sparse grid methods in the context of groundwater flow in heterogeneous media. In particular, we consider the dependence of the variance error on the stochastic dimension and the number of samples/collocation points for steady flow problems in which the hydraulic conductivity is a lognormal process. The suitability of each technique is identified in terms of computational cost and error tolerance.
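
As an illustration of the deterministic point sets such methods rely on, the following sketch draws scrambled Sobol (QMC) points with scipy and maps them to the independent standard normal variables that would feed a lognormal conductivity model; the dimension and the toy quantity of interest are placeholders, not the groundwater-flow problem of the paper.

```python
import numpy as np
from scipy.stats import qmc, norm

dim, n = 8, 1 << 10                      # stochastic dimension and sample count (2^10)

# Scrambled Sobol sequence: deterministic, low-discrepancy points in [0,1)^dim.
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
u = sobol.random(n)

# Map to independent N(0,1) variables, e.g. KL coefficients of a lognormal field.
xi = norm.ppf(u)

# Toy quantity of interest standing in for a flow-solver output.
def qoi(xi):
    return np.exp(0.3 * xi.sum(axis=1))

print("QMC estimate:", qoi(xi).mean())
print("MC  estimate:", qoi(np.random.default_rng(0).standard_normal((n, dim))).mean())
```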


Author(s):  
Mohammad Nezhadali ◽  
Tuhin Bhakta ◽  
Kristian Fossum ◽  
Trond Mannseth

With large amounts of simultaneous data, like inverted seismic data in reservoir modeling, the negative effects of Monte Carlo errors in straightforward ensemble-based data assimilation (DA) are amplified, typically resulting in underestimation of parameter uncertainties. Utilizing lower-fidelity reservoir simulations reduces the computational cost per ensemble member, making it possible to increase the ensemble size without increasing the total computational cost. Increasing the ensemble size reduces Monte Carlo errors and therefore benefits DA results. The use of lower-fidelity reservoir models will, however, introduce modeling errors in addition to those already present in conventional-fidelity simulation results. Multilevel simulations utilize a selection of models for the same entity that form a hierarchy in both fidelity and computational cost. In this work, we estimate and approximately account for the multilevel modeling error (MLME), that is, the part of the total modeling error caused by using a multilevel model hierarchy, instead of a single conventional model, to calculate model forecasts. To this end, four computationally inexpensive approximate MLME correction schemes are considered, and their abilities to correct the multilevel model forecasts for reservoir models with different types of MLME are assessed. The numerical results show a consistent ranking of the MLME correction schemes. Additionally, we assess the performance of the different MLME-corrected model forecasts in the assimilation of inverted seismic data. The posterior parameter estimates from multilevel DA (MLDA) with and without MLME correction are compared with results obtained from conventional single-level DA with localization. It is found that MLDA with and without MLME correction outperforms conventional DA with localization. All four MLME correction schemes yield posterior parameter estimates of similar quality. Results obtained with MLDA without any MLME correction were also of similar quality, indicating some robustness of MLDA toward MLME.
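
The abstract does not spell out the four MLME correction schemes, so the sketch below only illustrates the general idea with a simple additive, ensemble-mean correction estimated from a few members simulated at both fidelities; the function name, data shapes, and choice of correction are assumptions for illustration, not the schemes proposed in the paper.

```python
import numpy as np

def additive_mlme_correction(low_fidelity_forecasts, paired_low, paired_high):
    """Shift low-fidelity forecasts by the mean discrepancy observed on a
    small set of members simulated at both low and high fidelity.

    low_fidelity_forecasts : (n_members, n_data) cheap-model forecasts
    paired_low, paired_high : (n_pairs, n_data) the same few members run
                              at low and high fidelity, respectively
    """
    mlme_bias = (paired_high - paired_low).mean(axis=0)  # estimated multilevel modeling error
    return low_fidelity_forecasts + mlme_bias

rng = np.random.default_rng(1)
truth = rng.normal(size=50)                              # synthetic predicted data
members = truth + 0.2 * rng.normal(size=(200, 50))       # ensemble forecasts, high fidelity
high = members[:5]                                       # few expensive high-fidelity runs
low = members - 0.5 + 0.05 * rng.normal(size=(200, 50))  # cheap model with a systematic bias
corrected = additive_mlme_correction(low, low[:5], high)
print(np.abs(low.mean(axis=0) - truth).mean(), np.abs(corrected.mean(axis=0) - truth).mean())
```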


Author(s):  
Lorella Palluotto ◽  
Nicolas Dumont ◽  
Pedro Rodrigues ◽  
Chai Koren ◽  
Ronan Vicquelin ◽  
...  

The present work assesses different Monte Carlo methods for radiative heat transfer problems in terms of accuracy and computational cost. Achieving high scalability on numerous CPUs with the conventional forward Monte Carlo method is not straightforward. The Emission-based Reciprocity Monte Carlo method (ERM) allows each mesh point to be treated independently of the others, with local monitoring of the statistical error, making it a strong candidate for high scalability. ERM is, however, penalized by slow statistical convergence in cold absorbing regions. This limitation has been overcome by an Optimized ERM (OERM) that uses a frequency distribution function based on the emission distribution at the maximum temperature of the system. Another approach to enhance convergence is the use of low-discrepancy sampling; the resulting quasi-Monte Carlo method is combined with OERM. The efficiencies of the considered Monte Carlo methods are compared.
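
One ingredient named above is the local monitoring of statistical error at each mesh point; the sketch below shows a generic control loop of that kind, accumulating samples until the standard error of the running mean falls below a tolerance. The toy integrand and tolerance are assumptions, not the radiative solver of the study.

```python
import numpy as np

def estimate_with_local_error_control(sample_once, tol=1e-3, batch=1000,
                                      max_samples=10**7, rng=None):
    """Accumulate Monte Carlo samples for one mesh point until the standard
    error of the running mean falls below `tol` (or a sample budget is hit)."""
    rng = rng or np.random.default_rng()
    total, total_sq, n = 0.0, 0.0, 0
    while n < max_samples:
        x = sample_once(rng, batch)
        total += x.sum()
        total_sq += (x**2).sum()
        n += batch
        mean = total / n
        var = max(total_sq / n - mean**2, 0.0)
        stderr = np.sqrt(var / n)
        if stderr < tol:
            break
    return mean, stderr, n

# Toy "radiative source term" sample standing in for traced photon bundles.
toy = lambda rng, m: np.exp(-rng.random(m)) * rng.random(m)
print(estimate_with_local_error_control(toy))
```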


2007 ◽  
Vol 27 (4) ◽  
pp. 448-470 ◽  
Author(s):  
Alan Brennan ◽  
Samer Kharroubi ◽  
Anthony O'Hagan ◽  
Jim Chilcott

Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2328
Author(s):  
Mohammed Alzubaidi ◽  
Kazi N. Hasan ◽  
Lasantha Meegahapola ◽  
Mir Toufikur Rahman

This paper presents a comparative analysis of six sampling techniques to identify an efficient and accurate approach to probabilistic voltage stability assessment in large-scale power systems. The techniques investigated and compared in terms of accuracy and efficiency are Monte Carlo (MC), three versions of quasi-Monte Carlo (QMC), i.e., Sobol, Halton, and Latin hypercube, Markov chain MC (MCMC), and importance sampling (IS), with a view to their suitability for probabilistic voltage stability analysis in large-scale uncertain power systems. The coefficient of determination (R²) and root mean square error (RMSE) are calculated to measure the accuracy and efficiency of the sampling techniques relative to one another. All six sampling techniques provide more than 99% accuracy when a large number of wind speed samples (8760) is generated. In terms of efficiency, the three versions of QMC are the most efficient, providing more than 96% accuracy with only a small number of samples (150) compared with the other techniques.
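
To show how such a comparison can be set up, the sketch below draws wind-speed samples with three of the listed techniques via scipy's qmc module and scores each against a large plain-MC reference using RMSE and R² on distribution quantiles. The Weibull wind-speed model and the quantile-based scoring are illustrative assumptions, not the paper's power-system study.

```python
import numpy as np
from scipy.stats import qmc, weibull_min

rng = np.random.default_rng(0)
shape, scale = 2.0, 8.0                  # illustrative Weibull wind-speed parameters
probs = np.linspace(0.01, 0.99, 99)      # quantiles used as the comparison metric

# Reference: large plain-MC sample (stands in for the 8760-sample baseline).
reference_q = np.quantile(weibull_min.rvs(shape, scale=scale, size=8760, random_state=0), probs)

def score(u):
    """Map uniform samples to wind speeds and score quantiles vs the reference."""
    q = np.quantile(weibull_min.ppf(u, shape, scale=scale), probs)
    rmse = np.sqrt(np.mean((q - reference_q) ** 2))
    ss_res = np.sum((reference_q - q) ** 2)
    ss_tot = np.sum((reference_q - reference_q.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot   # RMSE and coefficient of determination R^2

n = 150  # scipy may warn that 150 is not a power of 2 for Sobol; harmless here
samplers = {
    "Sobol": qmc.Sobol(d=1, scramble=True, seed=1).random(n),
    "Halton": qmc.Halton(d=1, scramble=True, seed=1).random(n),
    "LHS": qmc.LatinHypercube(d=1, seed=1).random(n),
    "MC": rng.random((n, 1)),
}
for name, u in samplers.items():
    rmse, r2 = score(u.ravel())
    print(f"{name:7s} RMSE={rmse:.3f}  R^2={r2:.4f}")
```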


Author(s):  
Stephan Schlupkothen ◽  
Gerd Ascheid

The localization of multiple wireless agents via, for example, distance and/or bearing measurements is challenging, particularly if relying on beacon-to-agent measurements alone is insufficient to guarantee accurate localization. In these cases, agent-to-agent measurements also need to be considered to improve the localization quality. In the context of particle filtering, the computational complexity of tracking many wireless agents is high when relying on conventional schemes. This is because in such schemes, all agents' states are estimated simultaneously using a single filter. To overcome this problem, the concept of multiple particle filtering (MPF), in which an individual filter is used for each agent, has been proposed in the literature. However, due to the necessity of considering agent-to-agent measurements, additional effort is required to derive information on each individual filter from the available likelihoods. This is necessary because the distance and bearing measurements naturally depend on the states of two agents, which, in MPF, are estimated by two separate filters. Because the required likelihood cannot be analytically derived in general, an approximation is needed. To this end, this work extends current state-of-the-art likelihood approximation techniques based on Gaussian approximation under the assumption that the number of agents to be tracked is fixed and known. Moreover, a novel likelihood approximation method is proposed that enables efficient and accurate tracking. The simulations show that the proposed method achieves up to 22% higher accuracy with the same computational complexity as that of existing methods. Thus, efficient and accurate tracking of wireless agents is achieved.
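
As a hedged illustration of the difficulty described above, the sketch below approximates the likelihood of an agent-to-agent distance measurement for one filter by fitting a Gaussian to the other agent's particle cloud and averaging the range likelihood over draws from that Gaussian. This is one generic form of Gaussian approximation for illustration, not necessarily the method proposed in the paper.

```python
import numpy as np

def pairwise_distance_likelihood(x_i, particles_j, weights_j, z, sigma_z=0.5):
    """Approximate p(z | x_i) for a distance measurement z between agent i
    (state x_i, estimated by this filter) and agent j (known only through
    another filter's weighted particles): fit a Gaussian to agent j's cloud
    and average the range-measurement likelihood over draws from it."""
    mean_j = np.average(particles_j, axis=0, weights=weights_j)
    cov_j = np.cov(particles_j.T, aweights=weights_j) + 1e-9 * np.eye(2)
    rng = np.random.default_rng(0)
    samples_j = rng.multivariate_normal(mean_j, cov_j, size=200)  # Gaussian approximation
    dists = np.linalg.norm(samples_j - x_i, axis=1)
    # Gaussian measurement noise on the range measurement.
    lik = np.exp(-0.5 * ((z - dists) / sigma_z) ** 2) / (np.sqrt(2 * np.pi) * sigma_z)
    return lik.mean()

# Toy usage: agent i at a hypothesized position, agent j represented by 500 particles.
particles_j = np.random.default_rng(1).normal([5.0, 0.0], 0.3, size=(500, 2))
weights_j = np.full(500, 1 / 500)
print(pairwise_distance_likelihood(np.array([0.0, 0.0]), particles_j, weights_j, z=5.1))
```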

