Uncertainty Quantification Using Parameter Space Partitioning

Author(s): Ye Tao, Francesco Ferranti, Michel S. Nakhla

2008 · Vol 32 (8) · pp. 1285-1303
Author(s): Mark A. Pitt, Jay I. Myung, Maximiliano Montenegro, James Pooley

2013 · Vol 19 (9) · pp. 1499-1512
Author(s): S. Bergner, M. Sedlmair, T. Moller, S. N. Abdolyousefi, A. Saad

2006 · Vol 113 (1) · pp. 57-83
Author(s): Mark A. Pitt, Woojae Kim, Daniel J. Navarro, Jay I. Myung

2016 · Vol 24 (2) · pp. 617-631
Author(s): Sara Steegen, Francis Tuerlinckx, Wolf Vanpaemel

2019
Author(s): Mark A. Pitt, Woojae Kim, Danielle Navarro, Jay I. Myung

To model behavior, scientists need to know how models behave: what other behaviors a model can produce besides the one generated by participants in an experiment. This is a difficult problem because psychological models are complex (e.g., they have many parameters) and because the behavioral precision of models (e.g., interval-scale performance) often mismatches their testable precision in experiments, where qualitative, ordinal predictions are the norm. Parameter space partitioning is a solution that evaluates model performance at this qualitative level: it partitions the model's parameter space into regions, each corresponding to a distinct qualitative data pattern the model can produce. Three application examples demonstrate its potential and versatility for studying the global behavior of psychological models.
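A minimal sketch of the core idea, assuming a toy two-parameter model (the model, its three conditions, and the grid sweep are hypothetical, and the published method searches the space stochastically rather than exhaustively): sample the parameter space, reduce each point's quantitative predictions to an ordinal data pattern, and group points by pattern to recover the partition.

```python
import itertools
from collections import defaultdict

def model_prediction(a, b):
    """Toy model: interval-scale predictions for three experimental conditions."""
    return (a + b, a * b, a - b)

def data_pattern(pred):
    """Reduce quantitative predictions to an ordinal pattern:
    the rank order of the conditions."""
    return tuple(sorted(range(len(pred)), key=lambda i: pred[i]))

# Sweep a grid over the bounded parameter space and record which
# qualitative pattern each parameter setting produces.
grid = [i / 10 for i in range(-10, 11)]
regions = defaultdict(list)
for a, b in itertools.product(grid, grid):
    regions[data_pattern(model_prediction(a, b))].append((a, b))

# Each key is a distinct data pattern; the share of sampled points in its
# region estimates how much of the parameter space produces that behavior.
for pattern, points in sorted(regions.items(), key=lambda kv: -len(kv[1])):
    print(pattern, f"{len(points) / len(grid)**2:.1%} of sampled space")
```

The relative volume of each region is what makes the analysis global: it shows not only whether the model can produce the observed pattern, but how much of its parameter space is devoted to that pattern versus others.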


2021 · Vol 2 (3) · pp. 1-37
Author(s): Hans Walter Behrens, K. Selçuk Candan, Xilun Chen, Yash Garg, Mao-Lin Li, ...

Urban systems are characterized by complexity and dynamicity. Data-driven simulations represent a promising approach to understanding and predicting complex dynamic processes in the presence of shifting demands of urban systems. Yet today's silo-based, decoupled simulation engines fail to provide an end-to-end view of the complex urban system, preventing informed decision-making. In this article, we present DataStorm, which supports integration of existing simulation, analysis, and visualization components into integrated workflows. DataStorm provides a flow engine, DataStorm-FE, for coordinating data and decision flows among multiple actors (each representing a model, analytic operation, or decision criterion) and enables ensemble planning and optimization across cloud resources. DataStorm natively supports simulation ensemble creation through parameter space sampling to decide which simulations to run, as well as distributed instantiation and parallel execution of simulation instances on cluster resources. Recognizing that simulation ensembles are inherently sparse relative to the potential parameter space, we also present a density-boosting partition-stitch sampling scheme that increases the effective density of the simulation ensemble by partitioning the parameter space into sub-spaces, complemented with an efficient stitching mechanism that leverages partial and imperfect knowledge from partial dynamical systems to obtain a global view of the complex urban process being simulated.
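As a rough illustration of ensemble creation by parameter space sampling with sub-space partitioning, the sketch below splits a hypothetical two-parameter space for an urban simulation into sub-ranges and samples each separately; the parameter names are invented, and DataStorm's actual APIs, distributed execution, and stitching mechanism are not reproduced here.

```python
import random

# Hypothetical input-parameter ranges for an urban simulation.
PARAM_RANGES = {
    "population_growth": (0.00, 0.05),
    "transit_capacity": (1000.0, 5000.0),
}

def sample_point(ranges, rng):
    """Draw one simulation configuration uniformly from the given ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

def partition(ranges, key, k):
    """Split one dimension into k equal sub-ranges, yielding k sub-spaces."""
    lo, hi = ranges[key]
    step = (hi - lo) / k
    for i in range(k):
        sub = dict(ranges)
        sub[key] = (lo + i * step, lo + (i + 1) * step)
        yield sub

rng = random.Random(42)
ensemble = []
# Sampling each sub-space separately keeps coverage even across the whole
# space ("boosting" effective density relative to one global random draw).
for subspace in partition(PARAM_RANGES, key="population_growth", k=4):
    ensemble.extend(sample_point(subspace, rng) for _ in range(8))

print(f"{len(ensemble)} simulation configurations queued for execution")
```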


2021
Author(s): Emily A. Hill, Sebastian H. R. Rosier, G. Hilmar Gudmundsson, Matthew Collins

Abstract. The future of the Antarctic Ice Sheet in response to climate warming is one of the largest sources of uncertainty in estimates of future changes in global mean sea level (∆GMSL). Mass loss is currently concentrated in regions of warm circumpolar deep water, but it is unclear how ice shelves currently surrounded by relatively cold ocean waters will respond to climatic changes in the future. Studies suggest that warm water could flush the Filchner-Ronne (FR) ice shelf cavity during the 21st century, but the inland ice sheet response to a drastic increase in ice shelf melt rates is poorly known. Here, we use an ice flow model and uncertainty quantification approach to project the GMSL contribution of the FR basin under RCP emissions scenarios, and assess the forward propagation and proportional contribution of uncertainties in model parameters (related to ice dynamics and atmospheric/oceanic forcing) on these projections. Our probabilistic projections, derived from an extensive sample of the parameter space using a surrogate model, reveal that the FR basin is unlikely to contribute positively to sea level rise by the 23rd century. This is primarily due to the mitigating effect of increased accumulation with warming, which is capable of suppressing ice loss associated with ocean-driven increases in sub-shelf melt. Mass gain (negative ∆GMSL) from the FR basin increases with warming, but uncertainties in these projections also become larger. In the highest emission scenario, RCP 8.5, ∆GMSL is likely to range from −103 to 26 mm, and this large spread can be apportioned predominantly to uncertainties in parameters driving increases in precipitation (30 %) and sub-shelf melting (44 %). There is potential, within the bounds of our input parameter space, for major collapse and retreat of ice streams feeding the FR ice shelf, and a substantial positive contribution to GMSL (up to approx. 300 mm), but we consider such a scenario to be very unlikely. Adopting uncertainty quantification techniques in future studies will help to provide robust estimates of potential sea level rise and further identify target areas for constraining projections.
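A minimal sketch of the surrogate-based uncertainty propagation described above, with a hypothetical linear emulator and made-up parameter distributions standing in for the paper's calibrated surrogate model: draw many parameter samples, evaluate the cheap surrogate instead of the full ice flow model, then summarize the spread of projected ∆GMSL and each parameter's share of the variance.

```python
import random

def surrogate_dgmsl(precip_scale, melt_scale, viscosity):
    """Hypothetical linear emulator: Delta-GMSL contribution in mm."""
    return -120.0 * precip_scale + 90.0 * melt_scale + 10.0 * viscosity

# Made-up input uncertainties: (mean, standard deviation) per parameter.
PRIORS = {"precip_scale": (1.0, 0.2), "melt_scale": (1.0, 0.3), "viscosity": (0.0, 0.5)}

rng = random.Random(0)
values = sorted(
    surrogate_dgmsl(**{k: rng.gauss(mu, sd) for k, (mu, sd) in PRIORS.items()})
    for _ in range(10_000)
)
lo, hi = values[int(0.05 * len(values))], values[int(0.95 * len(values))]
print(f"90% interval for ΔGMSL: {lo:.0f} to {hi:.0f} mm")

# For a linear surrogate, the first-order share of output variance from
# parameter i is (coefficient_i * sigma_i)^2 divided by the total variance.
coeffs = {"precip_scale": -120.0, "melt_scale": 90.0, "viscosity": 10.0}
contrib = {k: (coeffs[k] * sd) ** 2 for k, (_, sd) in PRIORS.items()}
total = sum(contrib.values())
for name, c in contrib.items():
    print(f"{name}: {c / total:.0%} of projection variance")
```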


SPE Journal · 2009 · Vol 15 (01) · pp. 31-38
Author(s): Linah Mohamed, Mike Christie, Vasily Demyanov

Summary: History matching and uncertainty quantification are currently two important research topics in reservoir simulation. In the Bayesian approach, we start with prior information about a reservoir (e.g., from analog outcrop data) and update our reservoir models with observations (e.g., from production data or time-lapse seismic). The goal of this activity is often to generate multiple models that match the history and use the models to quantify uncertainties in predictions of reservoir performance. A critical aspect of generating multiple history-matched models is the sampling algorithm used to generate the models. Algorithms that have been studied include gradient methods, genetic algorithms, and the ensemble Kalman filter (EnKF). This paper investigates the efficiency of three stochastic sampling algorithms: the Hamiltonian Monte Carlo (HMC) algorithm, the Particle Swarm Optimization (PSO) algorithm, and the Neighbourhood Algorithm (NA). HMC is a Markov chain Monte Carlo (MCMC) technique that uses Hamiltonian dynamics to achieve larger jumps than are possible with other MCMC techniques. PSO is a swarm intelligence algorithm that uses similar dynamics to HMC to guide the search but incorporates acceleration and damping parameters to provide rapid convergence to possible multiple minima. NA is a sampling technique that uses the properties of Voronoi cells in high dimensions to achieve multiple history-matched models. The algorithms are compared by generating multiple history-matched reservoir models and comparing the Bayesian credible intervals (p10-p50-p90) produced by each algorithm. We show that all the algorithms are able to find equivalent match qualities for this example, but that some algorithms are able to find good-fitting models quickly, whereas others are able to find a more diverse set of models in parameter space. The effects of the different sampling of model parameter space are compared in terms of the p10-p50-p90 uncertainty envelopes in forecast oil rate. These results show that algorithms based on Hamiltonian dynamics and swarm intelligence concepts have the potential to be effective tools in uncertainty quantification in the oil industry.
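For illustration, here is a minimal particle swarm optimization loop of the kind the paper evaluates, minimizing a hypothetical misfit over a two-dimensional parameter space; a real history-matching workflow would wrap a reservoir simulator in the misfit function and retain the sampled ensemble of models to form the p10-p50-p90 envelopes, rather than keeping only the single best match.

```python
import random

def misfit(x):
    """Hypothetical history-match error: distance from the 'true' parameters
    (a stand-in for mismatch between simulated and observed production)."""
    return sum((xi - ti) ** 2 for xi, ti in zip(x, (0.3, 0.7)))

rng = random.Random(1)
dim, n_particles = 2, 20
w, c1, c2 = 0.7, 1.5, 1.5           # inertia (damping) and acceleration terms
pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]          # each particle's best position so far
gbest = min(pbest, key=misfit)       # best position found by the swarm

for _ in range(100):
    for i in range(n_particles):
        for d in range(dim):
            # Pull each particle toward its own best and the swarm's best.
            vel[i][d] = (w * vel[i][d]
                         + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                         + c2 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if misfit(pos[i]) < misfit(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=misfit)

print("best-fit parameters:", [round(v, 3) for v in gbest])
```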

