Bayesian formulation
Recently Published Documents

TOTAL DOCUMENTS: 80 (five years: 19)
H-INDEX: 16 (five years: 3)

2022 ◽  
Vol 9 ◽  
Author(s):  
Kyle T. Spikes ◽  
Mrinal K. Sen

Correlations of rock-physics model inputs are important to know to help design informative prior models within integrated reservoir-characterization workflows. A Bayesian framework is optimal for determining such correlations. Within that framework, we use velocity and porosity measurements on unconsolidated, dry, and clean sands. Three pressure- and three porosity-dependent rock-physics models are applied to the data to examine relationships among the inputs. As with any Bayesian formulation, we define a prior model and calculate the likelihood in order to evaluate the posterior. With relatively few inputs to consider for each rock-physics model, we found it convenient to sample the posterior exhaustively. The results of the Bayesian analyses are multivariate histograms that indicate the most likely values of the input parameters given the data to which the rock-physics model was fit. When the Bayesian procedure is repeated many times for the same data, but with different prior models, correlations emerge among the input parameters in a rock-physics model. These correlations were not known previously. The implications, for the pressure- and porosity-dependent models examined here, are that these correlations should be utilized when applying these models to other relevant data sets. Furthermore, additional rock-physics models should be examined similarly to determine any potential correlations in their inputs. These correlations can then be exploited in forward and inverse problems posed in reservoir characterization.
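The exhaustive posterior sampling described above can be sketched as a grid evaluation. This is a minimal illustration, not the paper's workflow: the linear velocity-porosity relation, the noise level, and the two inputs `a` and `b` are stand-ins for an actual rock-physics model and its calibrated inputs.

```python
import numpy as np

# Toy data: noisy velocity-porosity measurements (stand-in for the
# real measurements on unconsolidated, dry, clean sands).
rng = np.random.default_rng(0)
true_a, true_b = 2.0, -3.0
porosity = np.linspace(0.1, 0.4, 50)
velocity = true_a + true_b * porosity + rng.normal(0, 0.05, porosity.size)

# Hypothetical forward model: a linear relation with inputs a and b
# (a real rock-physics model would replace this function).
def forward(a, b):
    return a + b * porosity

# Exhaustive posterior evaluation on a 2-D grid: flat prior times
# Gaussian likelihood, normalized over the grid.
a_grid = np.linspace(1.5, 2.5, 101)
b_grid = np.linspace(-3.5, -2.5, 101)
log_post = np.empty((a_grid.size, b_grid.size))
for i, a in enumerate(a_grid):
    for j, b in enumerate(b_grid):
        resid = velocity - forward(a, b)
        log_post[i, j] = -0.5 * np.sum((resid / 0.05) ** 2)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# The grid posterior plays the role of the multivariate histogram:
# its peak gives the most likely inputs given the data.
i_map, j_map = np.unravel_index(post.argmax(), post.shape)
a_map, b_map = a_grid[i_map], b_grid[j_map]
```

With only a handful of inputs per model, this kind of brute-force grid scan stays cheap, which is why exhaustive sampling is convenient here where it would not be in higher dimensions.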


Energies ◽  
2021 ◽  
Vol 14 (19) ◽  
pp. 6410
Author(s):  
Radu Lupu ◽  
Adrian Cantemir Călin ◽  
Cristina Georgiana Zeldea ◽  
Iulia Lupu

In this article, we aim to study systemic risk spillovers for European energy companies and to determine the spillover network of the energy sector with other economic sectors. To examine the spillovers within the energy sector, we employ three systemic risk measures. We then embed the results of these models into a Diebold–Yilmaz framework. Moreover, we consider an entropy procedure to extract a Bayesian formulation of its systemic risk spillover. This allows us to determine which company in our sample contributes the most to systemic risk, which company is the most vulnerable to systemic risk, and the place of the energy sector within risk networks. Our results reveal that all companies exhibit enhanced spillovers during 2008, early 2009, and 2020. These episodes are associated with the dynamics of the global financial crisis and the pandemic crisis. We notice that specific companies are risk drivers in the sector in times of both market turbulence and calm. Lastly, we observe that several economic sectors such as banks, capital goods, consumer services, and diversified financials generate relevant spillovers towards the energy sector.
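The Diebold–Yilmaz machinery mentioned above can be sketched from a forecast-error variance decomposition (FEVD). The matrix entries below are made up for illustration; in practice they come from a fitted VAR on the firms' risk measures.

```python
import numpy as np

# Hypothetical FEVD for three energy companies: entry [i, j] is the
# share of firm i's forecast-error variance attributable to shocks in
# firm j (rows sum to 1).
fevd = np.array([
    [0.70, 0.20, 0.10],
    [0.25, 0.60, 0.15],
    [0.15, 0.25, 0.60],
])

n = fevd.shape[0]
off_diag = fevd - np.diag(np.diag(fevd))

# Diebold-Yilmaz total spillover index: the share of total
# forecast-error variance explained by cross-firm shocks.
total_spillover = off_diag.sum() / n

# Directional spillovers: "to others" (column sums) flags risk drivers,
# "from others" (row sums) flags the most vulnerable firms.
to_others = off_diag.sum(axis=0)
from_others = off_diag.sum(axis=1)
```

Reading off the largest "to others" entry identifies the sample's main risk contributor, which is exactly the kind of ranking the abstract describes for the energy sector.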


Author(s):  
Sarath Sreedharan ◽  
Anagha Kulkarni ◽  
David Smith ◽  
Subbarao Kambhampati

Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation. Further, these measures have been studied under differing assumptions, thus precluding the possibility of designing a single framework that captures these measures under the same assumptions. In this paper, we present a unifying Bayesian framework that models a human observer's evolving beliefs about an agent and thereby define the problem of Generalized Human-Aware Planning. We show that the definitions of interpretability measures such as explicability, legibility, and predictability from the prior literature fall out as special cases of our general framework. Through this framework, we also bring a previously ignored fact to light: human-robot interactions are in effect open-world problems, particularly as a result of modeling the human's beliefs over the agent, since the human may not only hold beliefs unknown to the agent but may also form new hypotheses about the agent when presented with novel or unexpected behaviors.
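The core mechanism, an observer's evolving belief over candidate agent models, reduces to repeated Bayesian updating. The sketch below is illustrative only: the two candidate "goal" models and their action likelihoods are invented, and the paper's framework is far more general.

```python
# Hypothetical action likelihoods P(action | model) for two candidate
# models the observer entertains about the agent.
likelihood = {
    "goal_A": {"left": 0.8, "right": 0.2},
    "goal_B": {"left": 0.3, "right": 0.7},
}

def update(belief, action):
    """One Bayesian belief update after observing the agent act:
    P(model | action) ∝ P(action | model) * P(model)."""
    posterior = {m: belief[m] * likelihood[m][action] for m in belief}
    z = sum(posterior.values())
    return {m: p / z for m, p in posterior.items()}

# The observer starts uniform and revises beliefs as behavior unfolds.
belief = {"goal_A": 0.5, "goal_B": 0.5}
for action in ["left", "left", "right"]:
    belief = update(belief, action)
```

Interpretability measures can then be phrased as properties of this belief trajectory, e.g. legible behavior is behavior that drives the observer's belief toward the agent's true model quickly.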


Author(s):  
William Bains ◽  
Janusz Jurand Petkowski

Abstract The search for biosignatures is likely to generate controversial results, with no single biosignature being clear proof of the presence of life. Bayesian statistical frameworks have been suggested as a tool for testing the effect that a new observation has on our belief in the presence of life on another planet. We test this approach here using the tentative discovery of phosphine on Venus as an example of a possible detection of a biosignature on an otherwise well-characterized planet. We report on a survey of astrobiologists' views on the likelihood of life on Enceladus, Europa, Mars, Titan, and Venus before the announcement of the detection of phosphine in Venus' atmosphere (the Bayesian prior probability) and after the announcement (the posterior probability). Survey results show that respondents have a general view on the likelihood of life on any world, independent of the relative ranking of specific bodies, and that there is a distinct ‘fans of icy moons’ sub-community. The announcement of the potential presence of phosphine on Venus resulted in the community showing a small but significant increase in its confidence that there was life on Venus; nevertheless, the community still considers Venus to be the least likely abode of life among the five targets considered, last after Titan. We derive a Bayesian formulation that explicitly includes both the uncertainty in the interpretation of the signal and the uncertainty in whether phosphine on Venus could have been produced by life. We show that although the community has shown rational restraint about a highly unexpected and still tentative detection, their changing expectations do not fit a Bayesian model.
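The shape of such an update can be sketched with Bayes' rule. All numbers below are invented for illustration, and the paper's formulation separates signal uncertainty from production uncertainty more carefully than this single-likelihood version.

```python
# Minimal Bayes update for a tentative biosignature (numbers illustrative).
prior = 0.10              # prior belief in life on the planet
p_signal_life = 0.50      # P(phosphine signal | life), folding in whether
                          # life there would produce phosphine at all
p_signal_no_life = 0.20   # P(phosphine signal | no life), e.g. abiotic
                          # chemistry or a spurious spectral line

posterior = (p_signal_life * prior) / (
    p_signal_life * prior + p_signal_no_life * (1 - prior)
)
```

Because the detection is tentative (the no-life likelihood is far from zero), the posterior rises only modestly above the prior, the same "small but significant increase" pattern seen in the survey.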


2020 ◽  
Author(s):  
Eleanor Brower Schille-Hudson ◽  
David Landy

Demographic perception—the perception of social quantities of geopolitical scale and social significance—has been extensively studied in cognitive and political science (Citrin & Sides, 2008; Gilens, 2001; Herda, 2013). Regular patterns of over- and under-estimation emerge. Americans greatly overestimate, for instance, the proportion of citizens that identify as gay or Muslim, while underestimating those that are Christian. While these errors have been attributed to social factors such as fear of specific minorities (Gallagher, 2003; Wong, 2007), other work has suggested that these patterns result from the psychophysics of the perception of proportions (Landy, Guay, & Marghetis, 2018). A Bayesian formulation suggests that biases in the estimation of both social proportions and simple visual properties result from a common source: ‘hedging’ uncertain information toward a prior. Here we present a novel lab paradigm and two experiments that manipulate uncertainty in a simple (dot estimation) task, verifying the core assumptions of the Bayesian approach.
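The 'hedging' idea can be sketched with a conjugate Beta-Binomial model: uncertain observations are pulled toward the prior mean. This is a schematic of the mechanism, not the paper's psychophysical model, and the parameter values are invented.

```python
def posterior_mean(observed_prop, n_effective, prior_mean, prior_strength):
    """Posterior mean proportion under a Beta prior: the observation is
    shrunk toward the prior mean. n_effective encodes how certain the
    observation is; prior_strength is the prior's pseudo-count weight."""
    a = prior_mean * prior_strength + observed_prop * n_effective
    b = prior_strength + n_effective
    return a / b

# A small true proportion (e.g. 4% of the population) is overestimated
# when the observer is uncertain, because the estimate is hedged toward
# a vague 50% prior; more certainty moves it back toward the data.
uncertain = posterior_mean(0.04, n_effective=5, prior_mean=0.5, prior_strength=10)
certain = posterior_mean(0.04, n_effective=200, prior_mean=0.5, prior_strength=10)
```

This reproduces the signature pattern from the abstract: small proportions are overestimated, and manipulating uncertainty (here, `n_effective`) modulates the size of the bias.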


2020 ◽  
Author(s):  
Andrea Licciardi ◽  
Kerry Gallagher ◽  
Stephen Anthony Clark

We combine thermochronological data (vitrinite reflectance and apatite fission track) and borehole data (bottom hole temperature and porosity) for thermal history reconstruction in basin modeling. The approach implements a trans-dimensional and hierarchical Bayesian formulation with a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm. The main objective of the inverse problem is to infer the heat flow history below a borehole given the data and a set of geological constraints (e.g. stratigraphy, burial histories, and physical properties of the sediments). The algorithm incorporates an adaptive, data-driven parametrization of the heat flow history, allows for automatic estimation of the relative importance of each data type in the inversion, and provides robust quantification of parameter uncertainties and trade-offs. In addition, the algorithm deals with uncertainties on the imposed geological constraints in two ways. First, the amount of erosion and the timing of an erosional event are explicitly treated as independent parameters to be inferred from the data. Second, uncertainties on compaction parameters and surface temperature history are directly propagated into the final probabilistic solution.
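The accept/reject core of such a sampler can be sketched in fixed dimension. This is only the Metropolis step for a single, constant heat-flow parameter with toy data; the actual rjMcMC algorithm is trans-dimensional, with proposals that add and remove nodes of the heat flow history.

```python
import math
import random

random.seed(1)

# Toy data: noisy observations of a "true" heat flow (values illustrative).
true_q = 60.0  # mW/m^2
data = [true_q + random.gauss(0, 5) for _ in range(20)]
sigma = 5.0

def log_likelihood(q):
    return -0.5 * sum(((d - q) / sigma) ** 2 for d in data)

# Fixed-dimension Metropolis sampling with a symmetric random-walk
# proposal and a flat prior on q.
q = 40.0
samples = []
for _ in range(5000):
    q_prop = q + random.gauss(0, 2.0)
    log_alpha = log_likelihood(q_prop) - log_likelihood(q)
    if math.log(random.random()) < log_alpha:
        q = q_prop
    samples.append(q)

# Discard burn-in; the remaining samples approximate the posterior.
posterior_mean_q = sum(samples[1000:]) / len(samples[1000:])
```

The trans-dimensional extension replaces the scalar `q` with a variable-length heat flow history and adds birth/death moves, with the acceptance ratio modified by the usual reversible-jump terms.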


2020 ◽  
Vol 221 (3) ◽  
pp. 1750-1764
Author(s):  
Philip Blom ◽  
Garrett Euler ◽  
Omar Marcillo ◽  
Fransiska Dannemann Dugick

A Bayesian framework for the association of infrasonic detections is presented and evaluated for analysis at regional propagation scales. A pair-based, joint-likelihood association approach is developed that identifies events by computing the probability that individual detection pairs are attributable to a hypothetical common source and applying hierarchical clustering to identify events from the pair-based analysis. The framework is based on a Bayesian formulation introduced for infrasonic source localization and utilizes the propagation models developed for that application with modifications to improve the numerical efficiency of the analysis. Clustering analysis is completed using hierarchical analysis via weighted linkage for a non-Euclidean distance matrix defined by the negative log-joint-likelihood values. The method is evaluated using regional synthetic data with propagation distances of hundreds of kilometres in order to study the sensitivity of the method to uncertainties and errors in backazimuth and time of arrival. The method is found to be robust and stable for typical uncertainties, able to effectively distinguish noise detections within the data set from those in events, and can be made numerically efficient due to its ease of parallelization.
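The clustering step can be sketched on a small made-up distance matrix. For simplicity this sketch uses single-linkage via connected components under a threshold, rather than the weighted-linkage hierarchical clustering the paper uses; the distance values are invented.

```python
# Pairwise "distances" between four detections, defined as the negative
# log joint likelihood of a common source: small when a pair is likely
# from one event (values illustrative).
neg_log_joint = [
    [0.0, 0.5, 0.4, 9.0],
    [0.5, 0.0, 0.6, 8.5],
    [0.4, 0.6, 0.0, 9.5],
    [9.0, 8.5, 9.5, 0.0],
]
threshold = 2.0
n = len(neg_log_joint)

# Union-find over detections: merge any pair whose joint-likelihood
# distance falls below the threshold (single-linkage clustering).
parent = list(range(n))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

for i in range(n):
    for j in range(i + 1, n):
        if neg_log_joint[i][j] < threshold:
            parent[find(i)] = find(j)

# Detections 0-2 form one event; detection 3 stays isolated (noise or
# a separate event).
events = {}
for i in range(n):
    events.setdefault(find(i), []).append(i)
clusters = sorted(sorted(v) for v in events.values())
```

Because each pairwise likelihood is independent of the others, the expensive step (filling the distance matrix from propagation models) parallelizes trivially, which is the efficiency point made in the abstract.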


2020 ◽  
Vol 29 (9) ◽  
pp. 2445-2469
Author(s):  
Brandon Koch ◽  
David M Vock ◽  
Julian Wolfson ◽  
Laura Boehm Vock

Unbiased estimation of causal effects with observational data requires adjustment for confounding variables that are related to both the outcome and treatment assignment. Standard variable selection techniques aim to maximize the predictive ability of the outcome model, but they ignore covariate associations with treatment and may not adjust for important confounders weakly associated with the outcome. We propose a novel method for estimating causal effects that simultaneously considers models for both outcome and treatment, which we call the bilevel spike and slab causal estimator (BSSCE). By using a Bayesian formulation, BSSCE estimates the posterior distribution of all model parameters and provides straightforward and reliable inference. Spike and slab priors that aim to minimize the mean squared error of the treatment effect estimator are used on each covariate coefficient. Theoretical properties of the treatment effect estimator are derived, justifying the prior used in BSSCE. Simulations show that BSSCE can substantially reduce mean squared error over numerous methods and performs especially well with large numbers of covariates, including situations where the number of covariates is greater than the sample size. We illustrate BSSCE by estimating the causal effect of vasoactive therapy vs. fluid resuscitation on hypotensive episode length for patients in the Multiparameter Intelligent Monitoring in Intensive Care III critical care database.
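A spike and slab prior itself is just a two-component mixture, which the sketch below samples from. This illustrates the prior's structure only, not BSSCE's bilevel construction or its posterior computation; the inclusion probability and slab scale are made-up values.

```python
import random

random.seed(0)

# Spike-and-slab prior: each covariate coefficient is either "off"
# (spike: exactly zero, covariate excluded) with probability 1 - pi,
# or "on" (slab: a diffuse normal, covariate included) with
# probability pi (values illustrative).
def draw_coefficient(pi=0.2, slab_sd=2.0):
    if random.random() < pi:
        return random.gauss(0.0, slab_sd)  # slab: covariate included
    return 0.0                             # spike: covariate excluded

# Prior draws concentrate mass at exactly zero, which is what lets the
# posterior perform variable selection over candidate confounders.
draws = [draw_coefficient() for _ in range(10000)]
share_nonzero = sum(1 for d in draws if d != 0.0) / len(draws)
```

In a bilevel version, the inclusion probability for each covariate would be informed by its association with treatment as well as outcome, so weak-outcome/strong-treatment confounders are not dropped.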

