Estimation in Parallel Randomized Experiments

1981 ◽  
Vol 6 (4) ◽  
pp. 377-401 ◽  
Author(s):  
Donald B. Rubin

Many studies comparing new treatments to standard treatments consist of parallel randomized experiments. In the example considered here, randomized experiments were conducted in eight schools to determine the effectiveness of special coaching programs for the SAT. The purpose here is to illustrate Bayesian and empirical Bayesian techniques that can be used to help summarize the evidence in such data about differences among treatments, thereby obtaining improved estimates of the treatment effect in each experiment, including the one having the largest observed effect. Three main tools are illustrated: 1) graphical techniques for displaying sensitivity within an empirical Bayes framework, 2) simple simulation techniques for generating Bayesian posterior distributions of individual effects and the largest effect, and 3) methods for monitoring the adequacy of the Bayesian model specification by simulating the posterior predictive distribution in hypothetical replications of the same treatments in the same eight schools.
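
As an illustration of tool 2, the posterior of the hierarchical normal model used for such parallel experiments can be simulated directly. The sketch below assumes flat priors on the overall mean and the between-school standard deviation and uses illustrative effect estimates and standard errors, not the paper's reported values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative per-school effect estimates and standard errors (stand-ins,
# not the values reported in the paper).
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])

def log_post_tau(tau):
    """Marginal log-posterior of the between-school SD tau
    (flat priors on the overall mean mu and on tau)."""
    v = sigma**2 + tau**2
    v_mu = 1.0 / np.sum(1.0 / v)
    mu_hat = v_mu * np.sum(y / v)
    return (0.5 * np.log(v_mu) - 0.5 * np.sum(np.log(v))
            - 0.5 * np.sum((y - mu_hat) ** 2 / v))

# 1) Draw tau from its marginal posterior on a grid.
tau_grid = np.linspace(0.01, 40.0, 2000)
log_p = np.array([log_post_tau(t) for t in tau_grid])
w = np.exp(log_p - log_p.max())
tau = rng.choice(tau_grid, size=5000, p=w / w.sum())

# 2) Draw the overall mean mu given tau.
v = sigma**2 + tau[:, None] ** 2
v_mu = 1.0 / np.sum(1.0 / v, axis=1)
mu = rng.normal(v_mu * np.sum(y / v, axis=1), np.sqrt(v_mu))

# 3) Draw each school's effect given mu and tau (partial pooling toward mu).
prec = 1.0 / sigma**2 + 1.0 / tau[:, None] ** 2
theta = rng.normal((y / sigma**2 + mu[:, None] / tau[:, None] ** 2) / prec,
                   np.sqrt(1.0 / prec))

print("posterior mean effect per school:", theta.mean(axis=0).round(1))
print("Pr(school 1 has the largest effect):",
      np.mean(theta.argmax(axis=1) == 0).round(2))
```

The proportion of draws in which a given school attains the maximum estimates the posterior probability that it has the largest effect, the quantity highlighted above.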

Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
A. Corberán-Vallet ◽  
F. J. Santonja ◽  
M. Jornet-Sanz ◽  
R.-J. Villanueva

We present a Bayesian stochastic susceptible-exposed-infectious-recovered (SEIR) model in discrete time to understand chickenpox transmission in the Valencian Community, Spain. Over the last decades, different strategies have been introduced into the routine immunization program in order to reduce the impact of this disease, which remains a great public health concern. In this setting, a model capable of closely explaining the dynamics of chickenpox under the different vaccination strategies is of utmost importance for assessing their effectiveness. The proposed model takes into account both heterogeneous mixing of individuals in the population and the inherent stochasticity in the transmission of the disease. As shown in a comparative study, these assumptions are fundamental to properly describing the evolution of the disease. The Bayesian analysis of the model allows us to calculate the posterior distribution of the model parameters and the posterior predictive distribution of chickenpox incidence, which facilitates the computation of point forecasts and prediction intervals.
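
A discrete-time stochastic SEIR step of the kind described above can be sketched as a chain-binomial update. This is a generic illustration rather than the authors' fitted model; all parameter values and initial compartment sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def seir_step(S, E, I, R, beta, sigma, gamma):
    """One chain-binomial step of a discrete-time stochastic SEIR model.
    beta, sigma, gamma are per-step transmission, incubation and recovery
    parameters (hypothetical values are used below)."""
    N = S + E + I + R
    new_E = rng.binomial(S, 1.0 - np.exp(-beta * I / N))  # S -> E
    new_I = rng.binomial(E, 1.0 - np.exp(-sigma))         # E -> I
    new_R = rng.binomial(I, 1.0 - np.exp(-gamma))         # I -> R
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R, new_I

# Forward-simulate one year of weekly incidence.  In a Bayesian analysis,
# repeating this with parameters drawn from their posterior yields the
# posterior predictive distribution of incidence.
S, E, I, R = 99000, 50, 50, 900
incidence = []
for week in range(52):
    S, E, I, R, new_cases = seir_step(S, E, I, R, beta=0.9, sigma=0.5, gamma=0.7)
    incidence.append(new_cases)
print(incidence[:10])
```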


Author(s):  
Ryota Wada ◽  
Takuji Waseda

Extreme value estimation of significant wave height is essential for designing robust and economically efficient ocean structures. In most cases, however, the duration of observational wave data is not sufficient for a precise estimate of the extreme value for the desired return period. When we focus on hurricane-dominated oceans, the situation gets worse. The uncertainty of the extreme value estimation is the main topic of this paper. We use the Likelihood-Weighted Method (LWM), a method that can quantify the uncertainty of extreme value estimation in terms of aleatory and epistemic uncertainty. We considered the extreme values of hurricane-dominated regions such as Japan and the Gulf of Mexico. Though observational data are available for more than 30 years in the Gulf of Mexico, the epistemic uncertainty for the 100-year return period value is notably large. Extreme value estimation from a 10-year duration of observational data, which is a typical case in Japan, gave a coefficient of variation of 43%. This may have an impact on the design rules for ocean structures. Moreover, accounting for epistemic uncertainty gives a rational explanation for past extreme events that were considered abnormal. The Expected Extreme Value (EEV) distribution, which is the posterior predictive distribution, gives better-defined extreme values by taking the epistemic uncertainty into account.
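
The idea of weighting extreme-value estimates by the likelihood, so that the spread across plausible parameter values expresses epistemic uncertainty, can be sketched with a Gumbel fit to synthetic annual maxima. This is a simplified stand-in for LWM, not the authors' implementation, and the data are placeholders.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)

# Synthetic annual-maximum significant wave heights (m); placeholders only.
hs = gumbel_r.rvs(loc=6.0, scale=1.5, size=10, random_state=rng)

# Grid over Gumbel location/scale; weights proportional to the likelihood
# (equivalently, a posterior under a flat prior on the grid).
loc, scale = np.meshgrid(np.linspace(3.0, 10.0, 120), np.linspace(0.3, 4.0, 120))
loglik = gumbel_r.logpdf(hs[:, None, None], loc=loc, scale=scale).sum(axis=0)
w = np.exp(loglik - loglik.max())
w /= w.sum()

# 100-year return level for each candidate parameter pair; its weighted
# spread quantifies the epistemic uncertainty of the estimate.
rl = gumbel_r.ppf(1.0 - 1.0 / 100.0, loc=loc, scale=scale)
mean_rl = np.sum(w * rl)
cov_rl = np.sqrt(np.sum(w * (rl - mean_rl) ** 2)) / mean_rl
print(f"100-year Hs around {mean_rl:.1f} m, coefficient of variation {cov_rl:.0%}")
```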


2002 ◽  
Vol 12 (05) ◽  
pp. 369-379
Author(s):  
J. SVENSSON

A training algorithm is introduced that takes into account a priori known errors on both the inputs and outputs of an MLP network. The new cost function introduced for this case is based on a linear approximation of the network function over the input distribution for a given input pattern. Update formulas, in the form of the gradient of the new cost function, are given for an MLP network, together with expressions for the Hessian matrix, which is later used to calculate error bars in a Bayesian framework. The error bars thus derived are discussed in relation to the more commonly used width of the target posterior predictive distribution. It is also shown that taking known input uncertainties into account in the way suggested in this article has a strong regularizing effect on the solution.
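
One common way to fold known input errors into the cost is to propagate them through the input Jacobian of the network, inflating the effective output variance. The sketch below shows such a cost for a one-hidden-layer MLP; it may differ from the article's exact expression, and all data and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer tanh MLP with a scalar output; returns output and hidden."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def noisy_input_cost(x, t, sx2, st2, params):
    """Negative log-likelihood in which the linearized effect of input noise
    (variance sx2 per input component) inflates the target noise variance st2.
    A generic form of such a cost; the article's exact expression may differ."""
    W1, b1, W2, b2 = params
    f, h = mlp(x, W1, b1, W2, b2)                   # f: (N, 1), h: (N, H)
    J = ((1.0 - h**2) * W2.ravel()) @ W1.T          # (N, d) input Jacobian of f
    var = st2 + np.sum(J**2 * sx2, axis=1, keepdims=True)
    return 0.5 * np.sum((t - f) ** 2 / var + np.log(var))

# Tiny usage with hypothetical data and noise levels; note that larger input
# Jacobians inflate `var`, which is the source of the regularizing effect.
d, H, N = 3, 5, 50
x = rng.standard_normal((N, d))
t = np.sin(x[:, :1]) + 0.1 * rng.standard_normal((N, 1))
params = (0.1 * rng.standard_normal((d, H)), np.zeros(H),
          0.1 * rng.standard_normal((H, 1)), np.zeros(1))
print(noisy_input_cost(x, t, sx2=0.05, st2=0.01, params=params))
```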


2013 ◽  
Vol 4 (4) ◽  
pp. 25-32
Author(s):  
V. S Zadionchenko ◽  
G. G Shehyan ◽  
A. A Yalymov

Despite advances in revealing the mechanisms of pathogenesis and in enhancing the effectiveness of treatment of cardiovascular disease (CVD), the latter continues to be the leading cause of death and disability in the population. In this regard, the search for new treatments for CVD remains highly relevant in modern cardiology. Many classes of drugs are used in the treatment of CVD, among them the calcium antagonists (CA). This class of drugs has been successfully used in the treatment of patients with arterial hypertension (AH) and coronary heart disease [1-3, 5]. CA are a heterogeneous group of drugs that share a similar mechanism of action but differ in a number of properties, including pharmacokinetics, tissue selectivity, and effect on heart rate. The defining feature of all CA is the ability to reversibly inhibit calcium current through the slow calcium channels. These agents have been used in cardiology since the end of the 1960s and have since become so widely popular that in most developed countries they rank among the most frequently prescribed drugs for the treatment of cardiovascular disease. This is due, on the one hand, to the high clinical efficacy of CA and, on the other, to the relatively small number of contraindications to their prescription and the comparatively small number of side effects [1, 2, 4, 5].


2019 ◽  
Author(s):  
Donald Ray Williams ◽  
Philippe Rast ◽  
Luis Pericchi ◽  
Joris Mulder

Gaussian graphical models (GGMs) are commonly used to characterize conditional independence structures (i.e., networks) of psychological constructs. Recently, attention has shifted from estimating a single network to comparing networks estimated from various sub-populations, with the primary aim of detecting differences or demonstrating replicability. We introduce two novel Bayesian methods for comparing networks that explicitly address these aims. The first is based on the posterior predictive distribution, with Kullback-Leibler divergence as the discrepancy measure, and tests for differences between two multivariate normal distributions. The second approach makes use of Bayesian model selection, with the Bayes factor, and allows for gaining evidence for invariant network structures. This overcomes limitations of current approaches in the literature that use classical hypothesis testing, where it is only possible to determine whether groups are significantly different from each other. With simulation we show that the posterior predictive method is approximately calibrated under the null hypothesis ($\alpha = 0.05$) and has more power to detect differences than alternative approaches. We then examine the sample sizes necessary for detecting invariant network structures with Bayesian hypothesis testing, as well as how this is influenced by the choice of prior distribution. The methods are applied to post-traumatic stress disorder symptoms that were measured in four groups. We end by summarizing our major contribution, namely two novel methods for comparing GGMs, whose applicability extends beyond the social-behavioral sciences. The methods have been implemented in the R package BGGM.
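
The first method's core ingredients, Kullback-Leibler divergence between multivariate normals and a predictive reference distribution under a "no difference" model, can be sketched as follows. This is a simplified plug-in version of the idea, not the BGGM implementation, and the two-group data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_mvn(m0, S0, m1, S1):
    """Kullback-Leibler divergence KL(N(m0, S0) || N(m1, S1))."""
    d = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def sym_kl(X1, X2):
    """Symmetrized KL between the normals fitted to each group."""
    m1, S1 = X1.mean(axis=0), np.cov(X1, rowvar=False)
    m2, S2 = X2.mean(axis=0), np.cov(X2, rowvar=False)
    return kl_mvn(m1, S1, m2, S2) + kl_mvn(m2, S2, m1, S1)

def predictive_check(X1, X2, n_rep=1000):
    """Plug-in predictive check of 'no network difference': replicate both
    groups from a single normal fitted to the pooled data and compare the
    observed symmetrized KL with its replicated distribution."""
    pooled = np.vstack([X1, X2])
    m, S = pooled.mean(axis=0), np.cov(pooled, rowvar=False)
    obs = sym_kl(X1, X2)
    reps = [sym_kl(rng.multivariate_normal(m, S, len(X1)),
                   rng.multivariate_normal(m, S, len(X2))) for _ in range(n_rep)]
    return float(np.mean(np.array(reps) >= obs))     # predictive p-value

# Synthetic two-group data on 5 variables (the null model is true here).
print(predictive_check(rng.standard_normal((200, 5)), rng.standard_normal((200, 5))))
```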


Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2921
Author(s):  
Stefano Cabras

This work proposes a semi-parametric approach to estimate the evolution of COVID-19 (SARS-CoV-2) in Spain. Considering the sequences of 14-day cumulative incidence of all Spanish regions, it combines modern Deep Learning (DL) techniques for analyzing sequences with the usual Bayesian Poisson-Gamma model for counts. The DL model provides a suitable description of the observed time series of counts, but it cannot give a reliable uncertainty quantification. In the proposed modelling approach, the DL predictions play the role of an expert elicitation of the expected number of counts and of its reliability. Finally, the posterior predictive distribution of counts is obtained in a standard Bayesian analysis using the well-known Poisson-Gamma model. The model allows one to predict the future evolution of the sequences in all regions and to estimate the consequences of possible scenarios.
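
The Poisson-Gamma step can be sketched directly: a hypothetical DL point prediction and a prior "sample size" expressing its reliability define the Gamma prior, the observed counts update it conjugately, and the posterior predictive of the next count is negative binomial. All numbers below are illustrative.

```python
import numpy as np
from scipy.stats import nbinom

# Hypothetical DL point prediction of the expected count and a prior
# "sample size" k expressing how much that prediction is trusted.
dl_pred, k = 120.0, 5.0
a0, b0 = dl_pred * k, k                      # Gamma(a0, rate=b0) prior on the rate

y_obs = np.array([118, 131, 125])            # hypothetical observed counts
a1, b1 = a0 + y_obs.sum(), b0 + len(y_obs)   # conjugate Gamma posterior

# Posterior predictive of the next count is negative binomial.
pred = nbinom(a1, b1 / (b1 + 1.0))
print("predictive mean:", pred.mean())
print("90% prediction interval:", pred.ppf([0.05, 0.95]))
```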


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

While one of the most common uses of Bayes’ Theorem is in the statistical analysis of a dataset (i.e., statistical modeling), this chapter examines another application of Gibbs sampling: parameter estimation for simple linear regression. In the “Survivor Problem,” the chapter considers the relationship between how many days a contestant lasts in a reality-show competition as a function of how many years of formal education they have. This chapter is a bit more complicated than the previous chapter because it involves estimation of the joint posterior distribution of three parameters. As in earlier chapters, the estimation process is described in detail on a step-by-step basis. Finally, the posterior predictive distribution is estimated and discussed. By the end of the chapter, the reader will have a firm understanding of the following concepts: linear equation, sums of squares, posterior predictive distribution, and linear regression with Markov Chain Monte Carlo and Gibbs sampling.
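
A minimal Gibbs sampler for this three-parameter regression (intercept, slope, and error precision) with conjugate full conditionals might look as follows. The data are illustrative stand-ins, not the book's "Survivor" dataset, and the priors are generic vague choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: days lasted vs. years of formal education
# (not the book's "Survivor" dataset).
educ = rng.uniform(10.0, 20.0, 30)
days = 1.0 + 2.0 * educ + rng.normal(0.0, 5.0, 30)

def gibbs_linreg(x, y, n_iter=5000, prior_var=1e4, tau_a=0.01, tau_b=0.01):
    """Gibbs sampler for y = b0 + b1*x + e, e ~ N(0, 1/tau), with vague
    N(0, prior_var) priors on b0, b1 and a Gamma(tau_a, tau_b) prior on tau."""
    n, b0, b1, tau = len(y), 0.0, 0.0, 1.0
    draws = np.empty((n_iter, 3))
    for i in range(n_iter):
        prec = n * tau + 1.0 / prior_var                      # b0 | b1, tau
        b0 = rng.normal(tau * np.sum(y - b1 * x) / prec, 1.0 / np.sqrt(prec))
        prec = tau * np.sum(x**2) + 1.0 / prior_var           # b1 | b0, tau
        b1 = rng.normal(tau * np.sum(x * (y - b0)) / prec, 1.0 / np.sqrt(prec))
        resid = y - b0 - b1 * x                               # tau | b0, b1
        tau = rng.gamma(tau_a + n / 2.0, 1.0 / (tau_b + 0.5 * np.sum(resid**2)))
        draws[i] = b0, b1, tau
    return draws

b0, b1, tau = gibbs_linreg(educ, days)[2500:].T               # discard burn-in
# Posterior predictive: days lasted for a new contestant with 16 years of education.
y_new = rng.normal(b0 + b1 * 16.0, 1.0 / np.sqrt(tau))
print("predictive mean:", y_new.mean().round(1),
      "90% interval:", np.percentile(y_new, [5, 95]).round(1))
```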


2005 ◽  
Author(s):  
Tae-Uk Kim ◽  
In Hee Hwang ◽  
Hyo-Chol Sin

Optimal design of composite laminates with uncertain in-plane loadings and material properties is considered. The stacking sequence is designed for maximum buckling load based on an anti-optimization approach. To account for these uncertain properties, convex modeling and Monte Carlo simulation techniques are used in calculating the objective function. For the stacking sequence optimization, a modified genetic algorithm is used that easily handles the discrete ply angles and the constraints. Numerical results are given for rectangular laminates of various aspect ratios. Optimal solutions are obtained for both the deterministic and the stochastic cases, demonstrating the importance of considering uncertainty: when both designs are subjected to uncertain loads, the buckling load carried by a deterministic design is much lower than that carried by a design that accounts for uncertainty. The effects of the method used to treat uncertainty on the optimization process are also examined in terms of computational efficiency and reliability of the solutions obtained.
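
The overall structure, a genetic algorithm over discrete ply angles whose fitness is a worst-case buckling load over Monte Carlo samples of the uncertain loads and material, can be sketched as below. The buckling calculation is a hypothetical surrogate standing in for classical lamination theory, so only the optimization skeleton is meaningful.

```python
import random
import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

ANGLES = [0, 45, -45, 90]                     # allowed discrete ply angles

def buckling_load(stacking, loads, stiffness):
    """Hypothetical surrogate for a buckling-load calculation; a real
    implementation would use classical lamination theory."""
    balance = sum(abs(a) for a in stacking) / (90.0 * len(stacking))
    return stiffness * (1.0 - 0.3 * balance) / np.linalg.norm(loads)

def robust_objective(stacking, n_mc=200):
    """Anti-optimization style fitness: the worst buckling load over Monte
    Carlo samples of uncertain in-plane loads and material stiffness."""
    loads = rng.normal([1.0, 0.5], [0.1, 0.05], size=(n_mc, 2))   # hypothetical Nx, Ny
    stiff = rng.normal(1.0, 0.05, size=n_mc)                      # hypothetical stiffness
    return min(buckling_load(stacking, l, s) for l, s in zip(loads, stiff))

# A very small genetic algorithm over 8-ply stacking sequences.
pop = [[random.choice(ANGLES) for _ in range(8)] for _ in range(20)]
for generation in range(30):
    pop.sort(key=robust_objective, reverse=True)
    parents, children = pop[:10], []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 8)                  # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                     # mutation
            child[random.randrange(8)] = random.choice(ANGLES)
        children.append(child)
    pop = parents + children
print("best stacking sequence found:", max(pop, key=robust_objective))
```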


2019 ◽  
Author(s):  
Alexander Olof Savi

Picture education as a long chain of interventions in a self-organizing developmental system. At one extreme, such educational sequences can be identical for each and every student, whereas at the other extreme each sequence may be perfectly tailored to the individual. The latter is what is meant by idiographic education. All educational programs can be seen to lie somewhere between these extremes, and in this book methods are explored that may help increase the tailoring of education. The book covers advances in three fundamental approaches. First, it discusses and illustrates an experimental approach: online randomized experiments, so-called A/B tests, that enable truly double-blind evidence-based educational improvements. Second, it introduces a diagnostic approach: a scalable method that helps identify students' misconceptions. Third and finally, it introduces a theoretical approach: a formal conceptualization of intelligence that permits a novel educational, developmental, and individual perspective, and that may justify and ultimately guide the tailoring of education.

