informative prior distributions
Recently Published Documents


TOTAL DOCUMENTS: 29 (five years: 11)

H-INDEX: 6 (five years: 2)

2021 ◽  
Author(s):  
Camila Ferreira Azevedo ◽  
Cynthia Barreto ◽  
Matheus Suela ◽  
Moysés Nascimento ◽  
Antônio Carlos Júnior ◽  
...  

Abstract Among the multi-trait models used to jointly study several traits and environments, the Bayesian framework has been a preferred tool for fitting more complex and biologically realistic models. In most studies using the Bayesian approach, non-informative prior distributions are adopted, yet the Bayesian approach tends to produce more accurate estimates when informative prior distributions are used. The present study evaluated the efficiency and applicability of multi-trait multi-environment (MTME) models under a Bayesian framework, using a strategy for eliciting informative prior distributions from previous rice data. The study involved data pertaining to rice genotypes in three environments and five agricultural years (2010/2011 until 2014/2015) for the following traits: grain yield (GY), flowering in days (FLOR) and plant height (PH). Variance components and genetic and non-genetic parameters were estimated by the Bayesian method. In general, informative prior distributions in Bayesian MTME models provided higher estimates of heritability and variance components, as well as shorter highest posterior density (HPD) intervals, compared to the corresponding analyses with non-informative prior distributions. The use of more informative prior distributions makes it possible to detect genetic correlations between traits, which cannot be achieved with non-informative prior distributions. Therefore, the mechanism presented here for updating knowledge through the elicitation of an informative prior distribution can be efficiently applied in rice genetic selection.
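The qualitative effect the abstract reports (informative priors yielding shorter HPD intervals) can be illustrated with a minimal conjugate Beta-Binomial sketch. The counts and the prior pseudo-counts below are entirely hypothetical, not taken from the rice study:

```python
import math

def beta_posterior(successes, failures, a, b):
    """Posterior is Beta(a + s, b + f) for Binomial data under a Beta(a, b) prior."""
    return a + successes, b + failures

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Hypothetical data: 12 "successes" out of 30 trials
s, f = 12, 18

# Flat (non-informative) prior vs. an informative prior elicited from earlier data
flat = beta_posterior(s, f, 1, 1)
informative = beta_posterior(s, f, 16, 24)  # pseudo-counts standing in for prior knowledge

# The informative prior concentrates the posterior, so interval estimates shrink
assert beta_sd(*informative) < beta_sd(*flat)
```

The narrower posterior standard deviation is what drives the shorter HPD intervals the study observes, provided the elicited prior is compatible with the data.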


2021 ◽  
Vol 12 ◽  
Author(s):  
Christoph König

Specifying accurate informative prior distributions is a question of carefully selecting the studies that comprise the body of comparable background knowledge. Psychological research, however, consists of studies conducted under different circumstances, with different samples and varying instruments. Thus, the results of previous studies are heterogeneous, and not all available results can and should contribute equally to an informative prior distribution. This implies a necessary weighting of background information based on the similarity of the previous studies to the focal study at hand. Current approaches that account for heterogeneity by weighting informative prior distributions, such as the power prior and the meta-analytic predictive prior, are either not easily accessible or incomplete. To complicate matters further, in the context of Bayesian multiple regression models there are no methods available for quantifying the similarity of a given body of background knowledge to the focal study at hand. Consequently, the purpose of this study is threefold. We first present a novel method to combine the aforementioned sources of heterogeneity in the similarity measure ω. This method combines a propensity-score approach, which assesses the similarity of samples, with random- and mixed-effects meta-analytic models, which quantify the heterogeneity in outcomes and study characteristics. Second, we show how to use the similarity measure ω as a weight for informative prior distributions on the substantive parameters (regression coefficients) in Bayesian multiple regression models. Third, we investigate the performance and behavior of the similarity-weighted informative prior distribution in a comprehensive simulation study, where it is compared to the normalized power prior and the meta-analytic predictive prior.
The similarity measure ω and the similarity-weighted informative prior distribution, as the primary results of this study, provide applied researchers with the means to specify accurate informative prior distributions.
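The weighting idea can be sketched in a conjugate normal setting: scale the historical prior's precision by a similarity weight in [0, 1], in the spirit of a power prior. This is a generic illustration, not the paper's ω construction; the coefficient values and variances below are hypothetical:

```python
def weighted_normal_prior(mu_hist, var_hist, omega, vague_var=1e6):
    """Similarity-weighted prior: scale the historical precision by omega in [0, 1].
    omega = 0 falls back to a vague prior; omega = 1 uses the background study fully."""
    if omega <= 0:
        return 0.0, vague_var
    return mu_hist, var_hist / omega  # down-weighting inflates the prior variance

def normal_posterior(prior_mu, prior_var, xbar, se2):
    """Conjugate normal update for a coefficient estimate with known error variance."""
    prec = 1 / prior_var + 1 / se2
    return (prior_mu / prior_var + xbar / se2) / prec, 1 / prec

# Hypothetical focal-study estimate (0.3) vs. a historical coefficient (0.8)
full_w, _ = normal_posterior(*weighted_normal_prior(0.8, 0.04, omega=1.0), xbar=0.3, se2=0.04)
low_w, _ = normal_posterior(*weighted_normal_prior(0.8, 0.04, omega=0.1), xbar=0.3, se2=0.04)

# A low similarity weight lets the focal data dominate the posterior
assert low_w < full_w
```

With equal prior and data precision and omega = 1, the posterior mean sits halfway between the two estimates; lowering omega pulls it toward the focal data.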


Author(s):  
Liam F. Beiser-McGrath

Abstract When separation is a problem in binary dependent variable models, many researchers use Firth's penalized maximum likelihood in order to obtain finite estimates (Firth, 1993; Zorn, 2005; Rainey, 2016). In this paper, I show that this approach can lead to inferences in the opposite direction of the separation when the number of observations is sufficiently large and both the dependent and independent variables are rare events. As large datasets with rare events are frequently used in political science, such as dyadic data measuring interstate relations, a lack of awareness of this problem may lead to inferential issues. Simulations and an empirical illustration show that the use of independent “weakly-informative” prior distributions centered at zero, for example the Cauchy prior suggested by Gelman et al. (2008), can avoid this issue. More generally, the results caution researchers to be aware of how the choice of prior interacts with the structure of their data when estimating models in the presence of separation.
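The mechanics of a weakly informative prior under separation can be shown with a toy one-predictor logistic model. Under complete separation the MLE of the slope diverges, but adding a log Cauchy(0, 2.5) prior (the scale suggested by Gelman et al.) gives a finite posterior mode. The dataset below is fabricated for illustration:

```python
import math

# Toy dataset with complete separation: y = 1 exactly when x > 0
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

def penalized_loglik(beta, scale=2.5):
    """Logistic log-likelihood plus the log density of a Cauchy(0, scale) prior."""
    ll = sum(y * beta * x - math.log(1 + math.exp(beta * x)) for x, y in zip(xs, ys))
    log_prior = -math.log(math.pi * scale * (1 + (beta / scale) ** 2))
    return ll + log_prior

# Without the prior the likelihood increases forever in beta; with it, the
# posterior mode is finite. A coarse grid search suffices for one dimension.
betas = [i / 100 for i in range(0, 2001)]  # beta in [0, 20]
beta_map = max(betas, key=penalized_loglik)
assert beta_map < 10  # finite, moderate estimate despite separation
```

The heavy Cauchy tails penalize extreme slopes only logarithmically, which is why the prior regularizes without strongly biasing well-identified coefficients.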


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 929
Author(s):  
Mohammad Reza Mahmoudi ◽  
Mohsen Maleki ◽  
Dumitru Baleanu ◽  
Vu-Thanh Nguyen ◽  
Kim-Hung Pho

In this paper, a Bayesian analysis of finite mixture autoregressive (MAR) models based on the assumption of scale mixtures of skew-normal (SMSN) innovations (called SMSN–MAR) is considered. Like the celebrated SMSN distributions, this model is not unduly sensitive to outliers, because the proposed MAR model covers lightly/heavily tailed symmetric and asymmetric innovations. This model allows us to draw robust inferences for some non-linear time series with skewness and heavy tails. Classical inference for mixture models has some problematic issues that can be resolved using Bayesian approaches. The stochastic representation of the SMSN family allows us to develop a Bayesian analysis considering informative prior distributions in the proposed model. Some simulations and real data are also presented to illustrate the usefulness of the proposed models.
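The stochastic representation mentioned in the abstract can be sketched for one SMSN member, a skew-t innovation: a skew-normal draw rescaled by a Gamma mixing variable. The AR coefficient, skewness and degrees of freedom below are arbitrary choices for illustration, not the paper's fitted values:

```python
import math
import random

random.seed(1)

def skew_normal(loc, scale, alpha):
    """Stochastic representation: X = loc + scale*(delta*|Z0| + sqrt(1-delta^2)*Z1)."""
    delta = alpha / math.sqrt(1 + alpha ** 2)
    z0, z1 = random.gauss(0, 1), random.gauss(0, 1)
    return loc + scale * (delta * abs(z0) + math.sqrt(1 - delta ** 2) * z1)

def smsn_innovation(alpha=3.0, df=4.0):
    """Scale mixture of skew-normals (skew-t case): a Gamma(df/2, rate df/2)
    mixing variable w rescales the skew-normal by 1/sqrt(w)."""
    w = random.gammavariate(df / 2, 2 / df)  # Python's gammavariate takes a scale
    return skew_normal(0.0, 1.0 / math.sqrt(w), alpha)

# A simple AR(1) series driven by heavy-tailed, skewed innovations
x, series = 0.0, []
for _ in range(500):
    x = 0.6 * x + smsn_innovation()
    series.append(x)
```

Sampling innovations this way is exactly what makes Gibbs-style Bayesian fitting of SMSN models tractable: conditional on w, the innovation is skew-normal.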


2020 ◽  
Author(s):  
Falk Heße ◽  
Lars Isachsen ◽  
Sebastian Müller ◽  
Attinger Sabine

Characterizing the subsurface of our planet is an important task. Yet compared to many other fields, the characterization of the subsurface is always burdened by large uncertainties. These uncertainties are caused by the general lack of data and the large spatial variability of many subsurface properties. Due to their comparably low costs, pumping tests are regularly applied for the characterization of groundwater aquifers. The classic approach is to identify the parameters of some conceptual subsurface model by fitting an analytical expression to the measured drawdown. One drawback of classic pumping-test analysis techniques is the assumption of a single representative parameter value for the whole aquifer. Consequently, they cannot account for spatial heterogeneities. To address this limitation, a number of studies have proposed extensions of both Thiem's and Theis' formulae. Using these extensions, it is possible to estimate geostatistical parameters like the mean, variance and correlation length of a heterogeneous conductivity field from pumping tests.

While these methods have demonstrated their ability to estimate such geostatistical parameters, their data worth has rarely been investigated within a Bayesian framework. This is particularly relevant since recent developments in the field of Bayesian inference facilitate the derivation of informative prior distributions for these parameters. Here, informative means that the prior is based on currently available background data and therefore may be able to substantially influence the posterior distribution. If this is the case, the actual data worth of pumping tests, as well as of other subsurface characterization methods, may be lower than assumed.

To investigate this possibility, we implemented a series of numerical pumping tests in a synthetic model based on the Herten aquifer. Using informative prior distributions, we derived the posterior distributions over the mean, variance and correlation length of the synthetic heterogeneous conductivity field. Our results show that for the mean and variance, we already get a substantially lowered data worth for pumping tests when using informative prior distributions, whereas the estimation of the correlation length remains mostly unaffected. These results suggest that with an increasing amount of background data, the data worth of pumping tests may fall even lower, meaning that more informative techniques for subsurface characterization will be needed in the future.
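The notion that an informative prior lowers the data worth of a test can be made concrete with a generic conjugate sketch, where data worth is taken as the prior-to-posterior variance reduction. This is a schematic illustration, not the study's aquifer model; all numbers are hypothetical:

```python
def posterior_var(prior_var, n, noise_var):
    """Posterior variance for a normal mean after n observations (conjugate update)."""
    return 1 / (1 / prior_var + n / noise_var)

def data_worth(prior_var, n, noise_var):
    """Variance reduction attributable to the data: prior var minus posterior var."""
    return prior_var - posterior_var(prior_var, n, noise_var)

# Same experiment, two states of prior knowledge
vague = data_worth(prior_var=100.0, n=5, noise_var=1.0)
informative = data_worth(prior_var=0.5, n=5, noise_var=1.0)

# When the prior already constrains the parameter, the same data buy much less
assert informative < vague
```

This is the pattern the abstract describes for the mean and variance of the conductivity field: the more the background data already pin down a parameter, the less a pumping test can add.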


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Camille Aupiais ◽  
Corinne Alberti ◽  
Thomas Schmitz ◽  
Olivier Baud ◽  
Moreno Ursino ◽  
...  

Abstract Background When conducting a Phase-III trial, regulatory agencies and investigators may want reliable information about rare but serious safety outcomes during the trial. Bayesian non-inferiority approaches have been developed, but they commonly rely on historical placebo-controlled data to define the margin, depend on a single final analysis, and provide no recommendation for defining the prespecified decision threshold. In this study, we propose a non-inferiority Bayesian approach for sequential monitoring of rare dichotomous safety events that incorporates experts' opinions on margins. Methods A Bayesian decision criterion was constructed to monitor four safety events during a non-inferiority trial conducted on pregnant women at risk of premature delivery. Based on experts' elicitation, margins were built using mixtures of beta distributions that preserve the variability among experts. Non-informative and informative prior distributions and several decision thresholds were evaluated through an extensive sensitivity analysis. The parameters were selected so as to keep two misclassification rates under prespecified levels: trials that wrongly concluded an unacceptable excess in the experimental arm, and the converse. Results The opinions of 44 experts were elicited about each event's non-inferiority margin and relative severity. In the illustrative trial, the maximal misclassification rates were adapted to each event's severity. Using those maximal rates, several priors gave good results and one of them was retained for all events. Each event was associated with a specific choice of decision threshold, allowing differences in prevalence, margins and severity to be taken into account. Our decision rule has been applied to a simulated dataset.
Conclusions In settings where evidence is lacking and where rare but serious safety events have to be monitored during non-inferiority trials, we propose a methodology that avoids an arbitrary margin choice and helps in the decision making at each interim analysis. This decision rule is parametrized to account for the rarity and relative severity of the events and requires a strong collaboration between physicians and trial statisticians, for the benefit of all. This Bayesian approach could be applied as a complement to the frequentist analysis, so that both Data Safety Monitoring Boards and investigators can benefit from it.
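A margin built as an equal-weight mixture of per-expert Beta distributions, as the abstract describes, can be sketched with the standard library. The (alpha, beta) pairs below are invented stand-ins for elicited opinions, not the 44 experts' actual answers:

```python
import random

random.seed(0)

# Each expert's elicited margin as a Beta distribution; the equal-weight
# mixture preserves between-expert variability instead of averaging it away.
experts = [(2.0, 38.0), (3.0, 57.0), (1.5, 28.5)]  # hypothetical (alpha, beta) pairs

def sample_margin():
    """One draw from the mixture: pick an expert, then draw from their Beta."""
    a, b = random.choice(experts)
    return random.betavariate(a, b)

draws = [sample_margin() for _ in range(5000)]
mixture_mean = sum(draws) / len(draws)  # roughly 0.05 for these components
```

In a sequential rule, draws like these let the non-inferiority probability be evaluated against the full elicited margin distribution at each interim analysis, rather than against a single fixed number.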


2019 ◽  
Vol 35 (3) ◽  
pp. 653-681 ◽  
Author(s):  
Joseph W. Sakshaug ◽  
Arkadiusz Wiśniowski ◽  
Diego Andres Perez Ruiz ◽  
Annelies G. Blom

Abstract Carefully designed probability-based sample surveys can be prohibitively expensive to conduct. As such, many survey organizations have shifted away from using expensive probability samples in favor of less expensive, but possibly less accurate, nonprobability web samples. However, their lower costs and abundant availability make them a potentially useful supplement to traditional probability-based samples. We examine this notion by proposing a method of supplementing small probability samples with nonprobability samples using Bayesian inference. We consider two semi-conjugate informative prior distributions for linear regression coefficients based on nonprobability samples, one accounting for the distance between maximum likelihood coefficients derived from parallel probability and nonprobability samples, and the second depending on the variability and size of the nonprobability sample. The method is evaluated in comparison with a reference prior through simulations and a real-data application involving multiple probability and nonprobability surveys fielded simultaneously using the same questionnaire. We show that the method reduces the variance and mean-squared error (MSE) of coefficient estimates and model-based predictions relative to probability-only samples. Using actual and assumed cost data, we also show that the method can yield substantial cost savings (up to 55%) for a fixed MSE.
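The core idea, a nonprobability estimate serving as an informative normal prior for a probability-sample coefficient, reduces to a precision-weighted combination in the conjugate case. This is a simplified sketch, not the paper's exact semi-conjugate priors; the `lam` down-weighting knob and all numbers are hypothetical:

```python
def supplement(beta_np, var_np, beta_p, var_p, lam=1.0):
    """Posterior for a regression coefficient when the nonprobability estimate
    (beta_np, var_np) acts as a normal prior, optionally down-weighted by lam
    (standing in for adjustments based on sample size or estimate distance)."""
    prior_var = var_np / lam          # lam < 1 weakens the nonprobability prior
    prec = 1 / prior_var + 1 / var_p  # combine precisions
    post_mean = (beta_np / prior_var + beta_p / var_p) / prec
    return post_mean, 1 / prec

# Hypothetical estimates: a large nonprobability sample and a small probability sample
mean_w, var_w = supplement(beta_np=1.2, var_np=0.02, beta_p=1.0, var_p=0.05)

# The supplemented posterior variance is smaller than the probability-only variance,
# which is the variance/MSE reduction the abstract reports
assert var_w < 0.05
```

The risk, which motivates the paper's distance-based prior, is that a biased nonprobability estimate pulls the posterior mean away from the truth, so the weight should shrink when the two samples disagree.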


Author(s):  
A. C. Raby ◽  
A. Antonini ◽  
A. Pappas ◽  
D. T. Dassanayake ◽  
J. M. W. Brownjohn ◽  
...  

Lighthouses situated on exposed rocky outcrops warn mariners of the dangers that lurk beneath the waves. They were first constructed when approaches to wave loading and structural response were relatively unsophisticated, essentially learning from previous failures. Here, we chart the evolution of lighthouses on the Wolf Rock, situated between Land's End and the Isles of Scilly in the UK. The first empirical approaches are described, followed by design aspects of the present tower, informed by innovations developed on other rocky outcrops. We focus on a particular development associated with the automation of lighthouses: the helideck platform. The design concept is described and the structure is then scrutinized for future survivability, using the latest structural modelling techniques applied to the entire lighthouse and helideck. Model validation data were obtained through a complex logistical field operation and experimental modal analysis. Extreme wave loading for the model required the identification of the 250-year return period wave using a Bayesian method with informative prior distributions, for two different scenarios (2017 and 2067). The structural models predict responses of the helideck to wave loading characterized by differential displacements of 0.093 m (2017) and 0.115 m (2067), with associated high tension forces and plastic strain. This article is part of the theme issue ‘Environmental loading of heritage structures’.
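A return-period wave like the 250-year event is a quantile of an extreme-value fit to annual maxima. As a generic sketch (the paper's Bayesian fit with informative priors is more involved, and the Gumbel parameters below are invented, not Wolf Rock values):

```python
import math

def gumbel_return_level(mu, beta, T):
    """Return level x_T for a T-year return period under a Gumbel(mu, beta) fit
    to annual maxima: solves P(X > x_T) = 1/T per year."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

# Hypothetical location/scale (e.g. posterior means) for significant wave height in metres
h250 = gumbel_return_level(mu=9.0, beta=1.2, T=250)
```

In a Bayesian treatment, mu and beta carry posterior distributions, so the return level inherits a credible interval; informative priors on the Gumbel parameters narrow that interval, which is where elicited background knowledge enters.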

