Bayesian Estimator
Recently Published Documents


TOTAL DOCUMENTS: 171 (FIVE YEARS: 48)
H-INDEX: 18 (FIVE YEARS: 2)

Entropy, 2022, Vol. 24(1), pp. 125
Author(s): Damián G. Hernández, Inés Samengo

Inferring the value of a property of a large stochastic system is a difficult task when the number of samples is insufficient to reliably estimate the probability distribution. The Bayesian estimator of the property of interest requires knowledge of the prior distribution, and in many situations, it is not clear which prior should be used. Several estimators have been developed so far in which the proposed prior is individually tailored for each property of interest; such is the case, for example, for the entropy, the amount of mutual information, or the correlation between pairs of variables. In this paper, we propose a general framework to select priors that is valid for arbitrary properties. We first demonstrate that only certain aspects of the prior distribution actually affect the inference process. We then expand the sought prior as a linear combination of a one-dimensional family of indexed priors, each of which is obtained through a maximum entropy approach with constrained mean values of the property under study. In many cases of interest, only one or very few components of the expansion turn out to contribute to the Bayesian estimator, so it is often valid to keep only a single component. The relevant component is selected by the data, so no handcrafted priors are required. We test the performance of this approximation with a few paradigmatic examples and show that it performs well in comparison to the ad hoc methods previously proposed in the literature. Our method highlights the connection between Bayesian inference and equilibrium statistical mechanics, since the most relevant component of the expansion can be argued to be the one with the right temperature.
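As a schematic rendering of the expansion described in this abstract (the notation below is an editorial sketch, not the authors'): write the property of interest as F(q) for a candidate distribution q; each member of the one-dimensional family is a maximum-entropy prior with the mean of F constrained, and the sought prior is a mixture over the index beta, which plays the role of an inverse temperature.

    P(q) = \int w(\beta)\, P_\beta(q)\, d\beta, \qquad P_\beta(q) \propto \exp\!\big[-\beta\, F(q)\big],
    \langle F \rangle_\beta = -\frac{\partial}{\partial \beta} \log Z(\beta), \qquad Z(\beta) = \int \exp\!\big[-\beta\, F(q)\big]\, dq .

Keeping only the dominant value of beta selected by the data corresponds to the "right temperature" mentioned in the closing sentence.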


Author(s): Alexandre Destere, Charlotte Salmon Gandonnière, Anders Åsberg, Véronique Loustaud‐Ratti, Paul Carrier, ...

2021, pp. 096228022110654
Author(s): Ashwini Joshi, Angelika Geroldinger, Lena Jiricka, Pralay Senchaudhuri, Christopher Corcoran, ...

Poisson regression can be challenging with sparse data, in particular with certain data constellations where maximum likelihood estimates of regression coefficients do not exist. This paper provides a comprehensive evaluation of methods that give finite regression coefficients when maximum likelihood estimates do not exist, including Firth’s general approach to bias reduction, exact conditional Poisson regression, and a Bayesian estimator using weakly informative priors that can be obtained via data augmentation. Furthermore, we include in our evaluation a new proposal for a modification of Firth’s approach, improving its performance for predictions without compromising its attractive bias-correcting properties for regression coefficients. We illustrate the issue of the nonexistence of maximum likelihood estimates with a dataset arising from the recent outbreak of COVID-19 and an example from implant dentistry. All methods are evaluated in a comprehensive simulation study under a variety of realistic scenarios, evaluating their performance for prediction and estimation. To conclude, while exact conditional Poisson regression may be confined to small data sets only, both the modification of Firth’s approach and the Bayesian estimator are universally applicable solutions with attractive properties for prediction and estimation. While the Bayesian method needs specification of prior variances for the regression coefficients, the modified Firth approach does not require any user input.
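For reference, Firth's general bias-reduction approach in its standard form (the generic penalty, not the paper's proposed modification) maximizes a penalized log-likelihood whose penalty is the Jeffreys invariant prior, with I(\beta) the Fisher information of the Poisson model:

    \ell^{*}(\beta) = \ell(\beta) + \tfrac{1}{2}\, \log \left| I(\beta) \right| .

Per the evaluation above, this penalization yields finite coefficients in the sparse-data settings where the ordinary maximum likelihood estimate does not exist; the Bayesian alternative instead places weakly informative priors directly on the regression coefficients.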


2021, Vol. 2021, pp. 1-10
Author(s): Afrah Al-Bossly

The main contribution of this work is the development of a compound LINEX loss function (CLLF) to estimate the shape parameter of the Lomax distribution (LD). Weights are merged into the CLLF to generate a new loss function called the weighted compound LINEX loss function (WCLLF). The WCLLF is then used to estimate the LD shape parameter through Bayesian and expected Bayesian (E-Bayesian) estimation. Subsequently, we discuss six different types of loss functions: the square error loss function (SELF), LINEX loss function (LLF), asymmetric loss function (ASLF), entropy loss function (ENLF), CLLF, and WCLLF. In addition, to check the performance of the proposed loss function, the Bayesian estimator under the WCLLF and the E-Bayesian estimator under the WCLLF are assessed through Monte Carlo simulations. The Bayesian and E-Bayesian estimators based on the proposed loss function are compared with other methods, including maximum likelihood estimation (MLE) and Bayesian and E-Bayesian estimators under the other loss functions. The simulation results show that the Bayes estimator under the WCLLF and the E-Bayesian estimator under the WCLLF proposed in this work have the best performance in estimating the shape parameter, as measured by the smallest mean squared error.
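For context, only the generic forms are shown here; the exact compound and weighted compound constructions are defined in the paper and not reproduced. The standard LINEX loss with shape parameter a and its Bayes estimator are:

    L_a(\hat\theta, \theta) = e^{a(\hat\theta - \theta)} - a(\hat\theta - \theta) - 1, \qquad
    \hat\theta_{\mathrm{LINEX}} = -\frac{1}{a}\, \ln \mathbb{E}\!\left[ e^{-a\theta} \mid \text{data} \right].

The compound and weighted variants described in the abstract build on this form by combining several such components and attaching weights to them.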


Author(s): Hilton Tnunay, Okechi Onuoha, Zhengtao Ding

Author(s): Lucas Almeida Andrade, Wandklebson Silva da Paz, Alanna G. C. Fontes Lima, Damião da Conceição Araújo, Andrezza M. Duque, ...

Currently, the world is facing a severe pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Although the WHO has recommended preventive measures to limit its spread, Brazil has neglected most of these recommendations, and consequently, the country has the second largest number of deaths from COVID-19 worldwide. In addition, recent studies have shown the relationship between socioeconomic inequalities and the risk of severe COVID-19 infection. Herein, we aimed to assess the spatiotemporal distribution of mortality and lethality rates of COVID-19 in a region of high social vulnerability in Brazil (Northeast region) during the first year of the pandemic. A segmented log-linear regression model was applied to assess temporal trends of mortality and case fatality rate (CFR), overall and according to the social vulnerability index (SVI). The Local Empirical Bayesian Estimator and the Global Moran Index were used for spatial analysis. We conducted a retrospective space–time scan to map clusters at high risk of death from COVID-19. A total of 66,358 COVID-19–related deaths were reported during this period. The mortality rate was 116.2/100,000 inhabitants, and the CFR was 2.3%. Nevertheless, the CFR was > 7.5% in 27 municipalities (1.5%). We observed an increasing trend of deaths in this region (AMCP = 18.2; P = 0.001). Increasing trends were also observed in municipalities with high (N = 859) and very high SVI (N = 587). We identified two significant spatiotemporal clusters of deaths by COVID-19 in this Brazilian region (P = 0.001), and most high-risk municipalities were on the coastal strip of the region. Taken together, our analyses demonstrate that the pandemic has been responsible for a large number of deaths in Northeast Brazil, with clusters at high risk of mortality mainly in municipalities on the coastline and those with high SVI.
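The Global Moran Index used above measures spatial autocorrelation of area-level rates. A minimal, self-contained sketch follows; the adjacency matrix and rates are hypothetical values for illustration, not the study's data or GIS pipeline.

    import numpy as np

    def morans_i(x, w):
        """Global Moran's I for values x and an n-by-n spatial weight matrix w."""
        x = np.asarray(x, dtype=float)
        w = np.asarray(w, dtype=float)
        n = x.size
        z = x - x.mean()
        num = np.sum(w * np.outer(z, z))   # sum_ij w_ij * z_i * z_j
        den = np.sum(z ** 2)
        return (n / w.sum()) * (num / den)

    # Hypothetical example: 4 areas with rook adjacency and mortality rates per 100,000
    rates = [116.2, 80.5, 150.0, 95.3]
    W = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])
    print(round(morans_i(rates, W), 3))

Values near +1 indicate clustering of similar rates in neighboring municipalities, which is what the cluster analysis above investigates formally.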


Author(s): Tom Nanga, Jean-Baptiste Woillard, Annick Rousseau, Pierre Marquet, Aurélie Prémaud

Background: Mycophenolate mofetil (MMF) is the most widely used second-line agent in auto-immune hepatitis (AIH). It is generally titrated up to patient response and continued for at least two years following complete liver enzyme normalization. However, in this maintenance phase, individual dose adjustment to reach the mycophenolic acid (MPA) exposure with the best benefit-risk probability may avoid adverse outcomes. The aim of the present study was to develop population pharmacokinetic (popPK) models and Maximum A Posteriori Bayesian estimators (MAP-BEs) to estimate the MPA inter-dose area under the curve (AUC0-12h) in AIH patients administered MMF, using nonlinear mixed effect modelling. Methods: We analysed 50 MPA PK profiles from 34 different patients, together with demographic, clinical, and laboratory test data. The median number of samples per profile, immediately preceding and following the morning MMF dose, was 7 [4 – 10]. PopPK modeling was performed using parametric, top-down, nonlinear mixed effect modelling with NONMEM 7.3. MAP-BEs were developed based on the best popPK model and the best limited sampling strategy (LSS) selected among several candidates. Results: The pharmacokinetic data were best described by a two-compartment model with an Erlang distribution describing the absorption phase and a proportional error model. The best MAP-BE relied on the LSS with samples at 0.33, 1 and 3 hours after mycophenolate mofetil dose administration and was very accurate (bias = 5.6%) and precise (RMSE < 20%). Conclusion: The precise and accurate Bayesian estimator developed in this study for AIH patients on MMF can be used to improve the therapeutic management of these patients.
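A MAP Bayesian estimator of this kind minimizes the sum of a residual-error term and a penalty pulling the individual parameters toward the population values. The sketch below is a deliberately simplified illustration under invented assumptions: it uses a one-compartment oral model and hypothetical prior and observation values, whereas the published estimator relies on the two-compartment Erlang-absorption NONMEM model.

    import numpy as np
    from scipy.optimize import minimize

    # Simplified one-compartment oral model (the published MAP-BE uses a
    # two-compartment model with Erlang absorption, not reproduced here).
    def conc(t, ka, cl, v, dose=1000.0):
        ke = cl / v
        return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    # Hypothetical population (prior) values: ka (1/h), CL (L/h), V (L)
    theta_pop = np.array([1.5, 10.0, 50.0])
    omega = np.array([0.5, 0.3, 0.3])      # assumed between-subject SDs on the log scale
    sigma = 0.2                            # assumed proportional residual error

    t_obs = np.array([0.33, 1.0, 3.0])     # limited sampling strategy time points (h)
    c_obs = np.array([9.0, 14.0, 11.0])    # hypothetical observed concentrations (mg/L)

    def neg_log_posterior(eta):
        theta = theta_pop * np.exp(eta)            # individual PK parameters
        pred = conc(t_obs, *theta)
        sd = sigma * pred                          # proportional error model
        loglik = -0.5 * np.sum(((c_obs - pred) / sd) ** 2) - np.sum(np.log(sd))
        logprior = -0.5 * np.sum((eta / omega) ** 2)
        return -(loglik + logprior)

    eta_map = minimize(neg_log_posterior, np.zeros(3)).x
    ka, cl, v = theta_pop * np.exp(eta_map)
    print("MAP AUC0-12h estimate (dose/CL at steady state):", 1000.0 / cl)

In practice the AUC0-12h is computed from the full individual parameter set of the selected popPK model; dose/CL is only the steady-state shortcut used here for brevity.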


2021
Author(s): Javaid Ahmad Reshi, Bilal Ahmad Para, Shahzad Ahmad Bhat

This paper deals with the estimation of the parameter of the Weighted Maxwell-Boltzmann distribution using classical and Bayesian paradigms. Under the classical approach, we estimate the rate parameter using the maximum likelihood estimator. In the Bayesian paradigm, we primarily study the Bayes' estimator of the parameter of the Weighted Maxwell-Boltzmann distribution under the extended Jeffrey's prior, gamma, and exponential prior distributions, assuming different loss functions. The extended Jeffrey's prior covers a wide spectrum of priors from which to obtain Bayes' estimates of the parameter; Jeffrey's prior and Hartigan's prior are particular cases. A comparative study has been done between the MLE and the estimates under different loss functions (SELF, Al-Bayyati's, Stein's, and the precautionary loss function). From the results, we observe that in most cases the Bayesian estimator under the new loss function (Al-Bayyati's loss function) has the smallest mean squared error for both priors, i.e., Jeffrey's prior and the extension of Jeffrey's prior. Moreover, as the sample size increases, the MSE decreases quite significantly. The estimators are compared in terms of mean squared error (MSE), computed using the programming language R. Also, two real-life data sets are considered for model comparison between special cases of the Weighted Maxwell-Boltzmann distribution in terms of fit.
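For reference, the generic Bayes estimators under the loss functions named above take the standard textbook forms below, with expectations taken over the posterior of the parameter; the paper's weighted-distribution specifics are not reproduced here.

    \text{SELF: } L = (\hat\theta - \theta)^2, \qquad \hat\theta_B = \mathbb{E}[\theta \mid x];
    \text{Al-Bayyati: } L = \theta^{c}(\hat\theta - \theta)^2, \qquad \hat\theta_B = \frac{\mathbb{E}[\theta^{c+1} \mid x]}{\mathbb{E}[\theta^{c} \mid x]};
    \text{Precautionary: } L = \frac{(\hat\theta - \theta)^2}{\hat\theta}, \qquad \hat\theta_B = \sqrt{\mathbb{E}[\theta^{2} \mid x]}.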


Webology, 2021, Vol. 18 (Special Issue 04), pp. 1045-1055
Author(s): Suparman, Yahya Hairun, Idrus Alhaddad, Tedy Machmud, Hery Suharna, ...

The application of the Bootstrap-Metropolis-Hastings algorithm is limited to fixed-dimension models. In various fields, data often follow a variable-dimension model. The Laplacian autoregressive (AR) model is a variable-dimension model, so the Bootstrap-Metropolis-Hastings algorithm cannot be applied. This article aims to develop a Bootstrap reversible jump Markov Chain Monte Carlo (MCMC) algorithm to estimate the Laplacian AR model. The parameters of the Laplacian AR model were estimated using a Bayesian approach. The posterior distribution has a complex structure, so the Bayesian estimator cannot be calculated analytically. The Bootstrap reversible jump MCMC algorithm was applied to calculate the Bayes estimator. This study provides a procedure for estimating the parameters of the Laplacian AR model. Algorithm performance was tested using simulation studies. Furthermore, the algorithm is applied to the finance sector to predict stock prices on the stock market. In general, this study can be useful for decision makers in predicting future events. The novelty of this study is the combination of the bootstrap algorithm and the reversible jump MCMC algorithm. The Bootstrap reversible jump MCMC algorithm is useful especially when the data set is large and follows a variable-dimension model. The study can be extended to the Laplacian Autoregressive Moving Average (ARMA) model.
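To make the sampling step concrete, here is a minimal random-walk Metropolis-Hastings sketch for a Laplacian AR model of fixed order p = 1 with flat priors. The algorithm developed in the article additionally includes bootstrap resampling and reversible-jump moves that allow the chain to change the AR order; those steps are not implemented in this sketch, and all numerical settings are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a Laplacian AR(1) series (hypothetical data)
    n, phi_true, b = 300, 0.6, 1.0
    eps = rng.laplace(scale=b, size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi_true * y[t - 1] + eps[t]

    def log_lik(phi, scale):
        """Laplace log-likelihood of the AR(1) residuals."""
        resid = y[1:] - phi * y[:-1]
        return -len(resid) * np.log(2 * scale) - np.abs(resid).sum() / scale

    # Random-walk Metropolis-Hastings over (phi, scale) with flat priors, fixed order p = 1
    phi, scale = 0.0, 1.0
    samples = []
    for _ in range(5000):
        phi_new = phi + rng.normal(scale=0.05)
        scale_new = abs(scale + rng.normal(scale=0.05))   # reflect at zero (symmetric proposal)
        if np.log(rng.uniform()) < log_lik(phi_new, scale_new) - log_lik(phi, scale):
            phi, scale = phi_new, scale_new
        samples.append(phi)

    # Posterior mean approximates the Bayes estimator under squared error loss
    print(np.mean(samples[1000:]))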


Entropy, 2021, Vol. 23(10), pp. 1256
Author(s): Abdullah M. Almarashi, Ali Algarni, Amal S. Hassan, Ahmed N. Zaky, Mohammed Elgarhy

Dynamic cumulative residual (DCR) entropy is a valuable randomness metric that may be used in survival analysis. The Bayesian estimator of the DCR Rényi entropy (DCRRéE) for the Lindley distribution using the gamma prior is discussed in this article. Using a number of selective loss functions, the Bayesian estimator and the Bayesian credible interval are calculated. In order to compare the theoretical results, a Monte Carlo simulation experiment is proposed. Generally, we note that for a small true value of the DCRRéE, the Bayesian estimates under the linear exponential loss function are favorable compared to the others based on this simulation study. Furthermore, for large true values of the DCRRéE, the Bayesian estimate under the precautionary loss function is more suitable than the others. The Bayesian estimates of the DCRRéE work well when increasing the sample size. Real-world data is evaluated for further clarification, allowing the theoretical results to be validated.
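For completeness, the quantity being estimated is usually defined as follows (the standard form of the dynamic cumulative residual Rényi entropy, with notation chosen here rather than taken from the paper), together with the Lindley survival function for a single parameter θ:

    \xi_{\alpha}(t) = \frac{1}{1-\alpha}\, \log \int_{t}^{\infty} \left( \frac{\bar F(x)}{\bar F(t)} \right)^{\alpha} dx, \qquad \alpha > 0,\ \alpha \neq 1,
    \bar F(x) = \frac{\theta + 1 + \theta x}{\theta + 1}\, e^{-\theta x} \quad \text{(Lindley distribution)}.

The Bayesian estimates above are presumably obtained by evaluating this expression over the posterior of θ under the gamma prior and applying the chosen loss function.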

