Posterior distribution of nondifferentiable functions

2016 ◽  
Author(s):  
Toru Kitagawa ◽  
Jonathan Payne ◽  
Jose Luis Montiel Olea


2019 ◽  
Author(s):  
Amilcar Velez ◽  
Jonathan Payne ◽  
Jose Luis Montiel Olea ◽  
Toru Kitagawa


2020 ◽  
Vol 217 (1) ◽  
pp. 161-175
Author(s):  
Toru Kitagawa ◽  
José Luis Montiel Olea ◽  
Jonathan Payne ◽  
Amilcar Velez


Author(s):  
Cecilia Viscardi ◽  
Michele Boreale ◽  
Fabio Corradi

Abstract: We consider the problem of sample degeneracy in Approximate Bayesian Computation. It arises when proposed values of the parameters, once given as input to the generative model, rarely lead to simulations resembling the observed data and are hence discarded. Such "poor" parameter proposals do not contribute at all to the representation of the parameter's posterior distribution. This leads to a very large number of required simulations and/or a waste of computational resources, as well as to distortions in the computed posterior distribution. To mitigate this problem, we propose an algorithm, referred to as the Large Deviations Weighted Approximate Bayesian Computation algorithm, where, via Sanov's Theorem, strictly positive weights are computed for all proposed parameters, thus avoiding the rejection step altogether. In order to derive a computable asymptotic approximation from Sanov's result, we adopt the information-theoretic "method of types" formulation of Large Deviations theory, thus restricting our attention to models for i.i.d. discrete random variables. Finally, we experimentally evaluate our method through a proof-of-concept implementation.



2020 ◽  
pp. 1-11
Author(s):  
Hui Wang ◽  
Huang Shiwang

The various parts of the traditional financial supervision and management system can no longer meet current needs, and further improvement is urgently needed. In this paper, low-frequency data are treated as high-frequency data with missing observations, and a mixed-frequency VAR model is adopted. To overcome the problems caused by the large number of parameters in the VAR model, this paper adopts a Bayesian estimation method based on the Minnesota prior to obtain the posterior distribution of each VAR parameter. Moreover, this paper uses Kalman filtering and Kalman smoothing to obtain the posterior distribution of the latent state variables. Then, given the posterior distribution of the VAR parameters and the posterior distribution of the latent state variables, this paper uses Gibbs sampling to estimate the mixed-frequency Bayesian vector autoregressive model and the state variables. Finally, the article studies the influence of Internet finance on monetary policy with examples. The results show that the proposed method is effective.
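The conjugate update at the core of a Minnesota-prior VAR can be sketched equation by equation; the prior means, variances, and the `minnesota_posterior` helper below are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def minnesota_posterior(Y, X, b0, V0, sigma2):
    """Conjugate normal posterior for one VAR equation y = X b + e,
    e ~ N(0, sigma2): with prior b ~ N(b0, V0) (Minnesota-style shrinkage),
    the posterior is b | Y ~ N(b_post, V_post)."""
    V0_inv = np.linalg.inv(V0)
    V_post = np.linalg.inv(X.T @ X / sigma2 + V0_inv)
    b_post = V_post @ (X.T @ Y / sigma2 + V0_inv @ b0)
    return b_post, V_post

# toy VAR(1) equation with two regressors (own lag and a cross lag)
rng = np.random.default_rng(1)
T = 300
X = rng.standard_normal((T, 2))
b_true = np.array([0.8, 0.1])
Y = X @ b_true + 0.5 * rng.standard_normal(T)

# Minnesota-style prior: own lag centered on a persistent value,
# cross lag shrunk toward zero with a tighter prior variance
b0 = np.array([1.0, 0.0])
V0 = np.diag([0.25, 0.05])
b_post, V_post = minnesota_posterior(Y, X, b0, V0, sigma2=0.25)
```

In a full Gibbs sampler, this coefficient draw would alternate with Kalman-smoother draws of the latent high-frequency states, as the abstract describes.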



2021 ◽  
Author(s):  
John K. Kruschke

In most applications of Bayesian model comparison or Bayesian hypothesis testing, the results are reported in terms of the Bayes factor only, not in terms of the posterior probabilities of the models. Posterior model probabilities are not reported because researchers are reluctant to declare prior model probabilities, which in turn stems from uncertainty in the prior. Fortunately, Bayesian formalisms are designed to embrace prior uncertainty, not ignore it. This article provides a novel derivation of the posterior distribution of model probability, and shows many examples. The posterior distribution is useful for making decisions that take into account the uncertainty of the posterior model probability. Benchmark Bayes factors are provided for a spectrum of priors on model probability. R code is posted at https://osf.io/36527/. This framework and tools will improve interpretation and usefulness of Bayes factors in all their applications.
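The idea of propagating prior uncertainty through the Bayes-factor update can be sketched as follows; the Beta(2, 2) prior and the Bayes-factor value are illustrative choices, and this is a generic sketch rather than the article's exact derivation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Bayes factor BF_12 = p(data | M1) / p(data | M2), assumed known
bayes_factor = 10.0

# uncertainty about the prior model probability P(M1), expressed as a Beta prior
pi = rng.beta(2.0, 2.0, size=100_000)

# posterior model probability for each draw of pi, via Bayes' rule
post_prob = bayes_factor * pi / (bayes_factor * pi + (1 - pi))

# summarize the induced posterior distribution of model probability
median = float(np.median(post_prob))
interval = np.percentile(post_prob, [2.5, 97.5])
```

Rather than a single posterior model probability, the result is a distribution whose spread reflects how uncertain the prior model probability was.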



2012 ◽  
Vol 2 (1) ◽  
pp. 7 ◽  
Author(s):  
Andrzej Kijko

This work focuses on the Bayesian procedure for estimating the regional maximum possible earthquake magnitude m_max. The paper briefly discusses the currently used Bayesian procedure for m_max, as developed by Cornell, and suggests a statistically justifiable alternative approach. The fundamental problem in applying the current Bayesian formalism to m_max estimation is that one of the components of the posterior distribution is the sample likelihood function, for which the range of observations (earthquake magnitudes) depends on the unknown parameter m_max. This dependence violates the regularity property of the maximum likelihood function. The resulting likelihood function therefore reaches its maximum at the maximum observed earthquake magnitude m_max^obs and not at the required maximum possible magnitude m_max. Since the sample likelihood function is a key component of the posterior distribution, the posterior estimate of m_max is biased. The degree of the bias and its sign depend on the applied Bayesian estimator, the quantity of information provided by the prior distribution, and the sample likelihood function. It has been shown that if the maximum posterior estimate is used, the bias is negative and the resulting underestimation of m_max can be as large as 0.5 units of magnitude. This study explores only the maximum posterior estimate of m_max, which is conceptually close to classic maximum likelihood estimation. However, the conclusions regarding the shortfall of the current Bayesian procedure apply to all Bayesian estimators, e.g. the posterior mean and posterior median. A simple, ad hoc solution to this non-regular maximum likelihood problem is also presented.
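The non-regularity can be demonstrated numerically: for a doubly-truncated exponential (Gutenberg-Richter) magnitude model, the likelihood is monotonically decreasing in m_max above the largest observed magnitude, so its maximum sits exactly at m_max^obs. A toy sketch with illustrative parameter values:

```python
import numpy as np

def loglik(m, beta, m_min, m_max):
    """Log-likelihood of magnitudes under a doubly-truncated exponential
    (Gutenberg-Richter) density on [m_min, m_max]."""
    norm = 1.0 - np.exp(-beta * (m_max - m_min))
    return np.sum(np.log(beta) - beta * (m - m_min) - np.log(norm))

rng = np.random.default_rng(3)
beta, m_min, true_mmax = 2.0, 4.0, 8.0

# inverse-CDF sampling from the truncated exponential
u = rng.uniform(size=500)
m = m_min - np.log(1 - u * (1 - np.exp(-beta * (true_mmax - m_min)))) / beta
m_obs_max = m.max()

# evaluate the likelihood on a grid of candidate m_max values:
# the normalizing constant grows with m_max, so the log-likelihood
# decreases in m_max and peaks at the smallest admissible value, m_obs_max
grid = np.linspace(m_obs_max, true_mmax + 1.0, 200)
ll = np.array([loglik(m, beta, m_min, g) for g in grid])
```

Since m_obs_max is almost surely below the true m_max, the likelihood-driven estimate is biased downward, exactly the underestimation the abstract describes.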





1990 ◽  
Vol 42 (3) ◽  
pp. 437-446 ◽  
Author(s):  
Thomas W. Reiland

The concept of invexity is extended to nondifferentiable functions. Characterisations of nonsmooth invexity are derived as well as results in unconstrained and constrained optimisation and duality. The principal analytic tool is the generalised gradient of Clarke for Lipschitz functions.
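One common way to state the nonsmooth extension via Clarke's generalised gradient (a sketch of the standard definition, not necessarily the paper's exact formulation):

```latex
% A locally Lipschitz function f : \mathbb{R}^n \to \mathbb{R} is invex
% with respect to a kernel \eta(x, u) if, for all x and u,
f(x) - f(u) \;\ge\; \xi^{\top} \eta(x, u)
\qquad \text{for every } \xi \in \partial f(u),
% where \partial f(u) is Clarke's generalised gradient of f at u.
```

For differentiable f, the generalised gradient reduces to the singleton {∇f(u)} and this recovers the classical invexity condition f(x) − f(u) ≥ ∇f(u)^T η(x, u).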



Entropy ◽  
2018 ◽  
Vol 20 (9) ◽  
pp. 642 ◽  
Author(s):  
Erlandson Saraiva ◽  
Adriano Suzuki ◽  
Luis Milan

In this paper, we study the performance of Bayesian computational methods to estimate the parameters of a bivariate survival model based on the Ali–Mikhail–Haq copula with marginal distributions given by Weibull distributions. The estimation procedure was based on Markov chain Monte Carlo (MCMC) algorithms. We present three versions of the Metropolis–Hastings algorithm: Independent Metropolis–Hastings (IMH), Random Walk Metropolis (RWM) and Metropolis–Hastings with a natural-candidate generating density (MH). Since the creation of a good candidate generating density in IMH and RWM may be difficult, we also describe how to update a parameter of interest using the slice sampling (SS) method. A simulation study was carried out to compare the performances of the IMH, RWM and SS. A comparison was made using the sample root mean square error as an indicator of performance. Results obtained from the simulations show that the SS algorithm is an effective alternative to the IMH and RWM methods when simulating values from the posterior distribution, especially for small sample sizes. We also applied these methods to a real data set.
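Since the abstract highlights slice sampling as the practical alternative when a good candidate density is hard to construct, a generic one-dimensional slice sampler (stepping-out with shrinkage, after Neal) can be sketched; the target below is a toy standard normal, not the copula posterior:

```python
import numpy as np

def slice_sample(logpost, x0, n_draws, w, rng, max_steps=50):
    """One-dimensional slice sampler with stepping-out and shrinkage."""
    draws = np.empty(n_draws)
    x = x0
    for i in range(n_draws):
        # draw the vertical slice level under the (log) density at x
        logy = logpost(x) + np.log(rng.uniform())
        # stepping-out: expand an interval of width w until it brackets the slice
        L = x - w * rng.uniform()
        R = L + w
        steps = 0
        while logpost(L) > logy and steps < max_steps:
            L -= w; steps += 1
        steps = 0
        while logpost(R) > logy and steps < max_steps:
            R += w; steps += 1
        # shrinkage: sample uniformly, shrinking the interval on rejection
        while True:
            x1 = rng.uniform(L, R)
            if logpost(x1) > logy:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        draws[i] = x
    return draws

# toy target: standard normal log-density (up to an additive constant)
rng = np.random.default_rng(4)
draws = slice_sample(lambda t: -0.5 * t * t, x0=0.0, n_draws=5000, w=2.0, rng=rng)
mean, sd = float(draws.mean()), float(draws.std())
```

Unlike IMH or RWM, the sampler needs no tuned proposal density, only a rough step width w, which is why it is attractive when candidate densities are hard to design.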


