The Confidence Density for Correlation

Sankhya A ◽  
2021 ◽  
Author(s):  
Gunnar Taraldsen

Abstract Inference for correlation is central in statistics. From a Bayesian viewpoint, the final and most complete outcome of inference for the correlation is the posterior distribution. An explicit formula for the posterior density of the correlation of the binormal distribution is derived. This posterior is an optimal confidence distribution and corresponds to a standard objective prior. It coincides with the fiducial distribution introduced by R.A. Fisher in 1930 in his first paper on fiducial inference. C.R. Rao derived an elegant explicit formula for this fiducial density, but the new formula, which uses hypergeometric functions, is better suited for numerical calculations. Several examples on real data are presented for illustration. A brief review of the connections between confidence distributions and Bayesian and fiducial inference is given in an Appendix.
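As a rough numerical illustration of working with such a posterior, the sketch below normalizes an unnormalized fiducial-type density for the correlation on (-1, 1) by quadrature and reads off posterior summaries. The kernel's exponents are illustrative placeholders only, not the paper's exact hypergeometric formula.

```python
import numpy as np

def trapz(y, x):
    """Trapezoid-rule integral of sampled values y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical fiducial-type kernel for the correlation rho, given a sample
# correlation r from n binormal observations.  The exponents are illustrative
# placeholders, not the paper's derived hypergeometric formula.
def kernel(rho, r, n):
    return (1 - rho**2) ** ((n - 4) / 2) * (1 - r * rho) ** (1.5 - n)

r, n = 0.6, 20
rho = np.linspace(-0.999, 0.999, 4001)
vals = kernel(rho, r, n)
dens = vals / trapz(vals, rho)            # normalized posterior density

# Posterior summaries: mean and an equal-tailed 95% credible interval.
cdf = np.concatenate([[0.0], np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(rho))])
cdf /= cdf[-1]
mean = trapz(rho * dens, rho)
lo, hi = np.interp([0.025, 0.975], cdf, rho)
print(round(mean, 3), round(lo, 3), round(hi, 3))
```

The same normalize-then-summarize pattern applies once the paper's exact hypergeometric density is substituted for the placeholder kernel.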

Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1573
Author(s):  
Waleed Mohamed Abd-Elhameed ◽  
Badah Mohamed Badah

This article deals with the general linearization problem of Jacobi polynomials. We provide two approaches for finding closed analytical forms of the linearization coefficients of these polynomials. The first approach is built on establishing a new formula in which the moments of the shifted Jacobi polynomials are expressed in terms of other shifted Jacobi polynomials. The derived moments formula involves a hypergeometric function of the type 4F3(1), which cannot be summed in closed form in general but can be for special choices of the involved parameters. The reduced moments formulas lead to new linearization formulas for certain parameters of Jacobi polynomials. Another approach for obtaining further linearization formulas of some Jacobi polynomials makes use of the connection formulas between two different Jacobi polynomials. In the two suggested approaches, we utilize standard reduction formulas for certain hypergeometric functions of unit argument, such as Watson's and the Chu–Vandermonde identities. Furthermore, symbolic algebraic computations such as the algorithms of Zeilberger, Petkovšek and van Hoeij may be utilized for the same purpose. As an application of some of the derived linearization formulas, we propose a numerical algorithm to solve the non-linear Riccati differential equation based on the spectral tau method.
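The linearization problem can be seen concretely in the special Jacobi case α = β = 0 (Legendre polynomials), where NumPy's `legmul` expands the product of two Legendre series back in the Legendre basis, i.e., it computes linearization coefficients numerically:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Linearization in the special Jacobi case alpha = beta = 0 (Legendre):
# expand the product P_m(x) * P_n(x) back in the Legendre basis.
m, n = 3, 4
cm = np.zeros(m + 1); cm[m] = 1.0   # Legendre-series coefficients of P_m
cn = np.zeros(n + 1); cn[n] = 1.0   # Legendre-series coefficients of P_n
lin = L.legmul(cm, cn)              # linearization coefficients, degree m + n

# Sanity check: both sides agree pointwise on [-1, 1].
x = np.linspace(-1, 1, 101)
assert np.allclose(L.legval(x, lin), L.legval(x, cm) * L.legval(x, cn))
print(lin)
```

For example, P1·P1 = (1/3)P0 + (2/3)P2, since x² = (2P2(x) + 1)/3. Closed analytical formulas of the kind derived in the article replace this numerical expansion with explicit coefficient expressions.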


2021 ◽  
Vol 33 (1) ◽  
pp. 1-22
Author(s):  
D. Artamonov

The Clebsh–Gordan coefficients for the Lie algebra g l 3 \mathfrak {gl}_3 in the Gelfand–Tsetlin base are calculated. In contrast to previous papers, the result is given as an explicit formula. To obtain the result, a realization of a representation in the space of functions on the group G L 3 GL_3 is used. The keystone fact that allows one to carry the calculation of Clebsh–Gordan coefficients is the theorem that says that functions corresponding to the Gelfand–Tsetlin base vectors can be expressed in terms of generalized hypergeometric functions.


2014 ◽  
Vol 51 (3) ◽  
pp. 640-656 ◽  
Author(s):  
Alessandro Gnoatto ◽  
Martino Grasselli

We derive the explicit formula for the joint Laplace transform of the Wishart process and its time integral, which extends the original approach of Bru (1991). We compare our methodology with the alternative results given by the variation-of-constants method, the linearization of the matrix Riccati ordinary differential equation, and the Runge-Kutta algorithm. The new formula turns out to be fast and accurate.
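A scalar analogue conveys the comparison the abstract describes: a Riccati equation with a known closed form can be checked against a Runge-Kutta solution. This is only a one-dimensional sketch, not the matrix Riccati ODE of the Wishart setting.

```python
import numpy as np

def rk4(f, y0, t0, t1, steps):
    """Classical 4th-order Runge-Kutta for a scalar ODE y' = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Scalar Riccati ODE y' = a - y^2, y(0) = 0, with closed form
# y(t) = sqrt(a) * tanh(sqrt(a) * t).
a = 2.0
approx = rk4(lambda t, y: a - y * y, 0.0, 0.0, 1.0, 1000)
exact = np.sqrt(a) * np.tanh(np.sqrt(a) * 1.0)
print(abs(approx - exact))
```

As in the paper's benchmark, the explicit formula is both the accuracy reference and far cheaper than stepping the ODE.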


2002 ◽  
Vol 21 (3) ◽  
pp. 78-82
Author(s):  
V. S. S. Yadavalli ◽  
P. J. Mostert ◽  
A. Bekker ◽  
M. Botha

Bayesian estimation is presented for the stationary rate of disappointments, D∞, for two models (with different specifications) of intermittently used systems. The random variables in the system are considered to be independently exponentially distributed. Jeffreys' prior is assumed for the unknown parameters in the system. In both models, inference about D∞ is complicated by its complex, non-linear definition. Monte Carlo simulation is used to derive the posterior distribution of D∞ and subsequently the highest posterior density (HPD) intervals. These results are illustrated with a numerical example in which the Bayes estimates and HPD intervals are determined. The illustration is extended to assess the frequentist properties of this Bayes procedure by calculating coverage proportions for each of these HPD intervals, assuming fixed values for the parameters.
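The simulation route described here, pushing posterior draws of the parameters through a non-linear functional and reading an HPD interval off the samples, can be sketched generically. The functional and posterior shapes below are hypothetical stand-ins, not the paper's D∞ or models.

```python
import numpy as np

def hpd_interval(samples, cred=0.95):
    """Shortest interval containing a fraction `cred` of the samples."""
    s = np.sort(samples)
    k = int(np.ceil(cred * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

# Illustrative stand-in for D_inf: a non-linear function of two exponential
# rates whose posteriors (exponential data, Jeffreys-type prior) are gamma.
rng = np.random.default_rng(1)
lam1 = rng.gamma(shape=30, scale=1 / 45.0, size=100_000)  # posterior draws
lam2 = rng.gamma(shape=25, scale=1 / 60.0, size=100_000)
d = lam1 * lam2 / (lam1 + lam2)        # hypothetical non-linear functional

bayes_est = d.mean()                   # posterior mean
lo, hi = hpd_interval(d, 0.95)
print(round(bayes_est, 4), round(lo, 4), round(hi, 4))
```

Coverage proportions, as in the paper's frequentist check, would come from repeating this over many data sets simulated at fixed parameter values.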


2012 ◽  
Vol 102 (4) ◽  
pp. 381-389 ◽  
Author(s):  
Ivan Simko ◽  
Hans-Peter Piepho

The area under the disease progress curve (AUDPC) is frequently used to combine multiple observations of disease progress into a single value. However, our analysis shows that this approach severely underestimates the effect of the first and last observations. To get a better estimate of disease progress, we have developed a new formula termed the area under the disease progress stairs (AUDPS). The AUDPS approach improves the estimation of disease progress by giving the first and last observations a weight closer to optimal. Analysis of real data indicates that AUDPS outperforms AUDPC in most of the tested trials and may be less precise than AUDPC only when the first or last observations have a comparatively large variance. We propose using AUDPS and its standardized (sAUDPS) and relative (rAUDPS) forms when combining multiple observations from disease progress experiments into a single value.
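The two statistics can be computed side by side. AUDPC is the usual trapezoid rule over the assessments; AUDPS (per Simko and Piepho, 2012) adds a half-width step for the first and last observations, so with equally spaced assessments every observation receives the same weight:

```python
import numpy as np

def audpc(t, y):
    """Area under the disease progress curve (trapezoid rule)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def audps(t, y):
    """Area under the disease progress stairs: AUDPC plus a half-width
    step for the first and last observations."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    d = (t[-1] - t[0]) / (len(t) - 1)   # mean assessment interval
    return audpc(t, y) + 0.5 * (y[0] + y[-1]) * d

t = [0, 7, 14, 21]       # assessment days (illustrative data)
y = [0, 10, 30, 60]      # disease severity (%)
print(audpc(t, y), audps(t, y))
```

For these equally spaced data, AUDPC = 490 while AUDPS = 700 = 7 × (0 + 10 + 30 + 60), showing how the stairs form weights the endpoint observations fully rather than by half.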


Author(s):  
Hiba Zeyada Muhammed ◽  
Essam Abd Elsalam Muhammed

In this paper, Bayesian and non-Bayesian estimation of the shape parameter of the inverted Topp-Leone distribution is studied for complete and randomly censored samples. The maximum likelihood estimator (MLE) and Bayes estimator of the unknown parameter are proposed. The Bayes estimates (BEs) are computed under the squared error loss (SEL) function using Markov Chain Monte Carlo (MCMC) techniques. The asymptotic, bootstrap (p,t), and highest posterior density intervals are computed. The Metropolis–Hastings algorithm is proposed for the Bayes estimates. A Monte Carlo simulation is performed to compare the performances of the proposed methods, and one real data set is analyzed for illustrative purposes.
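The MCMC route to a SEL Bayes estimate, run a Metropolis–Hastings chain on the posterior and take the posterior mean, can be sketched on a toy target with a known answer. This is a generic random-walk sampler, not the paper's inverted Topp-Leone posterior.

```python
import numpy as np

def metropolis_hastings(log_post, init, steps, scale, rng):
    """Random-walk Metropolis-Hastings; returns the chain of samples."""
    chain = np.empty(steps)
    x, lp = init, log_post(init)
    for i in range(steps):
        prop = x + scale * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy target with a known answer: N(theta, 1) data under a flat prior gives
# posterior N(mean(data), 1/n), so the SEL Bayes estimate is mean(data).
rng = np.random.default_rng(7)
data = rng.normal(2.0, 1.0, size=50)
log_post = lambda th: -0.5 * np.sum((data - th) ** 2)

chain = metropolis_hastings(log_post, init=0.0, steps=20_000, scale=0.5, rng=rng)
bayes_sel = chain[5_000:].mean()        # posterior mean after burn-in
print(round(bayes_sel, 3), round(data.mean(), 3))
```

Under squared error loss the Bayes estimate is exactly the posterior mean, which is why averaging the post-burn-in chain is the right summary.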


2021 ◽  
Vol 6 (10) ◽  
pp. 10789-10801
Author(s):  
Tahani A. Abushal
In this paper, the problem of estimating the parameter of the Akash distribution is considered when the lifetime of the product follows Type-II censoring. The maximum likelihood estimators (MLE) are studied for estimating the unknown parameter and reliability characteristics. An approximate confidence interval for the parameter is derived under the s-normal approximation to the asymptotic distribution of the MLE. Bayesian inference procedures are developed under the usual error loss function through Lindley's technique and the Metropolis–Hastings algorithm. The highest posterior density interval is developed using the Metropolis–Hastings algorithm. Finally, the performances of the different methods are compared through a Monte Carlo simulation study. An application to a set of real data is also analyzed using the proposed methods.
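The asymptotic-normal interval used here follows a standard recipe: invert the observed information at the MLE to get a standard error. The sketch below applies it to illustrative complete exponential lifetimes (a stand-in for the Akash model and its Type-II censored likelihood), with the information obtained by a numeric second derivative.

```python
import numpy as np

def normal_ci(loglik, mle, h=1e-5):
    """95% asymptotic-normal CI from the observed information,
    estimated by a central-difference second derivative."""
    info = -(loglik(mle + h) - 2 * loglik(mle) + loglik(mle - h)) / h**2
    se = 1.0 / np.sqrt(info)
    z = 1.959963984540054           # 97.5% standard normal quantile
    return mle - z * se, mle + z * se

# Illustrative complete-sample exponential lifetimes (stand-in for the
# Akash model): the MLE of the rate is 1 / sample mean.
rng = np.random.default_rng(3)
x = rng.exponential(scale=1 / 1.5, size=200)        # true rate 1.5
loglik = lambda lam: len(x) * np.log(lam) - lam * x.sum()
mle = 1.0 / x.mean()
lo, hi = normal_ci(loglik, mle)
print(round(mle, 3), round(lo, 3), round(hi, 3))
```

For censored data, only the log-likelihood changes; the information-inversion step is identical.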


2018 ◽  
Vol 41 (2) ◽  
pp. 251-267 ◽  
Author(s):  
Abbas Pak ◽  
Arjun Kumar Gupta ◽  
Nayereh Bagheri Khoolenjani

In this paper we study the reliability of a multicomponent stress-strength model assuming that the components follow the power Lindley model. The maximum likelihood estimate of the reliability parameter and its asymptotic confidence interval are obtained. Applying the parametric bootstrap technique, interval estimation of the reliability is presented. Also, the Bayes estimate and highest posterior density credible interval of the reliability parameter are derived using suitable priors on the parameters. Because there is no closed form for the Bayes estimate, we use the Markov Chain Monte Carlo method to obtain an approximate Bayes estimate of the reliability. To evaluate the performances of the different procedures, simulation studies are conducted and a real data example is provided.
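The parametric bootstrap step works the same way in any stress-strength model: fit the parameters, simulate fresh samples from the fitted model, re-estimate the reliability, and take percentile limits. The sketch below uses exponential components (where R = P(X < Y) has the closed form a/(a + b) for rates a, b) as a simpler stand-in for the power Lindley model.

```python
import numpy as np

def reliability(x, y):
    """MLE of R = P(X < Y) for exponential stress X and strength Y."""
    a, b = 1 / np.mean(x), 1 / np.mean(y)
    return a / (a + b)

def bootstrap_ci(x, y, n_boot=2000, level=0.95, seed=0):
    """Parametric bootstrap percentile interval for R."""
    rng = np.random.default_rng(seed)
    a, b = 1 / np.mean(x), 1 / np.mean(y)
    reps = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.exponential(1 / a, size=len(x))   # resample fitted model
        yb = rng.exponential(1 / b, size=len(y))
        reps[i] = reliability(xb, yb)
    alpha = (1 - level) / 2
    return np.quantile(reps, [alpha, 1 - alpha])

rng = np.random.default_rng(11)
stress = rng.exponential(1.0, size=40)     # X ~ Exp(rate 1)
strength = rng.exponential(2.0, size=40)   # Y ~ Exp(rate 0.5): true R = 2/3
r_hat = reliability(stress, strength)
lo, hi = bootstrap_ci(stress, strength)
print(round(r_hat, 3), round(lo, 3), round(hi, 3))
```

Swapping in power Lindley fitting and simulation changes only `reliability` and the resampling lines; the percentile logic is unchanged.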


2019 ◽  
Vol 69 (2) ◽  
pp. 209-220 ◽  
Author(s):  
Mathieu Fourment ◽  
Andrew F Magee ◽  
Chris Whidden ◽  
Arman Bilge ◽  
Frederick A Matsen ◽  
...  

Abstract The marginal likelihood of a model is a key quantity for assessing the evidence provided by the data in support of a model. The marginal likelihood is the normalizing constant for the posterior density, obtained by integrating the product of the likelihood and the prior with respect to model parameters. Thus, the computational burden of computing the marginal likelihood scales with the dimension of the parameter space. In phylogenetics, where we work with tree topologies that are high-dimensional models, standard approaches to computing marginal likelihoods are very slow. Here, we study methods to quickly compute the marginal likelihood of a single fixed tree topology. We benchmark the speed and accuracy of 19 different methods to compute the marginal likelihood of phylogenetic topologies on a suite of real data sets under the JC69 model. These methods include several new ones that we develop explicitly to solve this problem, as well as existing algorithms that we apply to phylogenetic models for the first time. Altogether, our results show that the accuracy of these methods varies widely, and that accuracy does not necessarily correlate with computational burden. Our newly developed methods are orders of magnitude faster than standard approaches, and in some cases, their accuracy rivals the best established estimators.
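The core quantity, the marginal likelihood as a normalizing constant, can be illustrated on a conjugate toy model where it is known exactly, using the simplest Monte Carlo estimator (averaging the likelihood over prior draws). This is only a one-parameter illustration, not one of the 19 phylogenetic estimators benchmarked in the paper.

```python
import numpy as np

# Conjugate toy model with a known marginal likelihood:
# y ~ N(theta, s2), theta ~ N(m0, t2)  =>  marginally y ~ N(m0, s2 + t2).
m0, t2, s2 = 0.0, 4.0, 1.0
y = 1.3

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

exact = normal_pdf(y, m0, s2 + t2)

# Naive Monte Carlo estimator: average the likelihood over prior draws.
rng = np.random.default_rng(0)
theta = rng.normal(m0, np.sqrt(t2), size=200_000)
estimate = normal_pdf(y, theta, s2).mean()
print(round(exact, 5), round(estimate, 5))
```

In high dimensions (such as over tree topologies) this naive estimator degrades badly, which is exactly what motivates the faster and more stable estimators the paper develops and benchmarks.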

