Bayesian Inference for the Difference of Two Proportion Parameters in Over-Reported Two-Sample Binomial Data Using the Doubly Sample

Stats ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 111-120 ◽  
Author(s):  
Dewi Rahardja

We construct point and interval estimates, using a Bayesian approach, for the difference of two population proportion parameters based on two independent samples of binomial data subject to one type of misclassification. Specifically, we derive an easy-to-implement, closed-form algorithm for drawing from the posterior distributions. For illustration, we apply our algorithm to a real-data example. Finally, we conduct simulation studies to demonstrate the efficiency of our algorithm for Bayesian inference.
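A minimal sketch of closed-form posterior sampling for a difference of two proportions, assuming conjugate Beta priors and error-free counts; the paper's algorithm additionally corrects for the over-reporting misclassification via double sampling, which this simplified version omits:

```python
import numpy as np

def diff_proportion_draws(x1, n1, x2, n2, a=1.0, b=1.0,
                          n_draws=100_000, seed=0):
    """Monte Carlo draws from the posterior of p1 - p2 under
    independent Beta(a, b) priors and binomial likelihoods
    (no misclassification adjustment)."""
    rng = np.random.default_rng(seed)
    p1 = rng.beta(a + x1, b + n1 - x1, n_draws)
    p2 = rng.beta(a + x2, b + n2 - x2, n_draws)
    return p1 - p2

# hypothetical counts: 45/100 successes vs 30/100
draws = diff_proportion_draws(x1=45, n1=100, x2=30, n2=100)
point = draws.mean()                          # posterior mean of p1 - p2
lo, hi = np.quantile(draws, [0.025, 0.975])   # 95% credible interval
```

Because both posteriors are Beta, point and interval estimates come directly from independent draws with no MCMC tuning.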

2015 ◽  
Vol 30 (1) ◽  
Author(s):  
Dinh Tuan Nguyen ◽  
Yann Dijoux ◽  
Mitra Fouladirad

Abstract The paper presents a Bayesian approach to the Brown–Proschan imperfect maintenance model. The initial failure rate is assumed to follow a Weibull distribution. A discussion of the choice of informative and non-informative prior distributions is provided. Computing the posterior distributions requires the Metropolis-within-Gibbs algorithm. A study on the quality of the estimators of the model obtained from Bayesian and frequentist inference is proposed. An application to real data is finally developed.
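The Metropolis-within-Gibbs scheme can be illustrated on a deliberately simpler problem than the paper's: sampling Weibull shape and scale parameters from complete (non-censored) failure times, with flat priors on the positive axis and log-scale random-walk proposals. This is a sketch of the algorithmic pattern only, not the imperfect-maintenance model itself:

```python
import numpy as np

def weibull_loglik(shape, scale, t):
    """Weibull log-likelihood for complete failure times t."""
    if shape <= 0 or scale <= 0:
        return -np.inf
    z = t / scale
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape)

def metropolis_within_gibbs(t, n_iter=5000, step=0.1, seed=1):
    """Update shape and scale one at a time with multiplicative
    random-walk Metropolis steps (Jacobian term included)."""
    rng = np.random.default_rng(seed)
    shape, scale = 1.0, float(np.mean(t))
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        for j in range(2):                    # one Metropolis step per block
            cur = [shape, scale]
            prop = cur.copy()
            prop[j] *= np.exp(step * rng.normal())
            log_acc = (weibull_loglik(prop[0], prop[1], t)
                       - weibull_loglik(shape, scale, t)
                       + np.log(prop[j] / cur[j]))  # log-scale proposal Jacobian
            if np.log(rng.uniform()) < log_acc:
                shape, scale = prop
        chain[i] = shape, scale
    return chain

t = np.random.default_rng(0).weibull(2.0, 200) * 3.0  # true shape 2, scale 3
chain = metropolis_within_gibbs(t)
post_shape, post_scale = chain[1000:].mean(axis=0)    # discard burn-in
```

Each parameter is updated conditionally on the current value of the other, which is exactly the "within-Gibbs" structure the abstract refers to.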


Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

This chapter provides a self-contained review of Bayesian inference and decision making. It begins with a discussion of Bayesian inference for a simple autoregressive (AR) model, which takes the form of a Gaussian linear regression. For this model, the posterior distribution can be characterized analytically and closed-form expressions for its moments are readily available. The chapter also examines how to turn posterior distributions into point estimates, interval estimates, and forecasts, and how to solve general decision problems. The chapter shows how, in a Bayesian setting, the calculus of probability is used to characterize and update an individual's state of knowledge, or degrees of belief, with respect to quantities such as model parameters or future observations.
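A sketch of the analytical posterior for an AR(1) coefficient, treating the innovation variance as known and using a conjugate normal prior (a simplification of the chapter's fuller treatment; the variable names here are illustrative):

```python
import numpy as np

def ar1_posterior(y, mu0=0.0, tau0=1.0, sigma2=1.0):
    """Closed-form posterior for phi in y_t = phi*y_{t-1} + eps_t,
    eps_t ~ N(0, sigma2), with a conjugate N(mu0, tau0^2) prior
    and sigma2 treated as known."""
    x, z = y[:-1], y[1:]                      # regressor and response
    prec = 1.0 / tau0**2 + x @ x / sigma2     # posterior precision
    mean = (mu0 / tau0**2 + x @ z / sigma2) / prec
    return mean, np.sqrt(1.0 / prec)          # posterior mean and sd

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):                       # simulate AR(1), phi = 0.7
    y[t] = 0.7 * y[t - 1] + rng.normal()
mean, sd = ar1_posterior(y)
```

No sampling is needed: the Gaussian-linear-regression form makes the posterior mean a precision-weighted average of the prior mean and the least-squares estimate.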


Author(s):  
Ayman Baklizi

In this paper, we develop a method for constructing confidence intervals for the parameters of lifetime distributions based on progressively type II censored data. The method produces closed-form expressions for the bounds of the confidence intervals for several special cases of parameters and lifetime distributions. Closed-form approximations are derived for the intervals for the parameters of location or scale families of distributions. The method is illustrated with several examples, and analyses of real data sets are included to illustrate its application.
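The flavor of such closed-form intervals can be shown in the classical special case of ordinary (non-progressive) type II censoring for the exponential mean, where the pivot 2T/θ is exactly chi-squared with 2r degrees of freedom; the chi-squared quantile below uses the Wilson-Hilferty approximation to stay dependency-free:

```python
import numpy as np
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-squared quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * np.sqrt(2 / (9 * df))) ** 3

def exp_type2_ci(t_obs, n, alpha=0.05):
    """Pivot-based CI for the exponential mean theta under type II
    censoring at the r-th failure: 2T/theta ~ chi2(2r), with T the
    total time on test."""
    r = len(t_obs)
    T = np.sum(t_obs) + (n - r) * t_obs[-1]   # censored units run to t_(r)
    return (2 * T / chi2_quantile(1 - alpha / 2, 2 * r),
            2 * T / chi2_quantile(alpha / 2, 2 * r))

rng = np.random.default_rng(0)
n, theta = 30, 10.0
t_obs = np.sort(rng.exponential(theta, n))[:20]  # first 20 observed failures
lo, hi = exp_type2_ci(t_obs, n)
```

Progressive censoring generalizes this by removing units at each failure time; the pivotal structure the paper exploits is analogous.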


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
S. Balaswamy ◽  
R. Vishnu Vardhan

In the recent past, work in the area of ROC analysis has gained attention for explaining the accuracy of a test and identifying the optimal threshold. ROC models of this type are referred to as bi-distributional ROC models, for example Bi-Normal, Bi-Exponential, Bi-Logistic and so forth. In practical situations, however, we come across data which are skewed in nature with extended tails. To address this issue, the accuracy of a test should be explained using both scale and shape parameters. Hence, the present paper proposes an ROC model based on two generalized distributions, which helps in explaining the accuracy of a test. Further, confidence intervals are constructed for the proposed curve, that is, for the coordinates of the curve (FPR, TPR) and for the accuracy measure, the Area Under the Curve (AUC); these explain the variability of the curve and provide the sensitivity at a particular value of specificity and vice versa. The proposed methodology is supported by a real data set and simulation studies.
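A nonparametric sketch of estimating AUC with a bootstrap confidence interval; the paper instead fits parametric generalized distributions with explicit scale and shape parameters, so this illustrates only the interval-construction idea, with made-up skewed score data:

```python
import numpy as np

def empirical_auc(neg, pos):
    """Mann-Whitney estimate of AUC: P(pos score > neg score),
    counting ties as one half."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def bootstrap_auc_ci(neg, pos, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for AUC, resampling each
    group independently with replacement."""
    rng = np.random.default_rng(seed)
    aucs = np.array([
        empirical_auc(rng.choice(neg, neg.size), rng.choice(pos, pos.size))
        for _ in range(n_boot)
    ])
    return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
neg = rng.normal(0.0, 1.0, 200)   # non-diseased scores
pos = rng.normal(1.2, 1.5, 150)   # diseased: shifted mean, larger spread
auc = empirical_auc(neg, pos)
lo, hi = bootstrap_auc_ci(neg, pos)
```

The unequal spreads above mimic the "different scale parameters" situation the abstract describes, in which a binormal-style fit with a common shape would be inadequate.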


Author(s):  
Guanghao Qi ◽  
Nilanjan Chatterjee

Abstract Background Previous studies have often evaluated methods for Mendelian randomization (MR) analysis based on simulations that do not adequately reflect the data-generating mechanisms in genome-wide association studies (GWAS), and there are often discrepancies in the performance of MR methods in simulations and real data sets. Methods We use a simulation framework that generates data on full GWAS for two traits under a realistic model for effect-size distribution coherent with the heritability, co-heritability and polygenicity typically observed for complex traits. We further use recent data generated from GWAS of 38 biomarkers in the UK Biobank and performed down-sampling to investigate trends in estimates of causal effects of these biomarkers on the risk of type 2 diabetes (T2D). Results Simulation studies show that weighted mode and MRMix are the only two methods that maintain the correct type I error rate in a diverse set of scenarios. Between the two methods, MRMix tends to be more powerful for larger GWAS whereas the opposite is true for smaller sample sizes. Among the other methods, random-effect IVW (inverse-variance weighted method), MR-Robust and MR-RAPS (robust adjusted profile score) tend to perform best in maintaining a low mean-squared error when the InSIDE assumption is satisfied, but can produce large bias when InSIDE is violated. In real-data analysis, some biomarkers showed major heterogeneity in estimates of their causal effects on the risk of T2D across the different methods, and estimates from many methods trended in one direction with increasing sample size, with patterns similar to those observed in simulation studies. Conclusion The relative performance of different MR methods depends heavily on the sample sizes of the underlying GWAS, the proportion of valid instruments and the validity of the InSIDE assumption. Down-sampling analysis can be used in large GWAS for the possible detection of bias in the MR methods.
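Of the methods compared, the fixed-effect IVW estimator is the simplest to state: a weighted regression of SNP-outcome effects on SNP-exposure effects through the origin. A sketch on simulated summary statistics, assuming all instruments are valid (which is precisely the assumption the abstract notes can fail):

```python
import numpy as np

def ivw_estimate(beta_x, beta_y, se_y):
    """Fixed-effect inverse-variance-weighted MR estimate with
    first-order weights 1/se_y^2."""
    w = 1.0 / se_y**2
    est = np.sum(w * beta_x * beta_y) / np.sum(w * beta_x**2)
    se = np.sqrt(1.0 / np.sum(w * beta_x**2))
    return est, se

rng = np.random.default_rng(0)
n_snps, true_effect = 50, 0.3
beta_x = rng.normal(0.1, 0.02, n_snps)               # SNP-exposure effects
se_y = np.full(n_snps, 0.005)                        # outcome standard errors
beta_y = true_effect * beta_x + rng.normal(0, se_y)  # SNP-outcome effects
est, se = ivw_estimate(beta_x, beta_y, se_y)
```

The random-effects variant used in the paper inflates the standard error by an overdispersion factor when the per-SNP ratio estimates are heterogeneous.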


2021 ◽  
Vol 10 (7) ◽  
pp. 435
Author(s):  
Yongbo Wang ◽  
Nanshan Zheng ◽  
Zhengfu Bian

Since pairwise registration is a necessary step for the seamless fusion of point clouds from neighboring stations, a closed-form solution to planar feature-based registration of LiDAR (Light Detection and Ranging) point clouds is proposed in this paper. Based on the Plücker coordinate-based representation of linear features in three-dimensional space, a quad-tuple-based representation of planar features is introduced, which makes it possible to directly determine the difference between any two planar features. Dual quaternions are employed to represent spatial transformation, and operations between dual quaternions and the quad-tuple-based representation of planar features are given, with which an error norm is constructed. Based on L2-norm minimization, detailed derivations of the proposed solution are explained step by step. Two experiments were designed in which simulated data and real data were both used to verify the correctness and the feasibility of the proposed solution. With the simulated data, the calculated registration results were consistent with the pre-established parameters, which verifies the correctness of the presented solution. With the real data, the calculated registration results were consistent with the results calculated by iterative methods. Conclusions can be drawn from the two experiments: (1) the proposed solution does not require any initial estimates of the unknown parameters, which assures the stability and robustness of the solution; (2) using dual quaternions to represent spatial transformation greatly reduces the additional constraints in the estimation process.
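A sketch of the same closed-form, initialization-free idea using a simpler parameterization than the paper's dual quaternions: given matched planes in n·x = d form, the rotation follows from the normals via SVD (Kabsch) and the translation from a least-squares fit of the plane offsets. The plane values below are synthetic:

```python
import numpy as np

def register_planes(src_n, src_d, dst_n, dst_d):
    """Closed-form registration from matched planes (n·x = d form).
    Rows of src_n/dst_n are unit normals; src_d/dst_d are offsets."""
    H = src_n.T @ dst_n                       # Kabsch cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep det = +1
    R = Vt.T @ S @ U.T
    # Under x' = R x + t a plane maps as n' = R n, d' = d + n'·t,
    # so t solves the linear least-squares system dst_n t = dst_d - src_d.
    t, *_ = np.linalg.lstsq(dst_n, dst_d - src_d, rcond=None)
    return R, t

rng = np.random.default_rng(0)
n = rng.normal(size=(6, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)   # six unit normals
d = rng.normal(size=6)
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)     # ground truth: 30° about z
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
m, d2 = n @ R_true.T, d + (n @ R_true.T) @ t_true
R, t = register_planes(n, d, m, d2)
```

As in the paper's solution, no initial estimate of the transformation is required; at least three planes with linearly independent normals determine both rotation and translation.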


2020 ◽  
Vol 36 (Supplement_2) ◽  
pp. i675-i683
Author(s):  
Sudhir Kumar ◽  
Antonia Chroni ◽  
Koichiro Tamura ◽  
Maxwell Sanderford ◽  
Olumide Oladeinde ◽  
...  

Abstract Summary Metastases cause the vast majority of cancer morbidity and mortality. Metastatic clones are formed by dispersal of cancer cells to secondary tissues, and are not medically detected or visible until later stages of cancer development. Clone phylogenies within patients provide a means of tracing the otherwise inaccessible dynamic history of migrations of cancer cells. Here, we present a new Bayesian approach, PathFinder, for reconstructing the routes of cancer cell migrations. PathFinder uses the clone phylogeny, the number of mutational differences among clones, and information on the presence or absence of observed clones in primary and metastatic tumors. By analyzing simulated datasets, we found that PathFinder performs well in reconstructing clone migrations from the primary tumor to new metastases as well as between metastases. It was more challenging to trace migrations from metastases back to primary tumors. We found that a vast majority of errors can be corrected by sampling more clones per tumor, and by increasing the number of genetic variants assayed per clone. We also identified situations in which phylogenetic approaches alone are not sufficient to reconstruct migration routes. In conclusion, we anticipate that the use of PathFinder will enable a more reliable inference of migration histories and their posterior probabilities, which is required to assess the relative preponderance of seeding of new metastases by clones from primary tumors and/or existing metastases. Availability and implementation PathFinder is available on the web at https://github.com/SayakaMiura/PathFinder.


Biometrika ◽  
2020 ◽  
Author(s):  
S Na ◽  
M Kolar ◽  
O Koyejo

Abstract Differential graphical models are designed to represent the difference between the conditional dependence structures of two groups, thus are of particular interest for scientific investigation. Motivated by modern applications, this manuscript considers an extended setting where each group is generated by a latent variable Gaussian graphical model. Due to the existence of latent factors, the differential network is decomposed into sparse and low-rank components, both of which are symmetric indefinite matrices. We estimate these two components simultaneously using a two-stage procedure: (i) an initialization stage, which computes a simple, consistent estimator, and (ii) a convergence stage, implemented using a projected alternating gradient descent algorithm applied to a nonconvex objective, initialized using the output of the first stage. We prove that given the initialization, the estimator converges linearly with a nontrivial, minimax optimal statistical error. Experiments on synthetic and real data illustrate that the proposed nonconvex procedure outperforms existing methods.
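A toy illustration of the sparse-plus-low-rank split the abstract describes, using crude alternating projections (hard-thresholding to a fixed support size, then SVD truncation) rather than the paper's projected alternating gradient descent; the exact sparsity and rank levels are assumed known here, which the paper's procedure does not require in this form:

```python
import numpy as np

def split_sparse_lowrank(D, rank, nnz, n_iter=50):
    """Alternating projections for D ≈ S + L: keep the nnz
    largest-magnitude entries of D - L as S, then take the best
    rank-`rank` approximation of D - S as L."""
    S, L = np.zeros_like(D), np.zeros_like(D)
    for _ in range(n_iter):
        R = D - L                                     # residual assigned to S
        thr = np.sort(np.abs(R), axis=None)[-nnz]     # nnz-th largest magnitude
        S = np.where(np.abs(R) >= thr, R, 0.0)
        U, sv, Vt = np.linalg.svd(D - S)              # low-rank projection
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]
    return S, L

rng = np.random.default_rng(0)
u = rng.normal(size=(20, 2))
L_true = u @ u.T                                      # rank-2 symmetric part
S_true = np.zeros((20, 20))
S_true.flat[rng.choice(400, 10, replace=False)] = 5.0
S_true = (S_true + S_true.T) / 2                      # symmetric sparse part
D = S_true + L_true
nnz = np.count_nonzero(S_true)
S, L = split_sparse_lowrank(D, rank=2, nnz=nnz)
```

Both recovered components are symmetric by construction of the input, mirroring the symmetric indefinite structure of the differential network decomposition.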

