Application of empirical Bayes inference to estimation of rate of change in the presence of informative right censoring

1992 ◽  
Vol 11 (5) ◽  
pp. 621-631 ◽  
Author(s):  
Motomi Mori ◽  
George G. Woodworth ◽  
Robert F. Woolson

2015 ◽  
Vol 2015 ◽  
pp. 1-5
Author(s):  
Naiyi Li ◽  
Yuan Li ◽  
Yongming Li ◽  
Yang Liu

This research is based on ranked set sampling. Through analysis and proof, the empirical Bayes test rule for the parameter of the power distribution is obtained, and its asymptotic property is established.
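
The two ingredients of the abstract, ranked set sampling and the one-parameter power distribution, can be sketched together. A minimal illustration, assuming the power distribution with density f(x) = θx^(θ−1) on (0, 1); function names are illustrative, not the authors' code:

```python
import random

def power_sample(theta, rng):
    # Inverse-CDF draw from the power distribution f(x) = theta * x**(theta - 1) on (0, 1):
    # if U ~ Uniform(0, 1), then U**(1/theta) has the desired law.
    return rng.random() ** (1.0 / theta)

def ranked_set_sample(theta, m, cycles, rng):
    # One RSS cycle: draw m judgment sets of m units each; from the i-th (sorted) set
    # keep the i-th order statistic. Repeat for the requested number of cycles.
    sample = []
    for _ in range(cycles):
        for i in range(m):
            judgment_set = sorted(power_sample(theta, rng) for _ in range(m))
            sample.append(judgment_set[i])
    return sample

rng = random.Random(0)
rss = ranked_set_sample(theta=2.0, m=3, cycles=50, rng=rng)
print(len(rss))  # 150 observations (50 cycles x 3 order statistics)
```

Each RSS observation is an order statistic rather than a plain draw, which is what gives the empirical Bayes test rule in the paper its efficiency gain over simple random sampling.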


Author(s):  
Hau-Tieng Wu ◽  
Tze Leung Lai ◽  
Gabriel G. Haddad ◽  
Alysson Muotri

Herein we describe new frontiers in mathematical modeling and statistical analysis of oscillatory biomedical signals, motivated by our recent studies of network formation in the human brain during the early stages of life and by studies forty years ago on cardiorespiratory patterns during sleep in infants and animal models. The frontiers involve new nonlinear-type time–frequency analysis of signals with multiple oscillatory components, and efficient particle filters for joint state and parameter estimation, together with uncertainty quantification, in hidden Markov models and empirical Bayes inference.
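
The particle-filter frontier mentioned in the abstract can be illustrated on a toy problem. A minimal bootstrap particle filter for joint state-and-parameter estimation, assuming a linear-Gaussian AR(1) state-space model and the artificial-dynamics trick of jittering the unknown coefficient; this is a sketch under those assumptions, not the authors' method:

```python
import math
import random

def bootstrap_pf(obs, n_particles=500, seed=0):
    # Joint estimation for x_t = a * x_{t-1} + w_t, y_t = x_t + v_t:
    # the unknown coefficient a is appended to each particle and given
    # small artificial dynamics so it can adapt over time.
    rng = random.Random(seed)
    parts = [(rng.gauss(0.0, 1.0), rng.uniform(-1.0, 1.0)) for _ in range(n_particles)]
    for y in obs:
        # Propagate: move the state, jitter the parameter slightly.
        parts = [(a * x + rng.gauss(0.0, 0.5), a + rng.gauss(0.0, 0.01))
                 for x, a in parts]
        # Weight by the Gaussian observation likelihood, then resample.
        w = [math.exp(-0.5 * (y - x) ** 2) for x, _ in parts]
        total = sum(w)
        if total == 0.0:  # guard against total weight underflow
            w, total = [1.0] * len(parts), float(len(parts))
        parts = rng.choices(parts, weights=[wi / total for wi in w], k=n_particles)
    # Posterior mean of the parameter particles approximates E[a | y_1:T].
    return sum(a for _, a in parts) / n_particles

# Simulate data from the model with a_true = 0.8, then filter.
rng = random.Random(1)
a_true, x, ys = 0.8, 0.0, []
for _ in range(150):
    x = a_true * x + rng.gauss(0.0, 0.5)
    ys.append(x + rng.gauss(0.0, 1.0))
a_hat = bootstrap_pf(ys)
print(a_hat)
```

The artificial-dynamics jitter is the simplest device for static parameters in a particle filter; the more efficient schemes the abstract alludes to replace it with smarter proposals and uncertainty quantification.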


Genetics ◽  
2007 ◽  
Vol 177 (2) ◽  
pp. 861-873 ◽  
Author(s):  
Shuichi Kitada ◽  
Toshihide Kitakado ◽  
Hirohisa Kishino

Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1387
Author(s):  
Chi-Ken Lu ◽  
Patrick Shafto

It is desirable to combine the expressive power of deep learning with Gaussian processes (GPs) in one expressive Bayesian learning model. Deep kernel learning showed success by using a deep network for feature extraction and a GP as the function model. Recently, it was suggested that, despite training with the marginal likelihood, the deterministic nature of the feature extractor might lead to overfitting, and that replacing it with a Bayesian network appeared to cure the problem. Here, we propose the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata while the exposed GP remains zero-mean. Motivated by the inducing points in sparse GPs, the hyperdata also play the role of function supports, but are hyperparameters rather than random variables. Following our previous moment-matching approach, we approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood, which depends on the hyperdata implicitly through the kernel. We show equivalence with deep kernel learning in the limit of dense hyperdata in latent space; however, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian than deep kernel learning. Preliminary extrapolation results demonstrate the expressive power gained from the depth of the hierarchy by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference, and deep kernel learning. We also address the non-Gaussian aspect of our model as well as a way of upgrading to full Bayesian inference.

