About the Multimodality of the Likelihood Function when Estimating the Variance Components in a One-Way Classification by Means of the ML or REML Method

Author(s):  
V. Guiard
2020
Author(s):  
Muhammad Ammar Malik ◽  
Tom Michoel

Abstract: Linear mixed modelling is a popular approach for detecting and correcting spurious sample correlations due to hidden confounders in genome-wide gene expression data. In applications where some confounding factors are known, estimating simultaneously the contribution of known and latent variance components in linear mixed models is a challenge that has so far relied on numerical gradient-based optimizers to maximize the likelihood function. This is unsatisfactory because the resulting solution is poorly characterized and the efficiency of the method may be suboptimal. Here we prove analytically that maximum-likelihood latent variables can always be chosen orthogonal to the known confounding factors, in other words, that maximum-likelihood latent variables explain sample covariances not already explained by known factors. Based on this result we propose a restricted maximum-likelihood method which estimates the latent variables by maximizing the likelihood on the restricted subspace orthogonal to the known confounding factors, and show that this reduces to probabilistic PCA on that subspace. The method then estimates the variance-covariance parameters by maximizing the remaining terms in the likelihood function given the latent variables, using a newly derived analytic solution for this problem. Compared to gradient-based optimizers, our method attains equal or higher likelihood values, can be computed using standard matrix operations, results in latent factors that don’t overlap with any known factors, and has a runtime reduced by several orders of magnitude. We anticipate that the restricted maximum-likelihood method will facilitate the application of linear mixed modelling strategies for learning latent variance components to much larger gene expression datasets than currently possible.
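
As a purely illustrative reading of the projection argument above, the sketch below builds the projector onto the orthogonal complement of the known confounders and extracts latent factors by ordinary PCA on the projected data, used here as a simple stand-in for probabilistic PCA; the function name, dimensions and random inputs are invented for the example and are not the authors' implementation.

```python
# Minimal sketch: latent factors restricted to the subspace orthogonal to the
# known confounders Z, obtained by PCA on the projected data.
import numpy as np

def latent_factors_orthogonal_to_known(Y, Z, n_latent):
    # Projector onto the orthogonal complement of span(Z)
    Q, _ = np.linalg.qr(Z)                   # orthonormal basis of span(Z)
    P_perp = np.eye(Y.shape[0]) - Q @ Q.T    # I - Q Q^T
    Y_res = P_perp @ Y                       # residual data, orthogonal to Z
    # PCA on the residual sample covariance: top eigenvectors = latent factors
    C = Y_res @ Y_res.T / Y.shape[1]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1][:n_latent]
    return eigvecs[:, order], eigvals[order]

# Hypothetical usage with random data and made-up dimensions
rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 200))   # 50 samples, 200 genes
Z = rng.standard_normal((50, 3))     # 3 known confounders
X, lam = latent_factors_orthogonal_to_known(Y, Z, n_latent=5)
print(np.abs(Z.T @ X).max())         # ~0: latent factors orthogonal to Z
```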


Author(s):  
Muhammad Ammar Malik ◽  
Tom Michoel

Abstract: Random effects models are popular statistical models for detecting and correcting spurious sample correlations due to hidden confounders in genome-wide gene expression data. In applications where some confounding factors are known, estimating simultaneously the contribution of known and latent variance components in random effects models is a challenge that has so far relied on numerical gradient-based optimizers to maximize the likelihood function. This is unsatisfactory because the resulting solution is poorly characterized and the efficiency of the method may be suboptimal. Here we prove analytically that maximum-likelihood latent variables can always be chosen orthogonal to the known confounding factors, in other words, that maximum-likelihood latent variables explain sample covariances not already explained by known factors. Based on this result we propose a restricted maximum-likelihood method which estimates the latent variables by maximizing the likelihood on the restricted subspace orthogonal to the known confounding factors, and show that this reduces to probabilistic PCA on that subspace. The method then estimates the variance-covariance parameters by maximizing the remaining terms in the likelihood function given the latent variables, using a newly derived analytic solution for this problem. Compared to gradient-based optimizers, our method attains greater or equal likelihood values, can be computed using standard matrix operations, results in latent factors that don’t overlap with any known factors, and has a runtime reduced by several orders of magnitude. Hence the restricted maximum-likelihood method facilitates the application of random effects modelling strategies for learning latent variance components to much larger gene expression datasets than possible with current methods.
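
For the second step, fitting the variance parameters with the latent variables held fixed, the paper derives an analytic solution that is not reproduced here. The sketch below only makes the objective concrete, assuming a covariance model of the form K = s_z ZZᵀ + s_x XXᵀ + s_e I and maximizing the Gaussian log-likelihood numerically with scipy; all names, dimensions and data are placeholders.

```python
# Generic illustration of the variance-component objective given fixed latent
# factors X and known confounders Z; not the paper's analytic solution.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(log_s, Y, Z, X):
    # Positivity enforced through a log-parametrization of the variances
    s_z, s_x, s_e = np.exp(log_s)
    n, p = Y.shape
    K = s_z * Z @ Z.T + s_x * X @ X.T + s_e * np.eye(n)
    _, logdet = np.linalg.slogdet(K)
    Kinv_Y = np.linalg.solve(K, Y)
    # Sum of per-gene Gaussian log-likelihoods, up to an additive constant
    return 0.5 * (p * logdet + np.sum(Y * Kinv_Y))

# Toy usage with random placeholders for Y (data), Z (known), X (latent)
rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 200))
Z = rng.standard_normal((50, 3))
X = rng.standard_normal((50, 5))
res = minimize(neg_log_likelihood, x0=np.zeros(3), args=(Y, Z, X))
print(np.exp(res.x))   # fitted variance components (s_z, s_x, s_e)
```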


Author(s):  
Antara Dasgupta ◽  
Renaud Hostache ◽  
RAAJ Ramasankaran ◽  
Guy J.‐P Schumann ◽  
Stefania Grimaldi ◽  
...  

Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
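
As background for the phrase "particle filter approximations of the likelihood function", the following is a minimal bootstrap particle filter for a generic state-space model; the transition and observation functions and the toy data are placeholders rather than a DSGE specification, and the code is not drawn from the book.

```python
# Minimal bootstrap particle filter: the likelihood of the data is approximated
# by averaging the incremental particle weights at each time step.
import numpy as np

def particle_filter_loglik(y, transition, obs_density, n_particles=1000, rng=None):
    rng = rng or np.random.default_rng()
    particles = rng.standard_normal(n_particles)   # assumed initial distribution
    loglik = 0.0
    for y_t in y:
        particles = transition(particles, rng)     # propagate through the state equation
        w = obs_density(y_t, particles)            # incremental weights p(y_t | particle)
        loglik += np.log(w.mean())                 # likelihood contribution at time t
        w = w / w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
        particles = particles[idx]
    return loglik

# Toy example: linear-Gaussian model (where the Kalman filter would be exact)
rho, sigma_e, sigma_u = 0.9, 0.5, 1.0
transition = lambda x, rng: rho * x + sigma_e * rng.standard_normal(x.size)
obs_density = lambda y_t, x: np.exp(-0.5 * ((y_t - x) / sigma_u) ** 2) / (sigma_u * np.sqrt(2 * np.pi))
y = np.cumsum(np.random.default_rng(1).standard_normal(100)) * 0.1   # toy observations
print(particle_filter_loglik(y, transition, obs_density))
```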


Author(s):  
T. V. Oblakova

The paper studies the justification of Pearson's criterion for testing the hypothesis that the population follows a uniform distribution. If the distribution parameters are unknown, estimates of the theoretical frequencies are used [1, 2, 3]. In this case the quantile of the chi-square distribution, with the number of degrees of freedom reduced by the number of estimated parameters, is used to determine the upper threshold of the acceptance region for the main hypothesis [7]. However, in the case of a uniform law, this form of Pearson's criterion does not extend to composite hypotheses, since the likelihood function cannot be differentiated with respect to the parameters, which is required in the proof of the theorem mentioned [7, 10, 11]. A statistical experiment is proposed in order to study the distribution of Pearson's statistic for samples from a uniform law. The essence of the experiment is that first a statistically significant number of samples of the same type is generated from a given uniform distribution, then Pearson's statistic is calculated for each sample, and finally the distribution law of the resulting collection of statistics is studied. Modelling and processing of the samples were performed in the Mathcad 15 package using the built-in random number generator and array-processing facilities. In all the experiments carried out, the hypothesis that Pearson's statistic follows the chi-square law was accepted unambiguously (confidence level 0.95). It is also shown statistically that the number of degrees of freedom need not be corrected in the case of a composite hypothesis. That is, the maximum-likelihood estimates of the uniform law's parameters, which are implicitly used in calculating Pearson's statistic, do not affect the number of degrees of freedom, which is therefore determined by the number of grouping intervals only.
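
The experiment described above can be sketched as follows, using Python instead of the Mathcad 15 setup from the paper; the sample size, number of grouping intervals and uniform-law parameters are arbitrary choices, and the final check compares the simulated statistics with the chi-square law with k − 1 (uncorrected) degrees of freedom.

```python
# Sketch of the simulation: draw many samples from a uniform law, compute
# Pearson's statistic with interval boundaries taken from the maximum-likelihood
# estimates (sample min and max), and compare the resulting statistics with
# the chi-square law with k - 1 degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, n_rep = 200, 10, 5000          # sample size, grouping intervals, replications
a_true, b_true = 2.0, 7.0            # parameters of the uniform law (arbitrary)

chi2_stats = np.empty(n_rep)
for r in range(n_rep):
    x = rng.uniform(a_true, b_true, size=n)
    a_hat, b_hat = x.min(), x.max()                  # MLEs of the uniform parameters
    edges = np.linspace(a_hat, b_hat, k + 1)
    observed, _ = np.histogram(x, bins=edges)
    expected = np.full(k, n / k)                     # equal theoretical frequencies
    chi2_stats[r] = np.sum((observed - expected) ** 2 / expected)

# Kolmogorov-Smirnov check against the chi-square law with k - 1 degrees of freedom
print(stats.kstest(chi2_stats, stats.chi2(df=k - 1).cdf))
```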


2017
Author(s):  
Darren Rhodes

Time is a fundamental dimension of human perception, cognition and action, as the perception and cognition of temporal information is essential for everyday activities and survival. Innumerable studies have investigated the perception of time over the last 100 years, but the neural and computational bases for the processing of time remain unknown. First, we present a brief history of time perception research and the methods used, and then discuss the psychophysical approach to time, extant models of time perception, and the inconsistencies between these accounts that this review aims to bridge. Recent work has advocated a Bayesian approach to time perception. This framework has been applied to both duration and perceived timing, where prior expectations about when a stimulus might occur in the future (prior distribution) are combined with current sensory evidence (likelihood function) in order to generate the perception of temporal properties (posterior distribution). In general, these models predict that the brain uses temporal expectations to bias perception such that stimuli are ‘regularized’, i.e. they look more like what has been seen before. Evidence for this framework has been found using human psychophysical testing (experimental methods to quantify behaviour in the perceptual system). Finally, an outlook for how these models can advance future research in temporal perception is discussed.
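
For durations with a Gaussian prior and a Gaussian likelihood centred on the sensory measurement m, the textbook form of this combination makes the ‘regularization’ explicit: the posterior estimate is a precision-weighted average pulled from m toward the prior mean μ_p (the symbols are generic and not tied to any specific model in the review).

```latex
% Posterior for a Gaussian prior N(\mu_p, \sigma_p^2) and a Gaussian likelihood
% centred on the sensory measurement m with variance \sigma_l^2:
\[
\hat{t}_{\mathrm{post}}
  = \frac{\sigma_l^{2}\,\mu_p + \sigma_p^{2}\, m}{\sigma_p^{2} + \sigma_l^{2}},
\qquad
\sigma_{\mathrm{post}}^{2} = \frac{\sigma_p^{2}\,\sigma_l^{2}}{\sigma_p^{2} + \sigma_l^{2}}.
\]
```

The smaller the likelihood variance is relative to the prior variance, the less the estimate is biased toward the prior mean, matching the intuition that reliable sensory evidence reduces regularization.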

