Posterior contraction in sparse generalized linear models

Biometrika ◽  
2020 ◽  
Author(s):  
Seonghyun Jeong ◽  
Subhashis Ghosal

Summary: We study posterior contraction rates in sparse high-dimensional generalized linear models using priors that incorporate sparsity. A mixture of a point mass at zero and a continuous distribution is used as the prior on the regression coefficients. In addition to the usual posterior, we also consider the fractional posterior, obtained by applying Bayes' theorem with a fractional power of the likelihood. The latter allows uniformity in posterior contraction over a larger subset of the parameter space. In our set-up, the link function of the generalized linear model need not be canonical. We show that Bayesian methods achieve convergence properties analogous to those of lasso-type procedures. Our results can be used to derive posterior contraction rates in many generalized linear models, including logistic and Poisson regression.
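The prior and the fractional posterior described above can be illustrated with a toy computation. The sketch below, with entirely assumed data, grid values, mixing weight `w`, and slab scale, evaluates an unnormalised fractional posterior for a single logistic-regression coefficient under a spike-and-slab prior on a discrete grid; it is a minimal illustration, not the paper's procedure.

```python
import math

def log_lik(beta, xs, ys):
    """Logistic log-likelihood for one coefficient (no intercept)."""
    total = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-beta * x))
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return total

def fractional_posterior(grid, xs, ys, alpha=0.5, w=0.5, scale=1.0):
    """Posterior on a grid with the likelihood raised to the power alpha.

    Prior: point mass at 0 with weight w, Laplace(scale) slab otherwise.
    Setting alpha = 1 recovers the usual posterior.
    """
    weights = []
    for b in grid:
        if b == 0.0:
            log_prior = math.log(w)
        else:
            log_prior = math.log(1 - w) - abs(b) / scale - math.log(2 * scale)
        weights.append(math.exp(alpha * log_lik(b, xs, ys) + log_prior))
    z = sum(weights)
    return [wt / z for wt in weights]

# Illustrative data and coefficient grid (assumed, not from the article).
xs = [-1.0, -0.5, 0.5, 1.0, 1.5]
ys = [0, 0, 1, 1, 1]
grid = [0.0, 0.5, 1.0, 2.0]
post = fractional_posterior(grid, xs, ys, alpha=0.5)
```

Tempering with alpha < 1 flattens the likelihood contribution, which is the mechanism behind the broader uniformity of contraction mentioned in the abstract.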

2007 ◽  
Vol 89 (4) ◽  
pp. 245-257 ◽  
Author(s):  
Dörte Wittenburg ◽  
Volker Guiard ◽  
Friedrich Liese ◽  
Norbert Reinsch

Summary: Quantitative trait loci (QTLs) may affect not only the mean of a trait but also its variability. A special aspect is the variability between multiple measured traits of genotyped animals, such as the within-litter variance of piglet birth weights. The sample variance of repeated measurements is assigned as an observation for every genotyped individual. It is shown that the conditional distribution of the non-normally distributed trait can be approximated by a gamma distribution. To detect QTL effects in the daughter design, a generalized linear model with the identity link function is applied. Suitable test statistics are constructed to test the null hypothesis H0 (no QTL with an effect on the within-litter variance is segregating) against the alternative HA (a QTL with an effect on the variability of birth weight within litter is segregating). Furthermore, estimates of the QTL effect and the QTL position are introduced and discussed. The efficiency of the presented tests is compared with that of a test based on weighted regression. The type I error probability and the power of QTL detection are discussed and compared for the different tests.
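The gamma approximation for sample variances has a simple textbook basis: for normally distributed measurements, (n−1)S²/σ² is chi-squared with n−1 degrees of freedom, so S² itself is gamma with shape (n−1)/2 and scale 2σ²/(n−1). The simulation below (with assumed values for σ² and n, not the article's data or design) checks this by matching the first two moments:

```python
import random
import statistics

random.seed(42)
sigma2 = 4.0        # true within-litter variance (assumed value)
n = 10              # repeated measurements per individual ("litter size")
n_individuals = 5000

# One sample variance per "individual", as in the observation scheme above.
sample_vars = []
for _ in range(n_individuals):
    obs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    sample_vars.append(statistics.variance(obs))  # unbiased S^2

# Gamma moments implied by (n-1)*S^2/sigma^2 ~ chi-squared(n-1).
shape = (n - 1) / 2
scale = 2 * sigma2 / (n - 1)
print(statistics.mean(sample_vars), shape * scale)           # both near sigma2
print(statistics.variance(sample_vars), shape * scale ** 2)  # both near 2*sigma2^2/(n-1)
```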


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Rasaki Olawale Olanrewaju

A gamma-distributed response is subjected to penalized likelihood estimation with the Least Absolute Shrinkage and Selection Operator (LASSO) and the Minimax Concave Penalty (MCP) via Generalized Linear Models (GLMs). The gamma-related disturbance controls the influence of skewness and spread in the corrected path solutions of the regression coefficients.
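The two penalties named above differ in how they treat large coefficients. A minimal sketch of the penalty functions themselves (the `lam` and `gamma` values are illustrative, not taken from the article):

```python
def lasso_penalty(beta, lam):
    """LASSO penalty: lam * |beta| (constant shrinkage at all magnitudes)."""
    return lam * abs(beta)

def mcp_penalty(beta, lam, gamma=3.0):
    """Minimax concave penalty: matches the LASSO slope near zero, then
    flattens, so large coefficients suffer less shrinkage bias."""
    t = abs(beta)
    if t <= gamma * lam:
        return lam * t - t * t / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

lam = 1.0
# Near zero the two penalties nearly agree; far from zero MCP is constant.
print(lasso_penalty(0.1, lam), mcp_penalty(0.1, lam))    # 0.1 vs ~0.0983
print(lasso_penalty(10.0, lam), mcp_penalty(10.0, lam))  # 10.0 vs 1.5
```

In a GLM fit these penalties are added to the negative gamma log-likelihood and the objective is minimised along a path of `lam` values, yielding the path solutions mentioned in the abstract.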


2020 ◽  
Vol 18 (1) ◽  
pp. 2-15
Author(s):  
Thomas J. Smith ◽  
David A. Walker ◽  
Cornelius M. McKenna

The purpose of this study is to examine issues involved in the choice of a link function in generalized linear models with ordinal outcomes, including distributional appropriateness, link specificity, and palindromic invariance. An exemplar analysis is provided using the Pew Research Center 25th anniversary of the Web Omnibus Survey data. Simulated data are used to compare the relative palindromic invariance of four distinct indices of determination/discrimination, including an index newly proposed by Smith et al. (2017).
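One ingredient of palindromic invariance can be checked numerically: a link g is symmetric when g(1 − p) = −g(p), so reversing the ordinal category order merely flips signs. The logit link passes this check; the complementary log-log link does not, which is one reason fits under cloglog change when categories are reversed. A small illustration with an assumed probability value:

```python
import math

def logit(p):
    """Logit link: symmetric about p = 0.5."""
    return math.log(p / (1.0 - p))

def cloglog(p):
    """Complementary log-log link: asymmetric."""
    return math.log(-math.log(1.0 - p))

p = 0.2
print(logit(1 - p), -logit(p))      # equal: symmetric link
print(cloglog(1 - p), -cloglog(p))  # unequal: asymmetric link
```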


1999 ◽  
Vol 11 (5) ◽  
pp. 1183-1198 ◽  
Author(s):  
Wenxin Jiang ◽  
Martin A. Tanner

We investigate a class of hierarchical mixtures-of-experts (HME) models in which generalized linear models with nonlinear mean functions of the form ψ(α + xᵀβ) are mixed. Here ψ(·) is the inverse link function. It is shown that mixtures of such mean functions can approximate a class of smooth functions of the form ψ(h(x)), where h(·) ∈ W^∞_{2;K} (a Sobolev class over [0, 1]^s), as the number of experts m in the network increases. An upper bound on the approximation rate is given as O(m^{-2/s}) in the L_p norm. This rate can be achieved within the family of HME structures with no more than s layers, where s is the dimension of the predictor x.
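The mixed mean functions described above can be sketched for the one-layer case: softmax gates weight each expert's ψ(α_j + β_j·x). The logistic choice of inverse link and all parameter values below are illustrative assumptions, not the paper's construction.

```python
import math

def psi(eta):
    """Inverse link (logistic, as one possible choice)."""
    return 1.0 / (1.0 + math.exp(-eta))

def moe_mean(x, gates, experts):
    """One-layer mixture of GLM experts.

    gates:   list of (a_j, b_j) producing softmax gating weights;
    experts: list of (alpha_j, beta_j) for the expert mean functions.
    """
    scores = [math.exp(a + b * x) for a, b in gates]
    z = sum(scores)
    return sum((s / z) * psi(alpha + beta * x)
               for s, (alpha, beta) in zip(scores, experts))

gates = [(0.0, -2.0), (0.0, 2.0)]    # assumed gating parameters
experts = [(-1.0, 0.5), (1.0, 1.5)]  # assumed expert parameters
y = moe_mean(0.3, gates, experts)
```

Since the gating weights sum to one and each expert output lies in (0, 1) under the logistic ψ, the mixture output also lies in (0, 1); increasing the number of experts refines the approximation of ψ(h(x)) at the O(m^{-2/s}) rate stated above.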


2019 ◽  
Author(s):  
Kenneth W. Latimer ◽  
Adrienne L. Fairhall

Abstract: Single neurons can dynamically change the gain of their spiking responses to account for shifts in stimulus variance. Moreover, gain adaptation can occur across multiple timescales. Here, we examine the ability of a simple statistical model of spike trains, the generalized linear model (GLM), to account for these adaptive effects. The GLM describes spiking as a Poisson process whose rate depends on a linear combination of the stimulus and recent spike history. The GLM successfully replicates the gain scaling observed in Hodgkin-Huxley simulations of cortical neurons, which occurs when the ratio of spike-generating potassium and sodium conductances approaches one. Gain scaling in the GLM depends on the length and shape of the spike-history filter. Additionally, the GLM captures adaptation that occurs over multiple timescales as a fractional derivative of the stimulus variance, which has been observed in neurons that include long-timescale afterhyperpolarization conductances. Fractional differentiation in GLMs requires spike-history filters that span several seconds. Together, these results demonstrate that the GLM provides a tractable statistical approach for examining single-neuron adaptive computations in response to changes in stimulus variance.
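The Poisson GLM described above has a compact form: the rate in each time bin is the exponential of a baseline plus stimulus and spike-history drives. A minimal sketch with illustrative filters and inputs (in practice these would be fit to recorded data):

```python
import math

def glm_rates(stimulus, spikes, k, h, b=0.0):
    """Conditional intensity of a Poisson GLM.

    k: stimulus filter, h: spike-history filter, b: baseline log-rate.
    Returns exp(b + k*stimulus + h*spike_history) per time bin.
    """
    rates = []
    for t in range(len(stimulus)):
        stim_drive = sum(k[i] * stimulus[t - i]
                         for i in range(len(k)) if t - i >= 0)
        hist_drive = sum(h[i] * spikes[t - 1 - i]
                         for i in range(len(h)) if t - 1 - i >= 0)
        rates.append(math.exp(b + stim_drive + hist_drive))
    return rates

# Assumed toy inputs: a negative history filter suppresses the rate after
# a spike, the ingredient that shapes gain scaling in the abstract above.
stimulus = [0.0, 1.0, 0.5, -0.5, 0.0]
spikes = [0, 1, 0, 0, 0]
rates = glm_rates(stimulus, spikes, k=[1.0, 0.5], h=[-2.0], b=-1.0)
```

Fractional differentiation of the stimulus variance, as discussed above, emerges when `h` is long enough to span several seconds of history rather than the single bin used here.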


2021 ◽  
pp. 195-208
Author(s):  
Andy Hector

This chapter revisits a regression analysis to explore the normal least squares assumption of approximately equal variance. It also considers some of the data transformations that can be used to achieve this. A linear regression of transformed data is compared with a generalized linear-model equivalent that avoids transformation by using a link function and non-normal distributions. Generalized linear models based on maximum likelihood use a link function to model the mean (in this case a square-root link) and a variance function to model the variability (in this case the gamma distribution, where the variance increases as the square of the mean). The Box–Cox family of transformations is explained in detail.
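The Box–Cox family discussed above can be written as a small self-contained function: (yᵏ − 1)/k for k ≠ 0, with the log transform as the continuous limit at k = 0. The example values below are illustrative.

```python
import math

def box_cox(y, lam):
    """Box-Cox transformation; requires positive y.

    lam = 1 leaves the data shifted but untransformed, lam = 0.5 acts like a
    square-root transform, and lam = 0 is the log transform (the limit).
    """
    if y <= 0:
        raise ValueError("Box-Cox requires positive y")
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

print(box_cox(4.0, 1.0))  # 3.0
print(box_cox(4.0, 0.5))  # 2.0
print(box_cox(4.0, 0.0))  # log(4), about 1.386
```

Choosing the power that best stabilises the variance plays the same role as choosing the link and variance function in the generalized linear model alternative described above.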

