Omitted Variable Bias in GLMs of Neural Spiking Activity

2018
Vol 30 (12)
pp. 3227-3258
Author(s):  
Ian H. Stevenson

Generalized linear models (GLMs) have a wide range of applications in systems neuroscience describing the encoding of stimulus and behavioral variables, as well as the dynamics of single neurons. However, in any given experiment, many variables that have an impact on neural activity are not observed or not modeled. Here we demonstrate, in both theory and practice, how these omitted variables can result in biased parameter estimates for the effects that are included. In three case studies, we estimate tuning functions for common experiments in motor cortex, hippocampus, and visual cortex. We find that including traditionally omitted variables changes estimates of the original parameters and that modulation originally attributed to one variable is reduced after new variables are included. In GLMs describing single-neuron dynamics, we then demonstrate how postspike history effects can also be biased by omitted variables. Here we find that omitted variable bias can lead to mistaken conclusions about the stability of single-neuron firing. Omitted variable bias can appear in any model with confounders—where omitted variables modulate neural activity and the effects of the omitted variables covary with the included effects. Understanding how and to what extent omitted variable bias affects parameter estimates is likely to be important for interpreting the parameters and predictions of many neural encoding models.
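
As a concrete illustration of the mechanism described above, here is a minimal simulation sketch (not the author's code; the covariate names, coefficients, and the correlation between x and z are invented for illustration). A Poisson GLM is fit with and without a covariate z that covaries with the included covariate x, and the x coefficient absorbs part of z's effect when z is omitted:

```python
# Hypothetical illustration of omitted variable bias in a Poisson GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Included covariate x and an omitted covariate z that covaries with x.
x = rng.normal(size=n)
z = 0.6 * x + 0.8 * rng.normal(size=n)

# True firing-rate model uses both covariates.
beta_0, beta_x, beta_z = -1.0, 0.5, 0.7
rate = np.exp(beta_0 + beta_x * x + beta_z * z)
spikes = rng.poisson(rate)

# Full model: recovers beta_x approximately.
full = sm.GLM(spikes, sm.add_constant(np.column_stack([x, z])),
              family=sm.families.Poisson()).fit()

# Misspecified model: z omitted, so the x coefficient absorbs part of
# z's modulation (omitted variable bias).
omitted = sm.GLM(spikes, sm.add_constant(x),
                 family=sm.families.Poisson()).fit()

print("true beta_x:         ", beta_x)
print("full-model estimate:  %.3f" % full.params[1])
print("omitted-z estimate:   %.3f" % omitted.params[1])  # noticeably inflated
```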


2021
Vol 40 (9)
pp. 646-654
Author(s):  
Henning Hoeber

When inversions use incorrectly specified models, the estimated least-squares model parameters are biased. Their expected values are not the true underlying quantitative parameters being estimated. This means the least-squares model parameters cannot be compared to the equivalent values from forward modeling. In addition, the bias propagates into other quantities, such as elastic reflectivities in amplitude variation with offset (AVO) analysis. I outline the framework for analyzing bias provided by the theory of omitted variable bias (OVB). I use OVB to calculate exactly the bias due to model misspecification in linearized isotropic two-term AVO. The resulting equations can be used to forward model unbiased AVO quantities, using the least-squares fit results, the weights given by OVB analysis, and the omitted variables. I show how uncertainty due to bias propagates into derived quantities, such as the χ-angle and elastic reflectivity expressions. The result can be used to build tables of unique relative rock property relationships for any AVO model, which replace the unbiased forward-model results.
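
For reference, the bias referred to above follows the standard OVB relation E[b_reduced] = beta + (X'X)^{-1} X'Z gamma. The sketch below (angles, coefficients, and the two-term versus three-term split are invented and not taken from the paper) checks that relation numerically for a schematic two-term fit with a third term omitted:

```python
# Generic numerical check of the omitted-variable-bias weights (not the
# paper's AVO workflow; all values below are invented for illustration).
import numpy as np

theta = np.deg2rad(np.arange(0, 41, 5))          # incidence angles
s2 = np.sin(theta) ** 2

X = np.column_stack([np.ones_like(theta), s2])   # two-term design (included)
Z = (s2 * np.tan(theta) ** 2)[:, None]           # omitted third term

beta = np.array([0.05, -0.10])                   # intercept, gradient
gamma = np.array([0.04])                         # curvature (omitted)
y = X @ beta + Z @ gamma                         # noise-free reflectivities

# Least-squares fit of the misspecified two-term model.
b_reduced, *_ = np.linalg.lstsq(X, y, rcond=None)

# OVB prediction of the same biased coefficients.
weights = np.linalg.solve(X.T @ X, X.T @ Z)      # (X'X)^{-1} X'Z
b_predicted = beta + weights @ gamma

print(b_reduced)      # biased intercept/gradient from the fit
print(b_predicted)    # identical values from the OVB formula
```

Because the synthetic reflectivities are noise-free, the fitted and OVB-predicted coefficients agree exactly; with noise they agree in expectation.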


2018
Vol 48 (2)
pp. 431-447
Author(s):  
Cristobal Young

The commenter’s proposal may be a reasonable method for addressing uncertainty in predictive modeling, where the goal is to predict y. In a treatment effects framework, where the goal is causal inference by conditioning-on-observables, the commenter’s proposal is deeply flawed. The proposal (1) ignores the definition of omitted-variable bias, thus systematically omitting critical kinds of controls; (2) assumes for convenience there are no bad controls in the model space, thus waving off the premise of model uncertainty; and (3) deletes virtually all alternative models to select a single model with the highest R². Rather than showing what model assumptions are necessary to support one’s preferred results, this proposal favors biased parameter estimates and deletes alternative results before anyone has a chance to see them. In a treatment effects framework, this is not model robustness analysis but simply biased model selection.


1992
Vol 17 (1)
pp. 51-74
Author(s):
Clifford C. Clogg
Eva Petkova
Edward S. Shihadeh

We give a unified treatment of statistical methods for assessing collapsibility in regression problems, including some possible extensions to the class of generalized linear models. Terminology is borrowed from the contingency table area, where various methods for assessing collapsibility have been proposed. Our procedures, however, can be motivated by considering extensions, and alternative derivations, of common procedures for omitted-variable bias in linear regression. Exact tests and interval estimates with optimal properties are available for linear regression with normal errors, and asymptotic procedures follow for models with estimated weights. The methods given here can be used to compare β1 and β2 in the common setting where the response function is first modeled as Xβ1 (reduced model) and then as Xβ2 + Zγ (full model), with Z a vector of covariates omitted from the reduced model. These procedures can be used in experimental settings (X = randomly assigned treatments, Z = covariates) or in nonexperimental settings where two models viewed as alternative behavioral or structural explanations are compared (one model with X only, another model with X and Z).
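
A small worked example of the reduced-versus-full comparison described above (simulated data; this is not the authors' test procedure, only the exact least-squares identity that motivates it):

```python
# Reduced model y ~ X versus full model y ~ X + Z: the change in the X
# coefficient equals the full-model coefficient on Z times the slope from
# regressing Z on X (exact for least squares with intercepts included).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
z = 0.5 * x + rng.normal(size=n)
y = 1.0 + 2.0 * x + 1.5 * z + rng.normal(size=n)

reduced = sm.OLS(y, sm.add_constant(x)).fit()                       # y ~ X
full = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()    # y ~ X + Z
aux = sm.OLS(z, sm.add_constant(x)).fit()                           # Z ~ X

beta1_hat = reduced.params[1]
beta2_hat, gamma_hat = full.params[1], full.params[2]

print(beta1_hat - beta2_hat)          # shift in the X coefficient
print(aux.params[1] * gamma_hat)      # identical, by the OVB identity
```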


2007
Vol 7 (1)
pp. 149-158
Author(s):  
Allen Hicken

I have written elsewhere: “Where there exists a critical mass of scholars working on similar sets of questions—critiquing and building on one another's work—knowledge accumulation is more likely to occur.” It is with this statement in mind that I proceed with my response to Michael Nelson's thoughtful critique of my previous article (see Allen Hicken, “Party Fabrication: Constitutional Reform and the Rise of Thai Rack Thai,” Journal of East Asian Studies 6, no. 3 [2006]: 381–407). Rather than a point-by-point rebuttal, I will focus on three of the most interesting and challenging of Nelson's theoretical critiques. The first substantive issue concerns the charge of omitted variable bias—specifically, in reference to the omission of local political groups from a macro-institutional account. The second and third criticisms are more methodological. First, can we or should we ascribe motives to political actors? Second, how can we use counterfactuals to solve problems of observational equivalence?


10.3982/qe689
2019
Vol 10 (4)
pp. 1619-1657
Author(s):  
Karim Chalak

This paper studies measuring various average effects of X on Y in general structural systems with unobserved confounders U, a potential instrument Z, and a proxy W for U. We do not require X or Z to be exogenous given the covariates or W to be a perfect one‐to‐one mapping of U. We study the identification of coefficients in linear structures as well as covariate‐conditioned average nonparametric discrete and marginal effects (e.g., average treatment effect on the treated), and local and marginal treatment effects. First, we characterize the bias, due to the omitted variables U, of (nonparametric) regression and instrumental variables estimands, thereby generalizing the classic linear regression omitted variable bias formula. We then study the identification of the average effects of X on Y when U may statistically depend on X and Z. These average effects are point identified if the average direct effect of U on Y is zero, in which case exogeneity holds, or if W is a perfect proxy, in which case the ratio (contrast) of the average direct effect of U on Y to the average effect of U on W is also identified. More generally, restricting how the average direct effect of U on Y compares in magnitude and/or sign to the average effect of U on W can partially identify the average effects of X on Y. These restrictions on confounding are weaker than requiring benchmark assumptions, such as exogeneity or a perfect proxy, and enable a sensitivity analysis. After discussing estimation and inference, we apply this framework to study earnings equations.
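
The two benchmark cases mentioned in the abstract, bias from the unobserved confounder U and point identification when W is a perfect proxy, can be illustrated with a stylized linear simulation (all coefficients below are invented; this is not the paper's empirical application):

```python
# Stylized linear system: X depends on the unobserved confounder U, so OLS of
# Y on X is biased; conditioning on a perfect proxy W for U restores point
# identification of the effect of X on Y.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100_000

u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * u + rng.normal(size=n)              # X depends on U
y = 1.0 * x + 2.0 * u + rng.normal(size=n)    # true effect of X on Y is 1.0
w = 3.0 * u                                   # perfect (noise-free) proxy for U

naive = sm.OLS(y, sm.add_constant(x)).fit()
proxy = sm.OLS(y, sm.add_constant(np.column_stack([x, w]))).fit()

print(naive.params[1])   # biased upward, roughly 1 + 2*cov(x,u)/var(x)
print(proxy.params[1])   # close to the true effect 1.0
```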


1995
Vol 10 (4)
pp. 719-749
Author(s):
Anne Beatty
Sandra Chamberlain
Joseph Magliolo

A number of studies have examined the correlation between financial statement disclosures and share prices to assess the informativeness of these disclosures. There are several potential econometric problems with analyses of this type, and the interpretations of the results depend critically on the type of econometric problem. For example, the results of these studies should not be used to answer accounting policy questions unless the effect of an omitted variable bias is likely to be minimal. Given potential interpretation problems, we argue that analysis of model misspecification should be performed to isolate the form of misspecification. The contribution of this paper is to suggest a series of tests to perform this task. We use these tests to assess the importance of misspecification in adaptations of Barth's (1994) investment securities valuation model and Beaver et al.'s (1989) model of loan loss valuation. We find compelling evidence of the importance of misspecification apart from measurement error (e.g., omitted variables) in the model of investment securities valuation, but find only weak evidence of any misspecification other than measurement error in the loan loss valuation model.
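
The paper's own tests are not reproduced here. As a generic illustration of the kind of specification diagnostic at issue, the following sketch applies a standard RESET-style check to simulated data with an omitted variable (a textbook example, not the authors' procedure):

```python
# RESET-style specification check on simulated data where a nonlinear,
# correlated variable z is omitted from the fitted model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
z = 0.5 * x**2 + rng.normal(size=n)              # omitted, related to x
y = 1.0 + 2.0 * x + 1.0 * z + rng.normal(size=n)

base = sm.OLS(y, sm.add_constant(x)).fit()

# Add squared fitted values as an extra regressor; a significant coefficient
# signals misspecification (omitted variables or wrong functional form).
aug = sm.OLS(y, sm.add_constant(
    np.column_stack([x, base.fittedvalues**2]))).fit()
print(aug.pvalues[2])    # small p-value -> evidence of misspecification
```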

