Abstract

Model-averaged regression coefficients have been criticized for averaging over a set of models whose parameters have different meanings from model to model. This criticism arises from confusion between two different parameters that the coefficients of a statistical model can estimate.

Ever since Fisher, the textbook definition of a coefficient (a "difference in conditional means") has taken its meaning from probabilistic conditioning, P(Y|X). Because a parameter defined by probabilistic conditioning is conditional on a specific set of covariates, its meaning varies from model to model.

The coefficients in many applied statistical models, however, take their meaning from causal conditioning, P(Y|do(X)), and these coefficients estimate causal effect parameters (or simply, causal effects or Average Treatment Effects). Causal effect parameters are also differences in conditional expectations, but the event conditioned on is not the set of covariates in a statistical model but a hypothetical intervention. Because an effect parameter takes its meaning from causal rather than probabilistic conditioning, it is the same from model to model, and an averaged coefficient has a straightforward interpretation as an estimate of a causal effect.

Because an effect parameter is the same from model to model, the estimates of that parameter will generally be biased in at least some of the models. By contrast, with probabilistic conditioning, the coefficients are consistent estimates of their parameter in every model, but the parameter differs from model to model. Confounding and omitted-variable bias, which are central to explanatory modeling, are meaningless when statistical modeling is mere description.

The argument developed here addresses only the "different parameters" criticism of model-averaged coefficients; it does not advocate model averaging more generally.
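As a minimal formal sketch of the distinction (the symbols $Y$, $X$, $Z_m$, $\beta_m$, $w_m$, and $\tau$ are introduced here for illustration and are not notation from the text): under probabilistic conditioning, the coefficient of $X$ in model $m$ with covariate set $Z_m$ estimates

\[
\beta_m \;=\; \mathbb{E}\!\left[\,Y \mid X = x + 1,\; Z_m\,\right] \;-\; \mathbb{E}\!\left[\,Y \mid X = x,\; Z_m\,\right],
\]

a parameter indexed by $Z_m$ and therefore different in each model. Under causal conditioning, every model's coefficient of $X$ targets the single intervention-defined parameter

\[
\tau \;=\; \mathbb{E}\!\left[\,Y \mid do(X = x + 1)\,\right] \;-\; \mathbb{E}\!\left[\,Y \mid do(X = x)\,\right],
\]

so a model-averaged coefficient $\sum_m w_m \hat{\beta}_m$ (with model weights $w_m$) is coherent as an estimate of the one parameter $\tau$, whereas under probabilistic conditioning the same average combines estimates of different estimands.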