Multilevel Analysis with Few Clusters: Improving Likelihood-Based Methods to Provide Unbiased Estimates and Accurate Inference

Author(s):  
Martin Elff ◽  
Jan Paul Heisig ◽  
Merlin Schaeffer ◽  
Susumu Shikano

Abstract: Quantitative comparative social scientists have long worried about the performance of multilevel models when the number of upper-level units is small. Adding to these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that standard maximum-likelihood (ML) methods yield biased point estimates and severely anti-conservative inference with few upper-level units. In this article, the authors seek to rectify this negative assessment. First, they show that ML estimators of coefficients are unbiased in linear multilevel models. The apparent bias in coefficient estimates found by Stegmueller can be attributed to Monte Carlo error and a flaw in the design of his simulation study. Second, they demonstrate how inferential problems can be overcome by using restricted ML estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible within the framework that most practitioners are familiar with, even if there are only a few upper-level units.
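
The recommended workflow can be sketched in code. The following is a minimal illustration (not the authors' own code): fit the model by restricted maximum likelihood and test an upper-level coefficient against a t-distribution whose degrees of freedom depend on the number of clusters. The variable names, the statsmodels-based implementation, and the particular df = (clusters − upper-level predictors − 1) choice are assumptions for illustration; the abstract only specifies REML plus a t-distribution with appropriate degrees of freedom.

```python
import statsmodels.formula.api as smf
from scipy import stats

def upper_level_t_test(data, formula, group_col, term, n_upper_level_predictors):
    """Fit a random-intercept model by REML (data is a pandas DataFrame) and
    test one coefficient against a t-distribution with
    df = (#clusters) - (#upper-level predictors) - 1."""
    fit = smf.mixedlm(formula, data, groups=data[group_col]).fit(reml=True)
    dof = data[group_col].nunique() - n_upper_level_predictors - 1
    est, se = fit.params[term], fit.bse[term]
    p_value = 2 * stats.t.sf(abs(est / se), dof)          # two-sided t-test
    half_width = stats.t.ppf(0.975, dof) * se              # 95% CI half-width
    return {"estimate": est, "se": se, "df": dof,
            "p": p_value, "ci": (est - half_width, est + half_width)}

# Hypothetical usage: one country-level predictor 'gdp' in a two-level model
# result = upper_level_t_test(df, "y ~ x + gdp", "country", "gdp", 1)
```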

Comparative political science has long worried about the performance of multilevel models when the number of upper-level units is small. Exacerbating these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that frequentist methods yield biased estimates and severely anti-conservative inference with small upper-level samples. Stegmueller recommends Bayesian techniques, which he claims to be superior in terms of both bias and inferential accuracy. In this paper, we reassess and refute these results. First, we formally prove that frequentist maximum likelihood estimators of coefficients are unbiased. The apparent bias found by Stegmueller is simply a manifestation of Monte Carlo error. Second, we show how inferential problems can be overcome by using restricted maximum likelihood estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible without turning to Bayesian methods, even if the number of upper-level units is small.
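
To make the Monte Carlo error point concrete, here is a toy simulation (the design and numbers are illustrative, not Stegmueller's or the authors' setup): with a finite number of replications, even an unbiased estimator shows apparent "bias" on the order of the Monte Carlo standard error.

```python
import numpy as np

rng = np.random.default_rng(2021)
true_beta, R, n = 0.5, 200, 100              # illustrative values
estimates = np.empty(R)
for r in range(R):
    x = rng.normal(size=n)
    y = true_beta * x + rng.normal(size=n)
    estimates[r] = np.polyfit(x, y, 1)[0]    # OLS slope, an unbiased estimator

apparent_bias = estimates.mean() - true_beta
mc_error = estimates.std(ddof=1) / np.sqrt(R)   # Monte Carlo standard error
print(f"apparent bias: {apparent_bias:+.4f}  (MC error: {mc_error:.4f})")
# Any "bias" of the same order as the MC error is indistinguishable from noise.
```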


1996 ◽  
Vol 33 (1) ◽  
pp. 73-85 ◽  
Author(s):  
Marco Vriens ◽  
Michel Wedel ◽  
Tom Wilms

The authors compare nine metric conjoint segmentation methods. Four methods are two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in the other five, the estimation and segmentation stages are integrated. The methods are compared conceptually and empirically in a Monte Carlo study. The empirical comparison pertains to measures that assess parameter recovery, goodness-of-fit, and predictive accuracy. Most of the integrated conjoint segmentation methods outperform the two-stage clustering procedures under the conditions specified, with a latent class procedure performing best. However, differences in predictive accuracy were small. The effects of the degrees of freedom for error and the number of respondents were considerably smaller than those of the number of segments, error variance, and within-segment heterogeneity.
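
As a point of reference, a minimal sketch of the two-stage idea (the design matrix, rating data, and choice of k-means are hypothetical, not the authors' procedures): part-worths are first estimated per respondent by OLS, and the sample is then partitioned by clustering those part-worths; integrated methods, by contrast, estimate segments and part-worths jointly.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_stage_segmentation(X, Y, n_segments):
    """X: (n_profiles, n_parameters) metric conjoint design matrix (with intercept).
    Y: (n_respondents, n_profiles) rating responses.
    Returns per-respondent part-worth estimates and segment labels."""
    # Stage 1: respondent-level OLS part-worths, one regression per respondent
    partworths, *_ = np.linalg.lstsq(X, Y.T, rcond=None)   # (n_parameters, n_respondents)
    partworths = partworths.T
    # Stage 2: partition the sample by clustering the estimated part-worths
    labels = KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(partworths)
    return partworths, labels
```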


2011 ◽  
Vol 19 (1) ◽  
pp. 87-102 ◽  
Author(s):  
Alexander V. Hirsch

This paper analyzes the use of ideal point estimates for testing pivot theories of lawmaking such as Krehbiel's (1998, Pivotal politics: A theory of U.S. lawmaking. Chicago, IL: University of Chicago Press) pivotal politics model and Cox and McCubbins's (2005, Setting the Agenda: Responsible Party Government in the U.S. House of Representatives. New York: Cambridge University Press) party cartel model. Among the predictions of pivot theories is that all pivotal legislators will vote identically on all successful legislation. Clinton (2007, Lawmaking and roll calls. Journal of Politics 69:455–67) argues that the estimated ideal points of the pivotal legislators are therefore predicted to be statistically indistinguishable when estimated from the set of successful final passage roll call votes, which implies that ideal point estimates cannot logically be used to test pivot theories. I show using Monte Carlo simulation that when pivot theories are augmented with probabilistic voting, Clinton's prediction only holds in small samples when voting is near perfect. I furthermore show that the predicted bias is unlikely to be consequential with U.S. Congressional voting data. My analysis suggests that the methodology of estimating ideal points to compute theoretically relevant quantities for empirical tests is not inherently flawed in the case of pivot theories.
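
To illustrate what augmenting a pivot theory with probabilistic voting means, here is a toy sketch (the spatial setup, noise level, pivot positions, and passage rule are illustrative assumptions, not Hirsch's simulation design): once votes depend on spatial utility plus noise, pivotal legislators no longer vote identically on every successful bill.

```python
import numpy as np

rng = np.random.default_rng(0)
n_legislators, n_bills, noise_sd = 101, 500, 0.5       # illustrative values
ideal = np.sort(rng.normal(size=n_legislators))        # one-dimensional ideal points
pivot_idx = (40, 60)                                    # two hypothetical pivotal legislators

agree, passed = 0, 0
for _ in range(n_bills):
    status_quo, proposal = rng.uniform(-2, 2, size=2)
    # quadratic spatial utility of the proposal relative to the status quo, plus noise
    util_diff = -(ideal - proposal) ** 2 + (ideal - status_quo) ** 2
    votes = util_diff + rng.normal(scale=noise_sd, size=n_legislators) > 0
    if votes.mean() > 0.5:                              # simple-majority passage
        passed += 1
        agree += votes[pivot_idx[0]] == votes[pivot_idx[1]]

print(f"pivotal legislators voted identically on {agree} of {passed} successful bills")
```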

