A Critique of the Random Intercept Cross-Lagged Panel Model

2021 ◽  
Author(s):  
Oliver Lüdtke ◽  
Alexander Robitzsch

The random intercept cross-lagged panel model (RI-CLPM) is an extension of the traditional cross-lagged panel model (CLPM) that allows controlling for stable trait factors when estimating cross-lagged effects. It has been argued that the RI-CLPM more appropriately accounts for trait-like, time-invariant stability of many psychological constructs and that it should be preferred over the CLPM when at least three waves of measurement are available. The basic idea of the RI-CLPM is to decompose longitudinal associations between two constructs into stable between-person associations and temporal within-person dynamics. The present article critically examines the RI-CLPM from a causal inference perspective. Using formal analysis and simulated data, we show that the RI-CLPM has limited potential to control for unobserved stable confounder variables when estimating cross-lagged effects. The CLPM with additional lag-2 effects sufficiently controls for delayed effects, as long as all relevant covariates are measured. Furthermore, we clarify that, in general, the RI-CLPM targets a different causal estimand than the CLPM. Whereas the cross-lagged effect in the CLPM targets the effect of increasing the exposure by one unit, the within-person cross-lagged effect in the RI-CLPM provides an estimate of the effect of increasing the exposure by one unit around the person mean. We argue that this within-person causal effect is typically less relevant for testing causal hypotheses with longitudinal data because it only captures temporary fluctuations around the individual person means and ignores the potential effects of causes that explain differences between persons.
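The between/within decomposition at the heart of the RI-CLPM can be sketched with simulated data. This is a minimal illustration, not the authors' analysis; all parameter values (trait correlation, autoregressive and cross-lagged coefficients) are arbitrary assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_waves = 1000, 4

# Stable trait part: time-invariant between-person differences,
# correlated across the two constructs (the "random intercepts").
traits = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n_persons)

# Within-person part: temporal dynamics around each person's own mean.
a_x, a_y = 0.3, 0.3    # autoregressive (carry-over) effects
c_xy, c_yx = 0.2, 0.0  # cross-lagged effects (x -> y only, for illustration)

wx = np.zeros((n_persons, n_waves))
wy = np.zeros((n_persons, n_waves))
wx[:, 0] = rng.normal(size=n_persons)
wy[:, 0] = rng.normal(size=n_persons)
for t in range(1, n_waves):
    wx[:, t] = a_x * wx[:, t - 1] + c_yx * wy[:, t - 1] + rng.normal(0, 0.5, n_persons)
    wy[:, t] = a_y * wy[:, t - 1] + c_xy * wx[:, t - 1] + rng.normal(0, 0.5, n_persons)

# Observed scores: stable between-person level plus within-person fluctuation.
x = traits[:, [0]] + wx
y = traits[:, [1]] + wy
```

Note that the within-person cross-lagged effect `c_xy` is defined only for fluctuations around the person mean, which is exactly the estimand distinction the article draws between the RI-CLPM and the CLPM.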

2020 ◽  
Author(s):  
Satoshi Usami

Many methods have been developed to infer reciprocal relations between longitudinally observed variables. Among them, the general cross-lagged panel model (GCLM) is the most recent development as a variant of the cross-lagged panel model (CLPM), while the random-intercept CLPM (RI-CLPM) has rapidly become a popular approach. In this article, we describe how common factors and cross-lagged parameters included in these models can be interpreted, using a unified framework that was recently developed. Because common factors are modeled with lagged effects in the GCLM, they have both direct and indirect influences on observed scores, unlike stable trait factors included in the RI-CLPM. This indicates that the GCLM does not control for stable traits as the RI-CLPM does, and that there are interpretative differences in cross-lagged parameters between these models. We also explain that including such common factors as well as moving-average terms in the GCLM makes this interpretation very complicated.


2021 ◽  
Author(s):  
Marie-Louise Kullberg ◽  
Charlotte C van Schie ◽  
Andrea Allegrini ◽  
Yasmin Iona Ahmadzadeh ◽  
Daniel Wechsler ◽  
...  

Objective. To elucidate associations between parental harsh discipline and child emotional and behavioural problems in monozygotic twins aged 9, 12 and 16, and to compare distinct approaches to causal inference. Method. Child reports from 5,698 identical twins in the Twins Early Development Study (TEDS) were analysed. We tested three types of longitudinal structural equation models: a cross-lagged panel model (CLPM), a random intercept CLPM (RI-CLPM) and a monozygotic twin difference version of the CLPM (MZD-CLPM). Results. Given the study aim to infer causation, interpretation of the models focussed primarily on the magnitude and significance of cross-lagged associations. Across all models, behavioural problems predicted harsher parental discipline. In the CLPM, we found bidirectional effects between parental discipline and behavioural problems at ages 9 and 12. Point estimates of all other associations between parental harsh discipline and child emotional and behavioural problems were in the same direction, but their magnitude varied across models. In the MZD-CLPM, twin differences in harsh parental discipline at age 9 predicted twin differences in emotional problems at age 12. In the RI-CLPM, emotional problems at age 12 predicted a within-person reduction in harsh parental discipline at age 16. Conclusions. The findings can be interpreted as corroborating (though not definitive) evidence for a causal effect of child behavioural problems on later experienced harsh parental discipline. In light of the triangulated methods, however, the divergence between the MZD-CLPM and RI-CLPM outcomes underlines the importance of a well-defined research question, careful model selection and cautious causal conclusions about within-person processes.


2020 ◽  
Vol 11 (3) ◽  
pp. 447-460
Author(s):  
Nan Hua

Purpose. This paper aims to examine the impacts of IT capabilities on hotel competitiveness. Design/methodology/approach. This study adapts and extends Hua et al. (2015) and O’Neill et al. (2008) by incorporating specific measures of IT expenditures as proxies for the relevant IT capabilities. Findings. Expenditures on IT Labor, IT Systems and IT Websites exert different impacts on hotel competitiveness. In addition, IT capabilities exert both contemporaneous and lagged effects on hotel competitiveness. Originality/value. This study is the first to use financial data to capture direct measures of individual IT capabilities and to test their individual impacts on hotel competitiveness from both contemporaneous and lagged perspectives. It uses a large same-store sample of hotels in the USA from 2011 to 2017; as a result, the results can be reasonably representative of the hotel population in the USA.


2012 ◽  
Vol 8 (1) ◽  
pp. 89-115 ◽  
Author(s):  
V. K. C. Venema ◽  
O. Mestre ◽  
E. Aguilar ◽  
I. Auer ◽  
J. A. Guijarro ◽  
...  

Abstract. The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. 
Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
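The first of the performance metrics listed above, the centred root mean square error, compares anomalies (deviations from each series' own mean) of the homogenized series with anomalies of the true series, so a constant offset is not penalized. A minimal sketch, with illustrative variable names not taken from the study:

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centred RMSE: the RMS error between the anomalies of the two
    series, so a constant bias in the homogenized series scores zero."""
    h = np.asarray(homogenized, dtype=float)
    t = np.asarray(truth, dtype=float)
    return np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2))
```

A series shifted by a constant bias thus scores zero, reflecting that relative homogenization targets variations about the mean rather than the absolute level.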


Author(s):  
Hugo Cogo-Moreira ◽  
Julia D. Gusmões ◽  
Juliana Y. Valente ◽  
Michael Eid ◽  
Zila M. Sanchez

Abstract. The present study investigated how an intervention might alter the relationship between perpetrating violence and later drug use. A cluster-randomized controlled trial involving 72 schools (38 intervention, 34 control) and 6,390 students attending grades 7 and 8 was conducted in Brazil. Drug use and violence were assessed at three time points. A random-intercept cross-lagged panel model examined the reciprocal association between drug use and school violence domains across the three data collection waves. In both groups, the cross-lagged effect of perpetration on subsequent drug use in adolescents was stronger than the reverse, but these interrelationships did not differ significantly between #Tamojunto and control schools. The carry-over effects of drug use and violence also did not differ significantly between groups. There is thus no evidence that #Tamojunto modifies the dynamics between drug use and school violence across the 21-month period. The direction of the causal effect (i.e., more perpetration behaviour leading to more subsequent drug use behaviour) is present, but weak, in both groups. The trial registration protocol at the national Brazilian Register of Clinical Trials (REBEC) is #RBR-4mnv5g.


2013 ◽  
Vol 45 (4) ◽  
pp. 925-944
Author(s):  
Ó. Thórisdóttir ◽  
M. Kiderlen

Wicksell's classical corpuscle problem deals with the retrieval of the size distribution of spherical particles from planar sections. We discuss the problem in a local stereology framework. Each particle is assumed to contain a reference point and the individual particle is sampled with an isotropic random plane through this reference point. Both the size of the section profile and the position of the reference point inside the profile are recorded and used to recover the distribution of the corresponding particle parameters. Theoretical results concerning the relationship between the profile and particle parameters are discussed. We also discuss the unfolding of the arising integral equations, uniqueness issues, and the domain of attraction relations. We illustrate the approach by providing reconstructions from simulated data using numerical unfolding algorithms.
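The geometric relation underlying the corpuscle problem is easy to simulate. The sketch below shows the classical (non-local) Wicksell setting, in which a plane at signed distance d from the centre of a sphere of radius R produces a circular profile of radius sqrt(R² − d²); the local design discussed in the paper instead passes the plane through a reference point inside the particle, which this toy example does not model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Classical setting: spheres of known radius R, sectioned by a plane whose
# distance d from the sphere centre is uniform on [0, R) given a hit.
R = 1.0
d = rng.uniform(0.0, R, size=n)
profile_radius = np.sqrt(R**2 - d**2)

# For R = 1 the mean profile radius is E[sqrt(1 - U^2)] = pi / 4.
print(profile_radius.mean())
```

Recovering the distribution of R from observed profile radii is the (numerically ill-posed) unfolding step that the paper's reconstruction algorithms address.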


2021 ◽  
Author(s):  
Savannah Boele ◽  
Stefanie Nelemans ◽  
Jaap J. A. Denissen ◽  
Peter Prinzie ◽  
Anne Bülow ◽  
...  

This multi-sample study tested bidirectional within-family associations between parental support and adolescents’ depressive symptoms at varying measurement intervals: daily (N = 244, Mage = 13.8, 38% male), two-weekly (N = 256, Mage = 14.5, 29% male), three-monthly (N = 245, Mage = 13.9, 38% male), annual (N = 1,664, Mage = 11.1, 51% male), and biennial (N = 502, Mage = 13.8, 48% male). Pre-registered random-intercept cross-lagged panel models (RI-CLPM) showed negative between- and within-family correlations. Although no within-family lagged effects were found from parental support to depressive symptoms at any time interval, depressive symptoms predicted decreased parental support two weeks and three months later. Effects were moderated by adolescents’ sex and neuroticism. Findings mainly supported adolescent-driven effects, and illustrate that within-family lagged effects may not generalize across timescales.


2021 ◽  
Author(s):  
Kimmo Eriksson ◽  
Kimmo Sorjonen ◽  
Daniel Falkstedt ◽  
Bo Melin ◽  
Gustav Nilsonne

Effects of education on intelligence are controversial. Earlier studies of longitudinal data have observed positive associations between level of education and a later measurement of intelligence, when statistically controlling for an earlier measurement of intelligence, and furthermore that this association is stronger among individuals with lower pre-education intelligence. Here we challenge the interpretation that these observations reflect a causal effect of education. We develop and analyze a mathematical model in which education is assumed to have zero effect on intelligence, showing that precisely the observed pattern of results arises as a statistical artefact due to measurement errors. Fitting our model to a dataset used in a prior study, we show that observed associations between education and intelligence are closely replicated in simulated data generated by our model. Thus, our reanalysis indicates that additional higher education does not cause an increase in intelligence. We discuss how positive findings in studies of policy changes and school-age cutoff are limited to basic education and may not generalize to higher education.
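The artefact mechanism described here can be reproduced in a few lines: education is given zero causal effect on intelligence, yet controlling for a noisy earlier measurement under-adjusts for true ability and leaves a spurious positive "education effect". This is a minimal sketch under assumed parameter values, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

g = rng.normal(size=n)                        # true, stable intelligence
m1 = g + rng.normal(0, 0.5, n)                # earlier IQ measurement (noisy)
educ = (g + rng.normal(0, 0.5, n) > 0) * 1.0  # selection into education on ability
m2 = g + rng.normal(0, 0.5, n)                # later IQ: NO education effect

# Regress the later measurement on education while "controlling" for the
# earlier measurement: because m1 is a noisy proxy for g, residual
# confounding makes the education coefficient spuriously positive.
X = np.column_stack([np.ones(n), educ, m1])
beta, *_ = np.linalg.lstsq(X, m2, rcond=None)
print(beta[1])  # spurious "education effect", positive despite zero true effect
```

The spurious coefficient shrinks toward zero as the measurement error in `m1` shrinks, which is the signature of the artefact the authors analyze.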


2020 ◽  
Vol 240 (2-3) ◽  
pp. 161-200
Author(s):  
Matthias Dütsch ◽  
Ralf Himmelreicher

Abstract. In this article we examine the correlation between characteristics of individuals, companies, and industries involved in low-wage labour in Germany and the risks workers face of earning hourly wages that are below the minimum-wage or low-wage thresholds. To identify these characteristics, we use the Structure of Earnings Survey (SES) 2014. The SES is a mandatory survey of companies which provides information on wages and working hours from about 1 million jobs and nearly 70,000 companies from all industries. These data allow us to present the first systematic analysis of the interaction of individual-, company-, and industry-level factors on minimum- and low-wage employment in Germany. Using a descriptive analysis, we first give an overview of typical low-paying jobs, companies, and industries. Second, we use random intercept-only models to estimate the explanatory power of the individual, company, and industry levels. One main finding is that the influence of individual characteristics on wage levels is often overstated: less than 25% of the differences in the employment situation regarding being employed in minimum-wage or low-wage jobs can be attributed to the individual level. Third, we perform logistic and linear regression estimations to assess the risks of having a minimum- or low-wage job and the distance between a worker’s actual earnings and the minimum- or low-wage thresholds. Our findings allow us to conclude that several determinants related to individuals appear to suggest a high low-wage incidence, but in fact lose their explanatory power once controls are added for factors relating to the companies or industries that employ these individuals.


2013 ◽  
Vol 21 (4) ◽  
pp. 507-523 ◽  
Author(s):  
Ryan T. Moore ◽  
Sally A. Moore

In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion.
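The trickle-in idea can be sketched with a simple biased-coin rule on one continuous covariate. The imbalance criterion below (covariate-mean gap plus a small group-size penalty) and the favoring probability are illustrative assumptions; the authors' sequential blocking procedure is more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(3)

def imbalance(treat, control):
    """Toy imbalance criterion: gap between group covariate means plus a
    small penalty on unequal group sizes (an illustrative choice)."""
    if not treat or not control:
        return float("inf")
    return abs(np.mean(treat) - np.mean(control)) + 0.01 * abs(len(treat) - len(control))

def assign_sequentially(xs, p_favor=0.8):
    """Assign units as they arrive: with probability p_favor, send the new
    unit to whichever arm yields the smaller hypothetical imbalance."""
    treat, control = [], []
    for x in xs:
        better_is_treat = imbalance(treat + [x], control) <= imbalance(treat, control + [x])
        go_treat = better_is_treat if rng.random() < p_favor else not better_is_treat
        (treat if go_treat else control).append(x)
    return treat, control

xs = rng.normal(size=500)  # covariate observed as each subject arrives
treat, control = assign_sequentially(xs)
```

Keeping the favoring probability below 1 preserves randomness in every assignment, which is what keeps randomization-based inference valid for such designs.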

