Impact Visualisation in Educational Interventions
Reporting of research data analysis often resorts to numerical summaries, such as effect size estimates in Randomised Controlled Trials (RCTs). Summary statistics are helpful and important for evidence synthesis and decision making, but they can be unstable and inconsistent owing to diversity in research designs and variability in analytical specifications. By focusing on the average treatment effect on the treated, they also mask the dynamics of individual responses to an intervention, even though variation in impact may be crucial information for policy makers. To establish the stability and consistency of impact estimates and to reveal the dynamics of individual responses in RCTs, we conduct variable selection, harness the power of noise, implement Cumulative Quantile Analysis (CQA), and devise umbrella plots of loss and gain in this study, using real datasets from more than 30 educational interventions funded by the Education Endowment Foundation (EEF) in England. For the purpose of comparison, which is essential in data visualisation, all of these methods are built upon multiple analytical approaches. We show that the importance of an intervention can be ranked relative to other variables through variable selection, and that the power of noise, or the bias induced by inappropriate variables, can be utilised to assess the stability of an impact estimate. We also demonstrate that estimates of the average treatment effect cannot fully capture the impact of an intervention on sub-groups of participants with varying levels of attainment at baseline, let alone individual responses to the intervention. Using CQA and umbrella plots, we are able to supplement what common effect size estimates lack in educational interventions. We argue that the impact of an intervention is often more complex than the average treatment effect suggests, and that until a summary is more informative and able to speak directly to the eye, evidence-based policy and practice cannot be fully achieved.
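The abstract does not spell out how CQA is computed. As a purely illustrative sketch, assuming that CQA estimates a standardised effect size on progressively larger subsamples of participants ordered by baseline attainment (the function name, quantile grid, and effect-size formula below are all assumptions, not the authors' implementation), the idea might be expressed as:

```python
import numpy as np

def cumulative_quantile_effects(baseline, outcome, treated,
                                quantiles=np.arange(0.1, 1.01, 0.1)):
    """For each cumulative quantile of baseline attainment, estimate a
    standardised mean difference between treated and control outcomes
    within that subsample (a Cohen's-d-style estimate, without a
    small-sample correction). `treated` is a boolean array."""
    effects = []
    for q in quantiles:
        cutoff = np.quantile(baseline, q)
        mask = baseline <= cutoff            # cumulative subsample, bottom up
        t = outcome[mask & treated]
        c = outcome[mask & ~treated]
        pooled_sd = np.sqrt(((len(t) - 1) * t.var(ddof=1)
                             + (len(c) - 1) * c.var(ddof=1))
                            / (len(t) + len(c) - 2))
        effects.append((t.mean() - c.mean()) / pooled_sd)
    return np.array(effects)

# Hypothetical usage with simulated trial data
rng = np.random.default_rng(0)
n = 1000
baseline = rng.normal(size=n)
treated = rng.random(n) < 0.5
outcome = baseline + 0.3 * treated + rng.normal(size=n)
es = cumulative_quantile_effects(baseline, outcome, treated)
```

Plotting `es` against the quantile grid would show how the estimated effect evolves as higher-attaining pupils are added, rather than collapsing the trial to a single average effect; the final entry (quantile 1.0) coincides with the whole-sample estimate.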