Meta-researchers increasingly study biases in quantitative study outcomes (effect sizes) that emerge from questionable research practices (QRPs) in designing, running, analyzing, and reporting studies. Here, we introduce SAM (Science Abstract Model), an extensible and modular C++ simulation framework that enables systematic study of the effects of QRPs and researchers’ degrees of freedom (p-hacking) on a host of outcomes across the different phases of quantitative hypothesis-testing studies. SAM achieves this by modelling the entities and processes involved in research as separate modules, from study designs and inferential criteria, through data collection and analysis, to the submission and acceptance of manuscripts by a journal. We demonstrate the advantages of this approach by reproducing and extending the simulation study of Bakker, van Dijk, and Wicherts (2012), which investigated the effects of various p-hacking methods and publication bias on meta-analytic outcomes. We showcase how SAM’s modularity and flexibility make it possible to re-examine and extend the original study by modifying, adding, or removing individual components, e.g., publication bias, different significance levels, or meta-analytic metrics. We focus our illustration on the fundamental question of whether lowering alpha reduces biases in the scientific literature.