Variable study quality is a challenge for all the empirical sciences, but perhaps particularly for disciplines such as ecology, where experimentation is frequently hampered by system complexity, scale, and resourcing. The resulting heterogeneity, and the consequent necessity of combining the results of different study designs, is a fundamental issue for evidence synthesis. We welcome the recognition of this issue by Christie et al. (2019), and their attempt to provide a generic approach to study quality assessment and meta-analytic weighting through an extensive simulation study. However, we have reservations about the true generality and usefulness of their derived study “accuracy weights”. First, the Christie et al. simulations rely on a single approach to effect size calculation, leading to the odd conclusion that BACI designs are superior to RCTs, which are normally considered the gold standard for causal inference. Second, so-called “study quality” scores have long been criticised in the epidemiological literature for failing to accurately summarise individual, study-specific drivers of bias, and have been shown to be likely to retain bias and increase variance relative to meta-regression approaches that explicitly model such drivers. We suggest that ecological meta-analysts spend more time critically and transparently appraising the actual studies before synthesis, rather than relying on generic weights or weighting formulas to resolve assumed issues; sensitivity analyses and hierarchical meta-regression are likely to be key tools in this work.