Languages exhibit variation at all linguistic levels, from phonology, to the lexicon, to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularise it, either removing some or all variants or conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularising behaviour in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularisation reflects uniform mechanisms and processes across linguistic levels. However, studies on natural language learning and pidginisation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularisation (Good 2015; Siegel 2006; Slobin 1986). Here we provide the first systematic comparison of the strength of regularisation across these two linguistic levels. In line with previous studies, we find that the presence of a favoured variant can induce different degrees of regularisation. However, when input languages are carefully matched, with comparable initial variability and no variant-specific biases, regularisation can be comparable across morphology and word order. This is the case regardless of whether the task is explicitly communicative. Overall, our findings suggest an overarching regularising mechanism at work, with apparent differences among levels likely due to differences in inherent complexity or variant-specific biases. Differences between production and encoding in our tasks further suggest that this overarching mechanism is driven by production.