In psycholinguistics, there has been relatively little work investigating conceptualisation – how speakers decide which concepts to express. This contrasts with work in natural language generation (NLG), a subfield of AI, where much research has explored content determination during the generation of referring expressions. Existing NLG algorithms for conceptualisation during reference production do not fully explain previous psycholinguistic results, so we developed new models and tested them in three language production experiments.

In our experiments, participants described target objects to another participant. In Experiment 1, either its size, its colour, or both its size and colour distinguished the target from all distractor objects; in Experiment 2, either colour, type, or both colour and type distinguished it from all distractors; in Experiment 3, either colour, size, or the border around the object distinguished the target. We tested how well the different models fit the distribution of description types (e.g., “small candle”, “grey candle”, “small grey candle”) that participants produced.

Across these experiments, the PRO model provided the best fit. In this model, speakers first choose a property that rules out all distractors. If more than one property does so, they probabilistically choose one based on their preference for each property. Next, they sometimes add another property, with the probability again determined by that property’s preference and the speakers’ eagerness to overspecify.
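
To make the two-step selection process concrete, the following is a minimal Python sketch of the PRO model as described above. The preference weights, the eagerness parameter, and the way they are combined (simple multiplication) are illustrative assumptions for exposition, not the paper’s exact formalisation or parameter estimates.

```python
import random

def pro_model(distinguishing, other_props, preference, eagerness):
    """Sketch of the PRO selection process (illustrative, not the fitted model).

    distinguishing : properties that each rule out all distractors
    other_props    : remaining target properties that do not rule out all distractors
    preference     : dict mapping each property to a preference weight in [0, 1]
    eagerness      : speaker's eagerness to overspecify, in [0, 1]
    """
    # Step 1: choose one fully distinguishing property; if several qualify,
    # choose probabilistically in proportion to their preference weights.
    weights = [preference[p] for p in distinguishing]
    chosen = random.choices(distinguishing, weights=weights, k=1)[0]
    description = [chosen]

    # Step 2: sometimes add a further (redundant) property, with probability
    # driven by that property's preference and the speaker's eagerness
    # (combined here by multiplication, an assumption of this sketch).
    for extra in distinguishing + other_props:
        if extra == chosen:
            continue
        if random.random() < eagerness * preference[extra]:
            description.append(extra)
            break  # at most one extra property in this simple sketch
    return description

# Example: size and colour both distinguish the target (as in Experiment 1),
# with hypothetical preference weights favouring colour.
print(pro_model(["small", "grey"], [], {"small": 0.4, "grey": 0.8}, eagerness=0.6))
```

Run repeatedly, such a sketch yields a distribution over description types (“small candle”, “grey candle”, “small grey candle”) that can be compared against the distributions participants produced.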