Asymmetric learning facilitates human inference of transitive relations
Humans and other animals are capable of inferring never-experienced relations (e.g., A>C) from other relational observations (e.g., A>B and B>C). The processes underlying such transitive inference have been the subject of intense research. Here, we demonstrate a new aspect of relational learning, building on previous evidence that transitive inference can be accomplished through simple reinforcement learning mechanisms. We show in simulations that inference of novel relations benefits from an asymmetric learning policy, where observers update only their belief about the winner (or loser) in a pair. Across four experiments (n = 145), we find substantial empirical support for such asymmetries in inferential learning. The learning policy favoured by our simulations and experiments gives rise to a compression of values that is routinely observed in psychophysics and behavioural economics. In other words, a seemingly biased learning strategy that yields well-known cognitive distortions can be beneficial for transitive inferential judgments.
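The asymmetric policy described in the abstract can be illustrated with a minimal delta-rule learner that, after each pairwise comparison, updates only the winner's value. This is a hypothetical sketch under assumed parameters (learning rate, trial counts, reward coding), not the authors' exact model; it merely shows how winner-only updates on adjacent pairs can support correct ordering of never-trained, non-adjacent pairs:

```python
# Hypothetical sketch (not the paper's model): asymmetric value learning
# on adjacent pairwise comparisons, with a delta rule on value differences.
import random


def train(n_items=7, n_trials=2000, lr=0.2, asymmetric=True, seed=0):
    rng = random.Random(seed)
    v = [0.0] * n_items  # learned value per item; true rank = index
    for _ in range(n_trials):
        i = rng.randrange(n_items - 1)   # sample an adjacent pair (i, i+1)
        winner, loser = i + 1, i         # higher-ranked item wins
        delta = 1.0 - (v[winner] - v[loser])  # prediction error on the pair
        v[winner] += lr * delta          # update the winner's value...
        if not asymmetric:
            v[loser] -= lr * delta       # ...and, if symmetric, the loser's too
    return v


def transitive_accuracy(v):
    # Fraction of never-trained (non-adjacent) pairs ordered correctly
    # by the learned values.
    n = len(v)
    pairs = [(a, b) for a in range(n) for b in range(a + 2, n)]
    return sum(v[b] > v[a] for a, b in pairs) / len(pairs)
```

With enough training trials, the winner-only learner orders the untrained pairs correctly (e.g., `transitive_accuracy(train(asymmetric=True))` approaches 1.0); the sketch deliberately omits choice noise and the value-compression analysis reported in the paper.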