Visual search mimics configural processing in associative learning
Abstract
Theories of generalization distinguish between elemental and configural stimulus processing, depending on whether the stimuli in a compound are processed independently or as a single, distinct entity. Evidence for elemental processing comes from findings of summation in animals, where a compound of two stimuli that independently predict an outcome is judged more predictive of the outcome than either stimulus alone. Configural processing, by contrast, is supported by experiments that fail to find this effect when the compound comprises similar stimuli. In humans, however, summation appears robust and independent of stimulus similarity. We show that these results are best explained by an alternative view in which generalization arises from a visual search process whereby subjects preferentially process the most predictive or salient stimulus in a compound. We offer empirical support for this theory in three human experiments on causal learning and formalize a new elemental visual search model based on reinforcement learning principles that captures both the present and previous data on generalization, bridging two different research areas in psychology into a unitary framework.
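To make the contrast concrete, the following is a minimal, purely illustrative sketch (not the paper's actual model; all names and parameter values are assumptions) of the difference between classic elemental summation and a visual-search-style response rule. Elements are trained individually with a Rescorla-Wagner-style delta rule; at test, the elemental rule sums associative strengths across the compound, whereas the visual-search rule responds on the basis of the single most predictive element.

```python
class VisualSearchLearner:
    """Toy learner contrasting summation with a winner-take-all test rule.

    Hypothetical sketch: elements acquire associative strength via a
    delta rule; compound predictions differ only in how strengths are
    combined at test.
    """

    def __init__(self, lr=0.3):
        self.lr = lr
        self.v = {}  # associative strength per elemental stimulus

    def train(self, stimulus, outcome, trials=50):
        # Rescorla-Wagner-style update for a single elemental stimulus.
        for _ in range(trials):
            v = self.v.get(stimulus, 0.0)
            self.v[stimulus] = v + self.lr * (outcome - v)

    def predict_search(self, compound):
        # Visual search rule: respond on the basis of the most
        # predictive element alone (winner-take-all), no summation.
        return max(self.v.get(s, 0.0) for s in compound)

    def predict_elemental(self, compound):
        # Classic elemental rule: sum associative strengths.
        return sum(self.v.get(s, 0.0) for s in compound)


learner = VisualSearchLearner()
learner.train("A", 1.0)
learner.train("B", 1.0)

# After training, A and B each predict ~1.0. The search rule rates the
# compound AB like the best single element, while the elemental rule
# predicts roughly double — the summation effect.
print(round(learner.predict_search(["A", "B"]), 2))     # ≈ 1.0
print(round(learner.predict_elemental(["A", "B"]), 2))  # ≈ 2.0
```

The point of the sketch is that the absence of summation need not imply configural (whole-compound) representations: purely elemental strengths combined with a selective, attention-like readout produce the same behavioral signature.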