Temporal integration of feature probability distributions in working memory
Humans are surprisingly good at learning the characteristics of their visual environment. Recent studies have revealed that the visual system can learn not only repeated features of visual search distractors but also their actual probability distributions: search times were determined by the frequency of distractor features over consecutive search trials. Distractor distributions involve many exemplars on each trial, but it is unknown whether observers can learn distributions where only a single exemplar from the distribution is presented on each trial. Here, we investigated whether observers can learn the probability distributions of single targets during visual search. Over blocks of trials, observers searched for an oddly-colored target whose color was drawn from either a Gaussian or a uniform distribution. Search was influenced not only by the repetition of a target feature but, more interestingly, also by the probability of that feature within a trial block. The same search targets, drawn from the extremes of the two distributions, were found significantly more slowly during blocks where target colors were drawn from a Gaussian distribution than during blocks where they were drawn from a uniform distribution, indicating that observers were sensitive to target probability as determined by the shape of the distribution. In Experiment 2, we replicated the effect using binned distributions and revealed the limits of target distribution encoding by using a more complex target distribution. Our results demonstrate that observers form detailed internal representations of target feature distributions and that the visual system integrates the probability distributions of target colors over surprisingly long trial sequences.
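To make the block design concrete, the following Python snippet is a minimal, hypothetical sketch of how a single target color per trial could be sampled from either a Gaussian or a uniform distribution over a block of trials; the color space, distribution parameters, and trial counts are assumptions for illustration and are not taken from the experiments described here.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical parameters (assumptions, not the paper's values): target hues in
# degrees on a circular color wheel, centered on a block-specific mean hue.
MEAN_HUE = 180.0      # center of the block's target-color distribution
GAUSS_SD = 10.0       # standard deviation of the Gaussian block (assumed)
UNIFORM_RANGE = 40.0  # full width of the uniform block (assumed)
N_TRIALS = 100        # trials per block (assumed)

def sample_block(distribution: str) -> np.ndarray:
    """Draw one target hue per trial for a whole block."""
    if distribution == "gaussian":
        hues = rng.normal(MEAN_HUE, GAUSS_SD, size=N_TRIALS)
    elif distribution == "uniform":
        hues = rng.uniform(MEAN_HUE - UNIFORM_RANGE / 2,
                           MEAN_HUE + UNIFORM_RANGE / 2,
                           size=N_TRIALS)
    else:
        raise ValueError(f"unknown distribution: {distribution}")
    return np.mod(hues, 360.0)  # wrap onto the circular hue dimension

# The key comparison: hues far from the block mean are rare under the Gaussian
# but common under the uniform distribution, even though the same extreme hue
# can occur as a target in both conditions.
gaussian_block = sample_block("gaussian")
uniform_block = sample_block("uniform")
print("Gaussian block, share of hues > 15 deg from mean:",
      np.mean(np.abs(gaussian_block - MEAN_HUE) > 15))
print("Uniform block,  share of hues > 15 deg from mean:",
      np.mean(np.abs(uniform_block - MEAN_HUE) > 15))
```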