Predicting visual memory across images and within individuals
We remember only a fraction of what we see, disproportionately images that are highly memorable and those encountered during highly attentive states. However, most models of human memory disregard both an image's memorability and an individual's fluctuating attentional states. Here, we build the first model of memory that synthesizes these two disparate factors to predict subsequent image recognition. We combine memorability scores of 1100 images (Experiment 1, N=706) with attentional state indexed by response time on a continuous performance task (Experiments 2 and 3, N=57 total). Image memorability and sustained attentional state explained significant variance in image memory, and a joint model of memory including both factors outperformed models including either factor alone. Furthermore, models including both factors successfully predicted memory in an out-of-sample group. Thus, building models based on individual- and image-specific factors allows for directed forecasting of our memories.
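The joint-model logic can be illustrated with a minimal sketch. This is not the authors' exact specification: the data are simulated, and the model is an ordinary logistic regression predicting trial-level recognition from a per-image memorability score and a per-trial attentional-state index (here, a z-scored response time stand-in).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical predictors: per-image memorability score (0-1) and a
# per-trial attentional-state index (e.g., z-scored response time on
# a continuous performance task).
memorability = rng.uniform(0.2, 0.95, n)
attention = rng.normal(0.0, 1.0, n)

# Simulated recognition outcomes in which both factors contribute.
logit = -1.0 + 3.0 * memorability + 0.8 * attention
remembered = rng.random(n) < 1 / (1 + np.exp(-logit))

# Joint model (both factors) vs. a memorability-only model.
X_joint = np.column_stack([memorability, attention])
joint = LogisticRegression().fit(X_joint, remembered)
mem_only = LogisticRegression().fit(memorability.reshape(-1, 1), remembered)

joint_acc = joint.score(X_joint, remembered)
mem_acc = mem_only.score(memorability.reshape(-1, 1), remembered)
print(f"joint accuracy: {joint_acc:.3f}, memorability-only: {mem_acc:.3f}")
```

On data generated this way, the joint model fits at least as well as either single-factor model, mirroring the abstract's claim that combining image- and individual-specific factors improves prediction; out-of-sample generalization would be assessed by fitting on one group and scoring on another.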