Independent storage of real-world object features is visual rather than verbal in nature
Previous work has shown that semantically meaningful properties of visually presented real-world objects, such as their color, the state or configuration of their parts (pose), or the features that differentiate them from other exemplars of the same category, are stored with a high degree of independence in long-term memory (e.g., they are frequently swapped or misbound across objects). But is this feature independence due to the visual representation of the objects, or to verbal encoding? Semantically meaningful features can also be labeled with distinct words, which can be recombined to produce independent descriptions of real-world object features. Here, we directly test how much of the pattern of feature independence arises from visual vs. verbal encoding. In two experiments, we orthogonally varied during the study phase whether state (e.g., open/closed) and color information matched or mismatched between images of objects and their verbal descriptions (Experiment 1) or between images of two exemplars from the same category (Experiment 2). At test, observers had to choose a previously presented image or description in a four-alternative forced-choice (4-AFC) task. Whereas in Experiment 1 we found only a small effect of visual-verbal mismatch on memory for images, the effect of mismatch between exemplars in Experiment 2 was dramatic: memory for a feature was reasonably good when it matched between exemplars but dropped to chance when it did not. Importantly, this effect was observed independently for both color and object state. We conclude that independent, feature-based storage of objects in long-term memory is supported primarily by visual representations, with at most minor influences of verbal encoding.