Vivid Imagers Are Better at Detecting Salient Changes

2006 ◽  
Vol 27 (4) ◽  
pp. 218-228 ◽  
Author(s):  
Paul Rodway ◽  
Karen Gillies ◽  
Astrid Schepman

This study examined whether individual differences in the vividness of visual imagery influenced performance on a novel long-term change detection task. Participants were presented with a sequence of pictures, with each picture and its title displayed for 17 s, and were then presented with changed or unchanged versions of those pictures and asked to detect whether each picture had been changed. Cuing the retrieval of the picture's image, by presenting the picture's title before the arrival of the changed picture, improved change detection accuracy. This suggests that retrieving the picture's representation protects it from being overwritten by the arrival of the changed picture. High and low vividness participants did not differ in overall levels of change detection accuracy. However, replicating Gur and Hilgard (1975), high vividness participants were significantly more accurate than low vividness participants at detecting salient changes to pictures. The results suggest that vivid images are not characterised by a high level of detail; rather, vivid imagery enhances memory for the salient aspects of a scene but not for all of its details. Possible causes of this difference, and how they may lead to an understanding of individual differences in change detection, are considered.
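As an illustration only (not the authors' materials), the following Python sketch shows how responses from a cued long-term change detection task of this kind could be scored by cue condition and change salience; the trial fields and the demo data are assumptions, not the study's design or results.

    from dataclasses import dataclass

    @dataclass
    class Trial:
        cued: bool            # was the picture's title shown before the test picture?
        salient_change: bool  # was the change to a salient aspect of the scene?
        changed: bool         # was the test picture actually changed?
        response: bool        # did the participant report "changed"?

    def accuracy(trials):
        """Proportion of trials with a correct changed/unchanged judgement."""
        correct = [t.response == t.changed for t in trials]
        return sum(correct) / len(correct) if correct else float("nan")

    def accuracy_by(trials, attribute):
        """Accuracy split by a trial attribute, e.g. 'cued' or 'salient_change'."""
        groups = {}
        for t in trials:
            groups.setdefault(getattr(t, attribute), []).append(t)
        return {level: accuracy(ts) for level, ts in groups.items()}

    # Placeholder trials; a cue benefit would appear as higher accuracy for cued == True.
    demo = [Trial(True, True, True, True), Trial(False, True, True, False),
            Trial(True, False, False, False), Trial(False, False, True, True)]
    print(accuracy_by(demo, "cued"))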

2017 ◽  
Author(s):  
Katherine Wood ◽  
Daniel J. Simons

How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the post-change display consistently indicated the change location. In fact, subjects were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy of searching for a familiar object as a way to improve performance on the change detection task, they performed no better than in a six-alternative recognition memory task. Subjects benefited from the presence of familiar objects in the change detection task only when they had more time to view the pre-change array before it switched. Once the cost of using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.
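By way of illustration, a common way to compare recognition and change detection performance is signal-detection sensitivity (d'). The sketch below computes d' from hit and false-alarm rates with a standard log-linear correction; the rates shown are placeholders, not Wood and Simons' data.

    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate, n_trials=100):
        """Signal-detection sensitivity, with a log-linear correction to
        avoid infinite z-scores when a rate is exactly 0 or 1."""
        hit = (hit_rate * n_trials + 0.5) / (n_trials + 1)
        fa = (fa_rate * n_trials + 0.5) / (n_trials + 1)
        z = NormalDist().inv_cdf
        return z(hit) - z(fa)

    # Placeholder rates: recognition memory typically outperforms change detection.
    print("recognition d':     ", d_prime(hit_rate=0.90, fa_rate=0.15))
    print("change detection d':", d_prime(hit_rate=0.65, fa_rate=0.30))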


Author(s):  
Mitchell R. P. LaPointe ◽  
Rachael Cullen ◽  
Bianca Baltaretu ◽  
Melissa Campos ◽  
Natalie Michalski ◽  
...  

2015 ◽  
Vol 114 (5) ◽  
pp. 2637-2648 ◽  
Author(s):  
Fabrice Arcizet ◽  
Koorosh Mirpour ◽  
Daniel J. Foster ◽  
Caroline J. Charpentier ◽  
James W. Bisley

When looking around at the world, we can only attend to a limited number of locations. The lateral intraparietal area (LIP) is thought to play a role in guiding both covert attention and eye movements. In this study, we tested the involvement of LIP in both mechanisms with a change detection task. In the task, animals had to indicate whether an element changed during a blank in the trial by making a saccade to it; if no element changed, they had to maintain fixation. We examined how the animal's behavior was biased by LIP activity prior to the presentation of the stimulus to which the animal had to respond. When this activity was high, the animal was more likely to make an eye movement toward the stimulus, even if there was no change; when the activity was low, the animal either had a slower reaction time or maintained fixation, even if a change had occurred. We conclude that LIP activity is involved in both covert and overt attention, but that when decisions about eye movements are to be made, this overt role takes precedence over guiding covert attention.
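The relationship described above can be illustrated with a simple median split of pre-stimulus activity, as in the Python sketch below; the firing rates, the assumed link between activity and saccade probability, and the cut-off are synthetic placeholders, not the authors' data or analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 200
    firing_rate = rng.gamma(shape=4.0, scale=5.0, size=n_trials)   # pre-stimulus LIP rate (spikes/s), synthetic
    p_saccade = 1 / (1 + np.exp(-(firing_rate - 20) / 5))          # assumed link, for illustration only
    made_saccade = rng.random(n_trials) < p_saccade                # True if the animal looked at the stimulus

    # Median split on pre-stimulus activity: higher activity should go with more saccades.
    high = firing_rate >= np.median(firing_rate)
    print("P(saccade | high activity):", made_saccade[high].mean())
    print("P(saccade | low activity): ", made_saccade[~high].mean())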


2021 ◽  
Author(s):  
Ilenia Paparella ◽  
Liuba Papeo

Working memory (WM) uses knowledge and relations to organize and store multiple individual items as a smaller set of structured units, or chunks. We investigated whether a crowd of individuals that exceeds WM capacity is retained, and therefore recognized, more accurately if the individuals are represented as interacting with one another, i.e., if they form social chunks. Further, we asked what counts as a social chunk in WM: two individuals involved in a meaningful interaction, or two individuals merely spatially close and face-to-face. In three experiments with a delayed change-detection task, participants had to report whether a probe array was the same as, or different from, a sample array featuring two or three dyads of bodies either face-to-face (facing array) or back-to-back (non-facing array). In Experiment 1, where facing dyads depicted coherent, meaningful interactions, participants were more accurate at detecting changes in facing (vs. non-facing) arrays. A similar advantage was found in Experiment 2, even though the facing dyads depicted no meaningful interaction. In Experiment 3, we introduced a secondary task (verbal shadowing) to increase WM load. This manipulation abolished the advantage of facing (vs. non-facing) arrays only when the facing dyads depicted no meaningful interactions. These results show that WM uses representations of interaction to chunk crowds into social groups. The mere facingness of bodies is sufficient on its own to evoke a representation of interaction, thus defining a social chunk in WM, although the lack of a semantic anchor makes chunking fainter and more susceptible to interference from a secondary task.
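As a purely illustrative sketch, accuracy in a design like this can be tabulated per cell of array type (facing vs. non-facing) by load (with vs. without verbal shadowing); the field names and demo records below are assumptions, not the study's data.

    from collections import defaultdict

    def accuracy_table(trials):
        """Mean change-detection accuracy for each (array, load) cell.
        Each trial is a dict like {'array': 'facing', 'load': 'shadowing', 'correct': True}."""
        counts = defaultdict(lambda: [0, 0])          # cell -> [n_correct, n_total]
        for t in trials:
            cell = (t["array"], t["load"])
            counts[cell][0] += int(t["correct"])
            counts[cell][1] += 1
        return {cell: n_correct / n_total for cell, (n_correct, n_total) in counts.items()}

    demo = [{"array": "facing", "load": "none", "correct": True},
            {"array": "non-facing", "load": "none", "correct": False},
            {"array": "facing", "load": "shadowing", "correct": False},
            {"array": "non-facing", "load": "shadowing", "correct": True}]
    print(accuracy_table(demo))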

