The impact of distractors on visual short-term memory representation in early visual areas

2013 ◽ Vol 13 (9) ◽ pp. 2-2
Author(s): K. Bettencourt ◽ Y. Xu

2020
Author(s): Jim Grange ◽ Stuart Bryan Moore ◽ Ed David John Berry

Visual short-term memory (vSTM) is often measured via continuous-report tasks in which participants are presented with stimuli that vary along a continuous dimension (e.g., colour), with the goal of memorising the stimulus features. At test, participants are probed to recall the feature value of one of the memoranda in a continuous manner (e.g., by clicking on a colour wheel). The angular deviation between the participant's response and the true feature value provides an estimate of recall precision, and hence of vSTM precision. Two prominent models of performance on such tasks are the two- and three-component mixture models (Bays et al., 2009; Zhang & Luck, 2008). Both models decompose participant responses into probabilistic mixtures of (1) responses to the true target value based on a noisy memory representation and (2) random guessing when memory fails. In addition, the three-component model proposes (3) responses to a non-target feature value (i.e., binding errors). Here we report the development of mixtur, an open-source package written for the statistical programming language R that facilitates the fitting of the two- and three-component mixture models to continuous-report data. We also report the results of several simulations conducted to develop recommendations for researchers on trial numbers, set sizes, and memoranda similarity, as well as parameter-recovery and model-recovery simulations. It is our hope that mixtur will lower the barrier of entry for utilising mixture modelling of continuous-report data.
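The two-component model described above can be sketched numerically: under the standard assumption that responses from memory follow a von Mises (circular normal) distribution around the true feature value while guesses are uniform on the circle, the mixture weights can be recovered by maximum likelihood. This is an illustrative Python sketch, not the mixtur package's R implementation; the function names, starting values, and bounds are our own choices.

```python
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

def neg_log_lik(params, errors):
    # params: probability of a memory-based response, von Mises concentration
    p_mem, kappa = params
    # Mixture density: target responses are von Mises-distributed around 0,
    # guesses are uniform on the circle (-pi, pi]
    dens = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
    return -np.sum(np.log(dens))

def fit_two_component(errors):
    # Bounded maximum-likelihood fit (L-BFGS-B via scipy's default for bounds)
    res = minimize(neg_log_lik, x0=[0.8, 5.0], args=(errors,),
                   bounds=[(1e-4, 1 - 1e-4), (1e-2, 100.0)])
    return res.x  # estimated (p_mem, kappa)

# Simulate 2000 trials: 70% noisy target responses, 30% random guesses
rng = np.random.default_rng(1)
n = 2000
is_mem = rng.random(n) < 0.7
errors = np.where(is_mem,
                  rng.vonmises(0.0, 8.0, n),
                  rng.uniform(-np.pi, np.pi, n))
p_mem_hat, kappa_hat = fit_two_component(errors)
```

The three-component model extends the same likelihood with an additional von Mises component centred on each non-target feature value, weighted by a binding-error probability.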


NeuroImage ◽ 2020 ◽ Vol 208 ◽ pp. 116440
Author(s): Aurore Menegaux ◽ Felix J.B. Bäuerlein ◽ Aliki Vania ◽ Natan Napiorkowski ◽ Julia Neitzel ◽ ...

2016
Author(s): Michele Veldsman ◽ Daniel J. Mitchell ◽ Rhodri Cusack

Abstract
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained than when simple objects are. It is not yet known whether it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions that supports maintenance of simple objects.

We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli in order to investigate the impact of recognisability on VSTM. We adapted the widely used change detection and continuous report paradigms for use with complex, photographic images.

Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of the recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable, complex objects. We therefore propose that a richer range of neural representations supports VSTM for complex recognisable objects.
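The idea of warping photographs along a parametric continuum can be illustrated with a toy sketch: displace pixel coordinates by a smooth random field whose amplitude sets the position on the continuum. This is a generic illustration under our own assumptions (displacement-field smoothness, bilinear resampling), not the authors' actual stimulus generation method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp_image(img, strength, seed=0):
    """Displace pixel coordinates by a smooth random field.

    strength=0 leaves the image unchanged; larger values make the image
    progressively less recognisable while leaving low-level properties
    comparatively intact (toy illustration only).
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Smooth (low-spatial-frequency) random displacement fields
    dy = gaussian_filter(rng.standard_normal((h, w)), sigma=8) * strength
    dx = gaussian_filter(rng.standard_normal((h, w)), sigma=8) * strength
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Bilinear resampling of the image at the displaced coordinates
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode="reflect")

# A warp continuum: increasing strength yields progressively more
# distorted versions of the same image
img = np.random.default_rng(2).random((64, 64))
continuum = [warp_image(img, s) for s in (0.0, 10.0, 20.0, 40.0)]
```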


2019
Author(s): Bobby Stojanoski ◽ Stephen M. Emrich ◽ Rhodri Cusack

Abstract
We rely upon visual short-term memory (VSTM) for continued access to perceptual information that is no longer available. Despite the complexity of our visual environments, the majority of research on VSTM has focused on memory for lower-level perceptual features. Using more naturalistic stimuli, it has been found that recognizable objects are remembered better than unrecognizable ones. What remains unclear, however, is how semantic information changes brain representations to facilitate this improvement in VSTM for real-world objects. To address this question, we used a continuous report paradigm to assess VSTM (precision and guessing rate) while participants underwent functional magnetic resonance imaging (fMRI) to measure the underlying neural representations of 96 objects from 4 animate and 4 inanimate categories. To isolate semantic content, we used a novel image generation method that parametrically warps images until they are no longer recognizable while preserving basic visual properties. We found that intact objects were remembered with greater precision and a lower guessing rate than unrecognizable objects (this advantage also emerged when objects were grouped by category and animacy). Representational similarity analysis of the ventral visual stream found evidence of category and animacy information in anterior visual areas during encoding, but not during maintenance. These results suggest that semantic information in ventral visual areas during encoding boosts visual short-term memory for real-world objects.
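Testing for category or animacy information with representational similarity analysis, as in the abstract above, amounts to comparing a neural representational dissimilarity matrix (RDM) computed from activity patterns against a model RDM coding category membership. The following is a generic sketch, not the authors' pipeline; the array shapes, correlation-distance metric, and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns):
    # patterns: (n_items, n_voxels); condensed RDM of correlation distances
    return pdist(patterns, metric="correlation")

def model_rdm(labels):
    # 0 if two items share a category label, 1 otherwise
    return pdist(np.asarray(labels)[:, None], metric="hamming")

def rsa_score(patterns, labels):
    # Rank-correlate neural and categorical dissimilarity structure;
    # positive values indicate category information in the patterns
    rho, _ = spearmanr(neural_rdm(patterns), model_rdm(labels))
    return rho

# Synthetic check: two categories whose members share a common pattern
rng = np.random.default_rng(0)
animate, inanimate = rng.standard_normal((2, 100))
patterns = np.stack([animate + 0.1 * rng.standard_normal(100) for _ in range(3)]
                    + [inanimate + 0.1 * rng.standard_normal(100) for _ in range(3)])
score = rsa_score(patterns, [0, 0, 0, 1, 1, 1])
```

Running the same comparison separately on encoding-period and maintenance-period patterns is what licenses conclusions like "category information is present during encoding only".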


Author(s): Kevin Dent

In two experiments participants retained a single color or a set of four spatial locations in memory. During a 5 s retention interval participants viewed either flickering dynamic visual noise (DVN) or a static matrix pattern. In Experiment 1 memory was assessed using a recognition procedure, in which participants indicated whether a particular test stimulus matched the memorized stimulus. In Experiment 2 participants attempted either to reproduce the locations or to pick the color from the full range of possibilities. Both experiments revealed effects of DVN on memory for colors but not for locations. The implications of these results for theories of working memory, and the methodological prospects of DVN as an experimental tool, are discussed.
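A dynamic visual noise display of the kind described above is simply a grid of cells that are independently resampled as black or white on every frame. This minimal sketch uses illustrative parameters (grid size, density, frame count); they are our assumptions, not the experiment's actual settings.

```python
import numpy as np

def dvn_frames(n_frames, size=32, p=0.5, seed=0):
    # Dynamic visual noise: each cell of each frame is independently
    # black (0) or white (1), so the display flickers across frames
    rng = np.random.default_rng(seed)
    return (rng.random((n_frames, size, size)) < p).astype(np.uint8)

# e.g., a 5 s retention-interval stream at an assumed 50 frames per second
frames = dvn_frames(n_frames=250)
```

A static matrix control condition corresponds to presenting a single such frame for the full retention interval.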

