Current trends in working memory research: evidence from functional neuroimaging

Author(s):  
Christoph Bledowski
2010
Vol 41 (01)
Author(s):
C Rottschy
S Eickhoff
I Dogan
A Laird
P Fox
...  

2007
Vol 362 (1481)
pp. 761-772
Author(s):  
Mark D'Esposito

Working memory refers to the temporary retention of information that was just experienced or just retrieved from long-term memory but no longer exists in the external environment. These internal representations are short-lived, but can be stored for longer periods of time through active maintenance or rehearsal strategies, and can be subjected to various operations that manipulate the information in such a way that makes it useful for goal-directed behaviour. Empirical studies of working memory using neuroscientific techniques, such as neuronal recordings in monkeys or functional neuroimaging in humans, have advanced our knowledge of the underlying neural mechanisms of working memory. This rich dataset can be reconciled with behavioural findings derived from investigating the cognitive mechanisms underlying working memory. In this paper, I review the progress that has been made towards this effort by illustrating how investigations of the neural mechanisms underlying working memory can be influenced by cognitive models and, in turn, how cognitive models can be shaped and modified by neuroscientific data. One conclusion that arises from this research is that working memory can be viewed as neither a unitary nor a dedicated system. A network of brain regions, including the prefrontal cortex (PFC), is critical for the active maintenance of internal representations that are necessary for goal-directed behaviour. Thus, working memory is not localized to a single brain region but probably is an emergent property of the functional interactions between the PFC and the rest of the brain.


2020
Vol 28 (5-8)
pp. 325-329
Author(s):  
Christian N. L. Olivers
Stefan Van der Stigchel

2005
Vol 50 (2)
pp. 739-752
Author(s):  
Akira Mizuno

Abstract This paper attempts to combine interpreting studies with working memory research and proposes a theoretical framework for a process model of simultaneous interpreting. First, Cowan's embedded model of working memory is introduced as the most promising model for accounting for the various phenomena of simultaneous interpreting. This is followed by a description of the functions of the model's components and of the nature of the information maintained in working memory. The model is then applied to a small corpus of simultaneous interpreting in an attempt to explain the load-reduction strategies employed by interpreters working between Japanese and English, as well as the translation failures caused by working memory overload.


2016
Vol 47 (1)
pp. 51-61
Author(s):  
Konrad Kulikowski
Katarzyna Potasz-Kulikowska

Abstract The aim of this study was to test whether an online n-back task conducted in the uncontrolled environment of the Internet can yield valid and reliable data. For this purpose, 169 participants completed an online n-back task with n1, n2 and n3 blocks on their home computers. The results showed acceptable reliability for the overall accuracy and reaction time indices across the n1, n2 and n3 blocks, as well as for the reaction time indices of each n block. Unacceptable reliability was found for the accuracy indices of the separate n levels and for the response bias indices. Confirmatory factor analysis revealed that, among eight proposed measurement models, the best fit to the collected data was a model with two uncorrelated factors: accuracy, consisting of the n1, n2 and n3 indices, and reaction time, consisting of the n2 and n3 indices. These results demonstrate for the first time that reliable administration of an online n-back task is possible, which may open new opportunities for working memory research.
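For readers unfamiliar with the paradigm, the sketch below shows how one block of an n-back task might be scored: a trial is a target when its stimulus matches the one shown n trials earlier, and each block yields an accuracy index and a reaction time index of the kind analysed above. This is a minimal sketch under assumed data structures; the Trial record and score_block function are illustrative, not the task code used in the study.

```python
# Minimal sketch of scoring one n-back block; Trial and score_block are
# illustrative assumptions, not the implementation used in the study.
from dataclasses import dataclass

@dataclass
class Trial:
    stimulus: str    # letter shown on this trial
    responded: bool  # whether the participant pressed the "match" key
    rt_ms: float     # response time in milliseconds

def is_target(stream, i, n):
    """A trial is a target if its stimulus matches the one n trials back."""
    return i >= n and stream[i] == stream[i - n]

def score_block(trials, n):
    """Overall accuracy and mean RT of correct 'match' responses for one block."""
    stream = [t.stimulus for t in trials]
    correct, rts = [], []
    for i, t in enumerate(trials):
        ok = t.responded == is_target(stream, i, n)  # hit or correct rejection
        correct.append(ok)
        if ok and t.responded:                       # keep RT only for correct hits
            rts.append(t.rt_ms)
    return {"accuracy": sum(correct) / len(correct),
            "mean_rt_ms": sum(rts) / len(rts) if rts else None}

# Example 2-back block: the second 'B' matches the stimulus two trials back.
block = [Trial("A", False, 0.0), Trial("B", False, 0.0),
         Trial("C", False, 0.0), Trial("B", True, 512.0)]
print(score_block(block, n=2))  # {'accuracy': 1.0, 'mean_rt_ms': 512.0}
```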


2017
Vol 33 (3)
pp. 291-297
Author(s):  
Michael Sharwood Smith

Working memory is generally understood to refer to a limited storage facility for information temporarily needed during online processing. It figures with increasing frequency in studies of second language development and, more widely, in research on bilingual and multilingual acquisition and attrition. The importance of the concept to our understanding justifies this special issue, in which both general and specifically second language (L2) oriented topics related to working memory are discussed. Unsurprisingly, working memory remains a theoretically contested concept, since we still have much to learn about how the mind and brain work. Many researchers do not themselves investigate the nature of memory, yet they still rely on the concept, and on the various related measures developed in psychology, in their own work; for these researchers it remains important to keep abreast of developments in memory research both within and beyond their own area.


2004
Vol 16 (2)
pp. 289-300
Author(s):  
Philip Nixon
Jenia Lazarova
Iona Hodinott-Hill
Patricia Gough
Richard Passingham

Repetitive transcranial magnetic stimulation (rTMS) offers a powerful new technique for investigating the distinct contributions of the cortical language areas. We have used this method to examine the role of the left inferior frontal gyrus (IFG) in phonological processing and verbal working memory. Functional neuroimaging studies have implicated the posterior part of the left IFG in both phonological decision making and subvocal rehearsal mechanisms, but imaging is a correlational method and it is therefore necessary to determine whether this region is essential for such processes. In this paper we present the results of two experiments in which rTMS was applied over the frontal operculum while subjects performed a delayed phonological matching task. We compared the effects of disrupting this area either during the delay (memory) phase or at the response (decision) phase of the task. Delivered at a time when subjects were required to remember the sound of a visually presented word, rTMS impaired the accuracy with which they subsequently performed the task. However, when delivered later in the trial, as the subjects compared the remembered word with a given pseudoword, rTMS did not impair accuracy. Performance by the same subjects on a control task that required the processing of nonverbal visual stimuli was unaffected by the rTMS. Similarly, performance on both tasks was unaffected by rTMS delivered over a more anterior site (pars triangularis). We conclude that the opercular region of the IFG is necessary for the normal operation of phonologically based working memory mechanisms. Furthermore, this study shows that rTMS can shed further light on the precise role of cortical language areas in humans.


2021
Author(s):  
Timothy F. Brady ◽  
Maria Martinovna Robinson ◽  
Jamal Rodgers Williams ◽  
John Wixted

There is a crisis of measurement in memory research, with major implications for theory and practice. This crisis arises from a critical complication in measuring memory with the recognition task that dominates the study of working memory and long-term memory (“did you see this item? yes/no” or “did this item change? yes/no”). Such tasks give two measures of performance: the “hit rate” (how often you say you previously saw an item you actually did previously see) and the “false alarm rate” (how often you say you saw something you never saw). Yet what researchers want is a single, integrated measure of memory performance. Integrating the hit and false alarm rates into a single measure, however, requires solving a complex problem of counterfactual reasoning that depends on the (unknowable) distribution of underlying memory signals: when faced with two people differing in both hit rate and false alarm rate, the question of who had the better memory is really “who would have had more hits if they each had the same number of false alarms?”. As a result of this difficulty, different literatures in memory research (e.g., visual working memory, eyewitness identification, picture memory) have settled on a variety of distinct metrics to combine hit rates and false alarm rates (e.g., A′, corrected hit rate, percent correct, d′, diagnosticity ratios, K values). These metrics make different, contradictory assumptions about the distribution of latent memory signals, and all of these assumptions are frequently incorrect. Despite a decades-long literature on how to measure memory performance properly, real-life decisions are often based on these metrics, even when they subsequently prove wrong once memory is studied with better measures. We suggest that for the psychology and neuroscience of memory to become a cumulative, theory-driven science, more attention must be given to measurement. We make a concrete suggestion: the default memory task should change from old/new (“did you see this item?”) to forced-choice (“which of these two items did you see?”). In situations where old/new variants are preferred (e.g., eyewitness identification; theoretical investigations of the nature of memory decisions), receiver operating characteristic (ROC) analysis should always be performed.
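To make the disagreement concrete, the sketch below computes several of the metrics the authors list from the same pair of hit and false alarm rates, using textbook signal detection formulas rather than anything from the paper itself; the equal-variance d′, Pollack and Norman's A′, and Cowan's K (with an assumed set size) each embody different assumptions about the latent memory signals, and the function name is hypothetical.

```python
# Hedged sketch: textbook signal detection formulas for the competing
# recognition-memory metrics; not code from Brady et al. Rates must lie
# strictly between 0 and 1 for the probit transform to be defined.
from statistics import NormalDist

def recognition_metrics(hit_rate, fa_rate, set_size=6):
    z = NormalDist().inv_cdf                   # probit (inverse normal CDF)
    h, f = hit_rate, fa_rate
    return {
        "d_prime": z(h) - z(f),                # equal-variance Gaussian model
        "corrected_hit_rate": h - f,           # high-threshold model
        "percent_correct": (h + (1 - f)) / 2,  # equal numbers of old/new trials
        "K": set_size * (h - f),               # Cowan's K; set size is assumed
        # Pollack & Norman's A'; this form assumes h >= f.
        "A_prime": 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f)),
    }

# Two observers differing in both rates.
print(recognition_metrics(0.75, 0.25))
print(recognition_metrics(0.90, 0.40))
```

With these two example observers, corrected hit rate, percent correct, and K declare a tie, while d′ and A′ both favour the second observer: the same data, ranked differently depending on each metric's distributional assumptions, which is exactly the contradiction the abstract describes.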

