Mode of Input in Long-Term Recognition

1977 ◽  
Vol 44 (3) ◽  
pp. 736-738 ◽  
Author(s):  
Neil H. Schwartz ◽  
A. Alexander Garabedian ◽  
Raymond S. Dean ◽  
Frank R. Yekovich

Recent research by Garabedian, Yekovich, Sherman, and Dean demonstrated that recognition from long-term memory was better for words presented visually than for those presented auditorily. Because their results could have been influenced by the salience of the projected visual items, the present investigation attempted to eliminate this possible visual bias. Sixteen undergraduates were presented with a list of nouns of mixed input modality and were tested 6 min. later for incidental recognition of input mode. The present data corroborated the findings of Garabedian et al., who postulated that subjects use some form of stored visual information when identifying the input mode of words.

2020 ◽  
Author(s):  
John J Shaw ◽  
Zhisen Urgolites ◽  
Padraic Monaghan

Visual long-term memory has a large and detailed storage capacity for individual scenes, objects, and actions. However, memory for combinations of actions and scenes is poorer, suggesting difficulty in binding this information together. Sleep can enhance declarative memory, but whether sleep also boosts memory for bound information, and whether any such effect generalizes across different types of information, is not yet known. Experiments 1 to 3 tested the effects of sleep on binding actions and scenes, and Experiments 4 and 5 tested binding of objects and scenes. Participants viewed composites and were tested 12 hours later, after a delay spent asleep (9 p.m. to 9 a.m.) or awake (9 a.m. to 9 p.m.), on an alternative forced-choice recognition task. For action-scene composites, memory was relatively poor, with no significant effect of sleep. For object-scene composites, sleep did improve memory. Sleep can therefore promote binding in memory, depending on the type of information to be combined.


2019 ◽  
Vol 28 (1) ◽  
pp. 65-77 ◽  
Author(s):  
Cyntia Diógenes Ferreira ◽  
Maria José Nunes Gadelha ◽  
Égina Karoline Gonçalves da Fonsêca ◽  
Joenilton Saturnino Cazé da Silva ◽  
Nelson Torro ◽  
...  

2011 ◽  
Vol 23 (11) ◽  
pp. 3540-3554 ◽  
Author(s):  
Patrick H. Khader ◽  
Thorsten Pachur ◽  
Stefanie Meier ◽  
Siegfried Bien ◽  
Kerstin Jost ◽  
...  

Many of our daily decisions are memory based, that is, the attribute information about the decision alternatives has to be recalled. Behavioral studies suggest that for such decisions we often use simple strategies (heuristics) that rely on controlled and limited information search. It is assumed that these heuristics simplify decision-making by activating long-term memory representations of only those attributes that are necessary for the decision. However, from behavioral studies alone, it is unclear whether using heuristics is indeed associated with limited memory search. The present study tested this assumption by monitoring the activation of specific long-term-memory representations with fMRI while participants made memory-based decisions using the “take-the-best” heuristic. For different decision trials, different numbers and types of information had to be retrieved and processed. The attributes consisted of visual information known to be represented in different parts of the posterior cortex. We found that the amount of information required for a decision was mirrored by a parametric activation of the dorsolateral PFC. Such a parametric pattern was also observed in all posterior areas, suggesting that activation was not limited to those attributes required for a decision. However, the posterior increases were systematically modulated by the relative importance of the information for making a decision. These findings suggest that memory-based decision-making is mediated by the dorsolateral PFC, which selectively controls posterior storage areas. In addition, the systematic modulations of the posterior activations indicate a selective boosting of activation of decision-relevant attributes.
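The "take-the-best" rule referenced above is itself a simple, well-documented decision algorithm: cues are inspected in descending order of validity, and the first cue that discriminates between the two alternatives determines the choice. The minimal Python sketch below illustrates that general rule only; the cue names, cue order, and example values are hypothetical and are not taken from the study's stimuli.

```python
# Illustrative sketch of the take-the-best heuristic: search cues in order
# of validity and let the first discriminating cue decide. Cue names and
# example values are hypothetical, not the study's actual attributes.

def take_the_best(alt_a, alt_b, cues_by_validity):
    """Return 'A', 'B', or 'guess' for a paired comparison.

    alt_a, alt_b: dicts mapping cue name -> binary cue value (1 = positive).
    cues_by_validity: cue names ordered from most to least valid.
    """
    for cue in cues_by_validity:                    # limited, ordered search
        a_val = alt_a.get(cue, 0)
        b_val = alt_b.get(cue, 0)
        if a_val != b_val:                          # first discriminating cue
            return 'A' if a_val > b_val else 'B'    # ...stops the search
    return 'guess'                                  # no cue discriminates


if __name__ == "__main__":
    # Hypothetical example: which of two cities is larger?
    city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
    city_b = {"has_airport": 1, "is_capital": 1, "has_university": 0}
    cue_order = ["is_capital", "has_airport", "has_university"]
    print(take_the_best(city_a, city_b, cue_order))  # -> 'B'
```

In this sketch the search stops as soon as one cue discriminates, which is the limited-information-search property the fMRI study probes.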


2017 ◽  
Vol 26 (1) ◽  
pp. 3-9 ◽  
Author(s):  
Stephen Darling ◽  
Richard J. Allen ◽  
Jelena Havelka

Visuospatial bootstrapping is the name given to a phenomenon whereby performance on visually presented verbal serial-recall tasks is better when stimuli are presented in a spatial array rather than a single location. However, the display used has to be a familiar one. This phenomenon implies communication between cognitive systems involved in storing short-term memory for verbal and visual information, alongside connections to and from knowledge held in long-term memory. Bootstrapping is a robust, replicable phenomenon that should be incorporated in theories of working memory and its interaction with long-term memory. This article provides an overview of bootstrapping, contextualizes it within research on links between long-term knowledge and short-term memory, and addresses how it can help inform current working memory theory.


2016 ◽  
Vol 113 (27) ◽  
pp. 7459-7464 ◽  
Author(s):  
Timothy F. Brady ◽  
Viola S. Störmer ◽  
George A. Alvarez

Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli—colors and orientations—is encoded into working memory rapidly: In under 100 ms, working memory “fills up,” revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge.


2021 ◽  
Author(s):  
Joseph M. Saito ◽  
Matthew Kolisnyk ◽  
Keisuke Fukuda

Despite the massive capacity of visual long-term memory, individuals do not successfully encode all visual information they wish to remember. This variability in encoding success has been traditionally ascribed to fluctuations in individuals’ cognitive states (e.g., sustained attention) and differences in memory encoding processes (e.g., depth of encoding). However, recent work has shown that a considerable amount of variability in encoding success stems from intrinsic stimulus properties that determine the ease of encoding across individuals. While researchers have identified several perceptual and semantic properties that contribute to this stimulus memorability phenomenon, much remains unknown, including whether individuals are aware of the memorability of stimuli they encounter. In the present study, we investigated whether individuals have conscious access to the memorability of real-world stimuli while forming self-referential judgments of learning (JOL) during explicit memory encoding (Experiments 1A-B) and when asked about the perceived memorability of a stimulus in the absence of attempted encoding (Experiments 2A-B). We found that both JOLs and perceived memorability estimates were consistent across individuals and reliably predicted stimulus memorability. However, this apparent access to the properties that define memorability was not comprehensive. Individuals unexpectedly remembered and forgot consistent sets of stimuli as well. Thus, our findings demonstrate that individuals have conscious access to some—but not all—aspects of stimulus memorability and that this access exists regardless of the present demands on stimulus encoding.


2016 ◽  
Vol 39 ◽  
Author(s):  
Mary C. Potter

Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In perception of scenes as well as language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.

