What’s in a chunk? Chunking and data compression in verbal short-term memory

2018
Author(s): Dennis Graham Norris, Kristjan Kalm

Short-term verbal memory is improved when words in the input can be chunked into larger units. Miller (1956) suggested that the capacity of verbal short-term memory is determined by the number of chunks that can be stored in memory, and not by the number of items or the amount of information. But how does the improvement due to chunking come about? Is memory really determined by the number of chunks? One possibility is that chunking is a form of data compression. Chunking allows more information to be stored in the available capacity. An alternative is that chunking operates primarily by redintegration. Chunks exist only in long-term memory, and enable items in the input which correspond to chunks to be reconstructed more reliably from a degraded trace. We review the data favoring each of these views and discuss the implications of treating chunking as data compression. Contrary to Miller, we suggest that memory capacity is primarily determined by the amount of information that can be stored. However, given the limitations on the representations that can be stored in verbal short-term memory, chunking can sometimes allow the information capacity of short-term memory to be exploited more efficiently.
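
To make the data-compression reading concrete, the following minimal sketch (in Python) counts storage cost in bits rather than items; the lexicon size, chunk inventory, and greedy recoding scheme are illustrative assumptions, not quantities taken from the paper.

```python
import math

# Illustrative assumption: storage cost is measured in bits of information.
VOCAB_SIZE = 10_000        # assumed lexicon of single words
CHUNK_INVENTORY = 1_000    # assumed stock of familiar multi-word chunks in LTM

def storage_cost_bits(words, known_chunks):
    """Greedy left-to-right recoding: familiar word sequences are stored
    as single chunk codes; everything else is stored word by word."""
    bits, i = 0.0, 0
    while i < len(words):
        for size in (3, 2):                          # prefer the longest matching chunk
            if tuple(words[i:i + size]) in known_chunks:
                bits += math.log2(CHUNK_INVENTORY)   # one chunk code
                i += size
                break
        else:
            bits += math.log2(VOCAB_SIZE)            # one unchunked word
            i += 1
    return bits

sequence = ["leave", "of", "absence", "apple", "window"]
chunks = {("leave", "of", "absence")}

print(storage_cost_bits(sequence, chunks))   # ~36.5 bits with chunk recoding
print(storage_cost_bits(sequence, set()))    # ~66.4 bits stored word by word
```

On this view the chunked list is cheaper because a single chunk code replaces several word codes; a pure chunk-counting account would instead treat the two lists as costing three and five units respectively.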

2018
Author(s): Dennis Graham Norris, Kristjan Kalm

Memory for verbal material improves when words form familiar chunks. But how does the improvement due to chunking come about? Two possible explanations are that the input might be actively recoded into chunks, each of which takes up less memory capacity than items not forming part of a chunk (a form of data compression), or that chunking is based on redintegration. If chunking is achieved by redintegration, representations of chunks exist only in long-term memory and help to reconstruct degraded traces in short-term memory. In six experiments using two-alternative forced-choice recognition and immediate serial recall, we find that when chunks are small (two words) they display a pattern suggestive of redintegration, while larger chunks (three words) show a pattern consistent with data compression. This concurs with previous data showing that there is a cost involved in recoding material into chunks in short-term memory. With smaller chunks this cost seems to outweigh the benefits of recoding words into chunks. The main features of the serial recall data can be captured by a simple extension to the Primacy model of Page and Norris (1998).


Author(s): Stoo Sepp, Steven J. Howard, Sharon Tindall-Ford, Shirley Agostinho, Fred Paas

In 1956, Miller first reported on a capacity limitation in the amount of information the human brain can process, which was thought to be seven plus or minus two items. The system of memory used to process information for immediate use was coined “working memory” by Miller, Galanter, and Pribram in 1960. In 1968, Atkinson and Shiffrin proposed their multistore model of memory, which theorized that the memory system was separated into short-term memory, long-term memory, and the sensory register, the latter of which temporarily holds and forwards information from sensory inputs to short-term memory for processing. Baddeley and Hitch built upon the concept of multiple stores, leading to the development of the multicomponent model of working memory in 1974, which described two stores devoted to the processing of visuospatial and auditory information, both coordinated by a central executive system. Later, Cowan’s theorizing focused on attentional factors in the effortful and effortless activation and maintenance of information in working memory; in 1988, Cowan published his model, the scope and control of attention model. In contrast, since the early 2000s Engle has investigated working memory capacity through the lens of his individual differences model, which does not seek to quantify capacity in the same way as Miller or Cowan. Instead, this model describes working memory capacity as the interplay between primary memory (working memory), the control of attention, and secondary memory (long-term memory). This affords the opportunity to focus on individual differences in working memory capacity and extends theorizing beyond storage to the manipulation of complex information. These models and advancements have made significant contributions to our understanding of learning and cognition, informing educational research and practice in particular. Emerging areas of inquiry include investigating the use of gestures to support working memory processing, leveraging working memory measures as a means to target instructional strategies for individual learners, and working memory training. Given that working memory is still debated and not yet fully understood, researchers continue to investigate its nature, its role in learning and development, and its implications for educational curricula, pedagogy, and practice.


1970 · Vol 22 (2) · pp. 261-273
Author(s): T. Shallice, Elizabeth K. Warrington

Five experiments are described concerning verbal short-term memory performance of a patient who has a very markedly reduced verbal span. The results of the first three, free recall, the Peterson procedure and an investigation of proactive interference, indicate that he has a greatly reduced short-term memory capacity, while the last two, probe recognition and missing scan, show that this cannot be attributed to a retrieval failure. Since his performance on long-term memory tasks is normal, it is difficult to explain these results with theories of normal functioning in which verbal STM and LTM use the same structures in different ways. They also make the serial model of the relation between STM and LTM less plausible and support a model in which verbal STM and LTM have parallel inputs.


2016 · Vol 39
Author(s): Mary C. Potter

Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In perception of scenes as well as language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.


2020 · Vol 29 (4) · pp. 710-727
Author(s): Beula M. Magimairaj, Naveen K. Nagaraj, Alexander V. Sergeev, Natalie J. Benafield

Objectives: School-age children with and without parent-reported listening difficulties (LiD) were compared on auditory processing, language, memory, and attention abilities. The objective was to extend what is known so far in the literature about children with LiD by using multiple measures and selective novel measures across the above areas. Design: Twenty-six children who were reported by their parents as having LiD and 26 age-matched typically developing children completed clinical tests of auditory processing and multiple measures of language, attention, and memory. All children had normal-range pure-tone hearing thresholds bilaterally. Group differences were examined. Results: In addition to significantly poorer speech-perception-in-noise scores, children with LiD had reduced speed and accuracy of word retrieval from long-term memory, poorer short-term memory, sentence recall, and inferencing ability. Statistically significant group differences were of moderate effect size; however, standard test scores of children with LiD were not clinically poor. No statistically significant group differences were observed in attention, working memory capacity, vocabulary, and nonverbal IQ. Conclusions: Mild signal-to-noise ratio loss, as reflected by the group mean of children with LiD, supported the children's functional listening problems. In addition, children's relative weakness in select areas of language performance, short-term memory, and long-term memory lexical retrieval speed and accuracy added to previous research on evidence-based areas that need to be evaluated in children with LiD, who almost always have heterogeneous profiles. Importantly, the functional difficulties faced by children with LiD in relation to their test results indicated, to some extent, that commonly used assessments may not be adequately capturing the children's listening challenges. Supplemental Material: https://doi.org/10.23641/asha.12808607


1978 · Vol 10 (2) · pp. 141-148
Author(s): Mary Anne Herndon

In a model of the functioning of short-term memory, the encoding of information for subsequent storage in long-term memory is simulated. In the encoding process, semantically equivalent paragraphs are detected for recombination into a macro information unit. This recombination can be used to relieve the limited storage capacity of short-term memory and thereby increase processing efficiency. The results of the simulation give a favorable indication of the usefulness of cluster analysis as a tool for simulating the encoding function in the detection of semantically similar paragraphs.
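
As a rough illustration of this detection-and-recombination step, the sketch below groups paragraphs by a simple similarity measure and merges each group into a single macro information unit; the bag-of-words representation, cosine similarity, and threshold are assumptions made for illustration, not the encoding used in the original simulation.

```python
from collections import Counter
import math

def bag_of_words(text):
    """Crude semantic representation: word-frequency vector of the paragraph."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recombine(paragraphs, threshold=0.5):
    """Greedy single-pass clustering: each paragraph joins the first cluster
    it resembles, otherwise it starts a new one; each cluster is then
    recombined into one macro information unit."""
    vectors = [bag_of_words(p) for p in paragraphs]
    clusters = []                                  # lists of paragraph indices
    for i, vec in enumerate(vectors):
        for cluster in clusters:
            if cosine(vec, vectors[cluster[0]]) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return [" ".join(paragraphs[i] for i in cluster) for cluster in clusters]
```

Storing one macro unit per cluster, rather than every paragraph separately, is the sense in which the recombination relieves the limited storage capacity of short-term memory.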


2017 · Vol 14 (1) · pp. 172988141769231
Author(s): Ning An, Shi-Ying Sun, Xiao-Guang Zhao, Zeng-Guang Hou

Visual tracking is a challenging computer vision task due to significant changes in the observations of the target, while the same task is relatively easy for humans. In this article, we propose a tracker inspired by the memory mechanisms described in cognitive psychology, which decomposes the tracking task into a sensory memory register, a short-term memory tracker, and a long-term memory tracker, as in humans. The sensory memory register captures information with three-dimensional perception; the short-term memory tracker builds a highly plastic observation model via memory rehearsal; the long-term memory tracker builds a highly stable observation model via memory encoding and retrieval. With these cooperative models, the tracker can easily handle various tracking scenarios. In addition, an appearance-shape learning method is proposed to update the two-dimensional appearance model and three-dimensional shape model appropriately. Extensive experimental results on a large-scale benchmark data set demonstrate that the proposed method outperforms state-of-the-art two-dimensional and three-dimensional trackers in terms of efficiency, accuracy, and robustness.
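
The sketch below gives a hypothetical structural outline of such a memory-inspired tracker; the class and method names and the simple confidence-based cooperation rule are illustrative assumptions, not the authors' implementation.

```python
class MemoryInspiredTracker:
    """Hypothetical decomposition into a sensory register plus short-term and
    long-term memory trackers, mirroring the description above."""

    def __init__(self, sensory_register, short_term_model, long_term_model):
        self.sensory = sensory_register      # captures 3-D perceptual input each frame
        self.short_term = short_term_model   # highly plastic; updated every frame (rehearsal)
        self.long_term = long_term_model     # highly stable; updated only on confident frames

    def track(self, frame):
        observation = self.sensory.capture(frame)             # sensory memory register
        st_box, st_conf = self.short_term.locate(observation)
        lt_box, lt_conf = self.long_term.locate(observation)

        # Cooperation rule (assumed): fall back on the stable long-term model
        # when the plastic short-term model becomes unreliable (occlusion, drift).
        box = st_box if st_conf >= lt_conf else lt_box

        self.short_term.update(observation, box)               # memory rehearsal
        if st_conf >= lt_conf and st_conf > 0.8:                # assumed encoding threshold
            self.long_term.update(observation, box)             # memory encoding
        return box
```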


2005 · Vol 85 (1) · pp. 8-18
Author(s): Jill C. Heathcock, Anjana N. Bhat, Michele A. Lobo, James (Cole) Galloway

Background and Purpose: Infants born preterm differ from infants born full-term in their spontaneous kicking, as well as in their learning and memory abilities in the mobile paradigm. In the mobile paradigm, a supine infant's ankle is tethered to a mobile so that leg kicks cause a proportional amount of mobile movement. The purpose of this study was to investigate the relative kicking frequency of the tethered (right) and nontethered (left) legs in these 2 groups of infants. Subjects: Ten infants born full-term and 10 infants born preterm (<33 weeks gestational age, <2,500 g) participated in the study. Methods: The relative kicking frequencies of the tethered and nontethered legs were analyzed during the learning, short-term memory, and long-term memory periods of the mobile paradigm. Results: Infants born full-term showed an increase in the relative kicking frequency of the tethered leg during the learning period and the short-term memory period but not during the long-term memory period. Infants born preterm did not show a change in kicking pattern during the learning or memory periods and consistently kicked both legs in relatively equal amounts. Discussion and Conclusion: Infants born full-term adapted their baseline kicking frequencies in a task-specific manner to move the mobile and then retained this adaptation for the short-term memory period. In contrast, infants born preterm showed no adaptation, suggesting a lack of purposeful leg control. This lack of control may reflect a general decrease in the ability of infants born preterm to use their limb movements to interact with their environment. As such, the mobile paradigm may be clinically useful in the early assessment of, and intervention with, infants born preterm and at risk for future impairment.


1974 · Vol 38 (2) · pp. 495-501
Author(s): Gilbert B. Tunnell, Philippe R. Falkenberg

Manipulation of the context in a short-term memory paradigm produces changes in the ability to recognize the same material from long-term memory 24 hr. later. If immediate recall is accurate, later recognition is improved if this recall is conducted with the same context as occurred at learning. If immediate recall is completely inaccurate, later recognition is improved if this recall is conducted with different context than was present at learning. Short-term recall did not need to be accurate to transfer the learned nonsense trigrams to long-term memory. Manipulation of context 24 hr. after learning had no effect on recognition. Results are discussed in terms of the Waugh and Norman memory model, Tulving's encoding specificity hypothesis, and interference theory.

