Short Article: Syntax and Serial Recall: How Language Supports Short-Term Memory for Order

2009 ◽  
Vol 62 (7) ◽  
pp. 1285-1293 ◽  
Author(s):  
Nick Perham ◽  
John E. Marsh ◽  
Dylan M. Jones

Familiar syntax supports short-term serial recall of visually presented six-item sequences: lists in which item pairs appeared in the order "adjective–noun" (items 1–2, 3–4, 5–6), congruent with English syntax, were recalled better than lists in which the order of items within pairs was reversed. The findings complement other evidence suggesting that short-term memory is an assemblage of language processing and production processes rather than a bespoke short-term memory storage system.
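The pair-order manipulation can be sketched in a few lines; the word pairs below are hypothetical examples, not the study's actual materials:

```python
# Illustrative sketch (not the authors' stimuli): building six-item lists whose
# pairs are either syntax-congruent ("adjective noun") or reversed.
EXAMPLE_PAIRS = [("red", "car"), ("tall", "tree"), ("old", "house")]  # hypothetical items

def build_list(pairs, reversed_pairs=False):
    """Flatten three word pairs into a six-item list for serial recall."""
    items = []
    for adjective, noun in pairs:
        if reversed_pairs:
            items.extend([noun, adjective])   # "noun adjective": incongruent with English
        else:
            items.extend([adjective, noun])   # "adjective noun": congruent with English
    return items

congruent = build_list(EXAMPLE_PAIRS)
incongruent = build_list(EXAMPLE_PAIRS, reversed_pairs=True)
```

The recall comparison is then between participants' serial recall of `congruent` versus `incongruent` lists.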

2017 ◽  
Vol 38 (01) ◽  
pp. 017-028 ◽  
Author(s):  
Irene Minkina ◽  
Samantha Rosenberg ◽  
Michelene Kalinyak-Fliszar ◽  
Nadine Martin

This article reviews existing research on the interactions between verbal short-term memory and language processing impairments in aphasia. Theoretical models of short-term memory are reviewed, starting with a model assuming a separation between short-term memory and language, and progressing to models that view verbal short-term memory as a cognitive requirement of language processing. The review highlights a verbal short-term memory model derived from an interactive activation model of word retrieval. This model holds that verbal short-term memory encompasses the temporary activation of linguistic knowledge (e.g., semantic, lexical, and phonological features) during language production and comprehension tasks. Empirical evidence supporting this model, which views short-term memory in the context of the processes it subserves, is outlined. Studies that use a classic measure of verbal short-term memory (i.e., number of words/digits correctly recalled in immediate serial recall) as well as those that use more intricate measures (e.g., serial position effects in immediate serial recall) are discussed. Treatment research that uses verbal short-term memory tasks in an attempt to improve language processing is then summarized, with a particular focus on word retrieval. A discussion of the limitations of current research and possible future directions concludes the review.


2005 ◽  
Vol 100 (2) ◽  
pp. 354-356 ◽  
Author(s):  
M. J. Brosnan

Serial recall tasks assess the capacity of verbal short-term memory. The perception of computing as an acquirable skill rather than a fixed ability affected performance on computer-based serial recall tasks but did not affect performance on comparable pencil-and-paper tasks. Computerized versions of traditional assessments should control for this.


2011 ◽  
Vol 33 (3) ◽  
pp. 605-621 ◽  
Author(s):  
ELIZABETH M. KISSLING

ABSTRACT The current study investigated native English and native Arabic speakers' phonological short-term memory for sequences of consonants and vowels. Phonological short-term memory was assessed in immediate serial recall tasks conducted in Arabic and English for both groups. Participants (n = 39) heard series of six consonant–vowel syllables and wrote down what they recalled. Native speakers of English recalled the vowel series better than consonant series in English and in Arabic, which was not true of native Arabic speakers. An analysis of variance showed that there was an interaction between first language and phoneme type. The results are discussed in light of current research on consonant and vowel processing.
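The reported first language × phoneme type interaction is a difference of differences across the 2×2 design. The cell means below are hypothetical (the abstract does not report numeric values); they only illustrate what such an interaction measures:

```python
# Hypothetical cell means (proportion correct) illustrating the reported
# L1 x phoneme-type interaction; the actual values are not in the abstract.
means = {
    ("English", "vowel"): 0.72, ("English", "consonant"): 0.55,
    ("Arabic",  "vowel"): 0.60, ("Arabic",  "consonant"): 0.61,
}

def vowel_advantage(group):
    """Difference in recall between vowel and consonant series for one L1 group."""
    return means[(group, "vowel")] - means[(group, "consonant")]

# A 2x2 interaction is the difference of differences: the vowel advantage
# in one group minus the vowel advantage in the other. A nonzero value
# corresponds to the crossover pattern the ANOVA detected.
interaction = vowel_advantage("English") - vowel_advantage("Arabic")
```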


NeuroImage ◽  
2008 ◽  
Vol 42 (4) ◽  
pp. 1698-1713 ◽  
Author(s):  
S. Majerus ◽  
S. Belayachi ◽  
B. De Smedt ◽  
A.L. Leclercq ◽  
T. Martinez ◽  
...  

2019 ◽  
Author(s):  
Stefan Wiens

Marsh et al. (2018, Journal of Experimental Psychology: Learning, Memory, and Cognition, 44, 882-897) reported a dissociation between serial recall tasks and a missing-item task in the disruptive effects of speech and of emotional words, as predicted by the duplex-mechanism account. Critically, the reported analyses did not test specifically for this dissociation. To address this issue, I re-analyzed the Marsh et al. data and added Bayesian hypothesis tests to measure the strength of the evidence for a dissociation. This commentary is submitted to Meta-Psychology.
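One common way to quantify such evidence is a Bayes factor; the abstract does not state which Bayesian tests the re-analysis used, so the sketch below shows only the generic BIC approximation to the Bayes factor (Wagenmakers, 2007) with hypothetical model fits:

```python
import math

# Sketch of the BIC approximation to the Bayes factor: BF01, the evidence for
# the null model relative to the alternative, from the two models' BIC values.
def bf01_from_bic(bic_null, bic_alt):
    """Approximate Bayes factor in favour of the null model."""
    return math.exp((bic_alt - bic_null) / 2.0)

# Hypothetical BIC values for two fitted models (illustrative only):
bf01 = bf01_from_bic(bic_null=1000.0, bic_alt=1004.0)  # e^2, i.e. ~7.4-fold evidence for the null
```

Values of BF01 well above 1 indicate evidence for the null (no dissociation); values well below 1 indicate evidence for the alternative.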


2018 ◽  
Vol 10 (11) ◽  
pp. 113 ◽  
Author(s):  
Yue Li ◽  
Xutao Wang ◽  
Pengjian Xu

Text classification is of importance in natural language processing, as the massive volume of text, which contains information of great value, needs to be classified into different categories for further use. In order to better classify text, our paper tries to build a deep learning model which achieves better classification results on Chinese text than those of other researchers' models. After comparing different methods, long short-term memory (LSTM) and convolutional neural network (CNN) methods were selected as deep learning methods to classify Chinese text. LSTM is a special kind of recurrent neural network (RNN), which is capable of processing serialized information through its recurrent structure. By contrast, CNN has shown its ability to extract features from visual imagery. Therefore, two layers of LSTM and one layer of CNN were integrated into our new model: the BLSTM-C model (BLSTM stands for bi-directional long short-term memory while C stands for CNN). LSTM was responsible for obtaining a sequence output based on past and future contexts, which was then input to the convolutional layer for extracting features. In our experiments, the proposed BLSTM-C model was evaluated in several ways. The results show that the model exhibits remarkable performance in text classification, especially on Chinese texts.
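The described pipeline (stacked bidirectional LSTM layers producing a context-aware sequence output, followed by a convolutional feature extractor) can be sketched as below. This is a minimal PyTorch sketch, not the authors' implementation: all layer sizes, the pooling step, and the classifier head are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal BLSTM-C-style architecture sketch: two bidirectional LSTM layers
# feed a 1-D convolutional layer, as described in the abstract.
class BLSTMC(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two stacked bidirectional LSTM layers return a full sequence output
        # encoding both past and future context at each position.
        self.blstm = nn.LSTM(embed_dim, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        # A 1-D convolution over the sequence extracts local features from the
        # BLSTM output (input channels = 2 * hidden due to bidirectionality).
        self.conv = nn.Conv1d(2 * hidden, 100, kernel_size=3, padding=1)
        self.fc = nn.Linear(100, n_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids)                    # (batch, seq_len, embed_dim)
        x, _ = self.blstm(x)                         # (batch, seq_len, 2*hidden)
        x = self.conv(x.transpose(1, 2)).relu()      # (batch, 100, seq_len)
        x = x.max(dim=2).values                      # global max pooling over time
        return self.fc(x)                            # (batch, n_classes)

logits = BLSTMC()(torch.randint(0, 5000, (4, 20)))   # 4 sequences of 20 token ids
```

For Chinese text, the token ids would typically index characters or segmented words; that preprocessing is outside this sketch.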


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1290 ◽  
Author(s):  
Rahman ◽  
Siddiqui

Abstractive text summarization, which generates a summary by paraphrasing a long text, remains a significant open problem for natural language processing. In this paper, we present an abstractive text summarization model, multi-layered attentional peephole convolutional LSTM (long short-term memory) (MAPCoL), that automatically generates a summary from a long text. We optimize parameters of MAPCoL using central composite design (CCD) in combination with the response surface methodology (RSM), which gives the highest accuracy in terms of summary generation. We record the accuracy of our model (MAPCoL) on a CNN/DailyMail dataset. We perform a comparative analysis of the accuracy of MAPCoL with that of the state-of-the-art models in different experimental settings. MAPCoL also outperforms the traditional LSTM-based models with respect to semantic coherence in the output summary.
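The "peephole" in a peephole LSTM refers to gate connections that look directly at the cell state, unlike a standard LSTM. A single-step NumPy sketch of this recurrent building block is below; weight shapes and initialisation are illustrative, and MAPCoL's attentional and convolutional layers are omitted entirely:

```python
import numpy as np

# Sketch of one peephole LSTM cell update: the gates receive peephole terms
# p * c (elementwise) from the cell state in addition to the usual inputs.
def peephole_lstm_step(x, h_prev, c_prev, W, U, p, b):
    """One time step. W: input weights, U: recurrent weights,
    p: peephole weights into the gates, b: biases."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + p["i"] * c_prev + b["i"])  # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + p["f"] * c_prev + b["f"])  # forget gate
    c = f * c_prev + i * np.tanh(W["c"] @ x + U["c"] @ h_prev + b["c"])   # new cell state
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + p["o"] * c + b["o"])       # output gate peeks at c_t
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16
W = {k: rng.normal(0.0, 0.1, (n_hid, n_in)) for k in "ifco"}
U = {k: rng.normal(0.0, 0.1, (n_hid, n_hid)) for k in "ifco"}
p = {k: rng.normal(0.0, 0.1, n_hid) for k in "ifo"}   # no peephole into the candidate update
b = {k: np.zeros(n_hid) for k in "ifco"}
h, c = peephole_lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, p, b)
```

A full sequence model would iterate this step over time and stack several such layers, which is where the "multi-layered" part of MAPCoL comes in.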

