memory for text
Recently Published Documents


TOTAL DOCUMENTS: 51 (last five years: 6)
H-INDEX: 17 (last five years: 1)

2021 · Vol VI (II) · pp. 24-29
Author(s): Amjad Saleem, Muhammad Umer

A linguistic sign, according to Saussure (1966), is a combination of a signifier (form) and a signified (meaning); form without meaning is only half the sign. Although in some situations surface forms are retained remarkably well in memory over time, in most circumstances explicit long-term memory for surface details, that is, memory for the forms of long-past linguistic events, is poor or non-existent. Taylor (2012) and Port (2007), however, have proposed that there may be implicitly accumulated memory traces for all aspects of language: nothing is thrown away. In the present study, 'form' refers to physical properties or surface features such as the orthographic, phonological and acoustic representations of a text, while 'meaning' refers to semantic properties, including contextual and pragmatic information. There are some curiosities about the relationship between the two, which this paper teases apart; they concern how language is processed, represented and retained in different circumstances.


2021
Author(s): Arla Good

This study extends to foreign-language learning the popular notion that memory for text can be supported by song. Singing can be intrinsically motivating, attention-focusing, and simply enjoyable for learners of all ages. The melodic and rhythmic context of song enhances recall of native-language text; however, there is limited evidence that these benefits extend to foreign text. In this study, Spanish-speaking Ecuadorian children learned a novel English passage for two weeks. Children in a


Author(s): Ting Huang, Gehui Shen, Zhi-Hong Deng

Recurrent Neural Networks (RNNs) are widely used in natural language processing (NLP), for tasks ranging from text categorization to question answering and machine translation. However, RNNs generally read the whole text from beginning to end (or occasionally in reverse), which makes processing long texts inefficient. When reading a long document for a categorization task, such as topic categorization, large quantities of words are irrelevant and can be skipped. To this end, we propose Leap-LSTM, an LSTM-enhanced model which dynamically leaps between words while reading a text. At each step, we use several feature encoders to extract information from the preceding text, the following text and the current word, and then decide whether to skip the current word. We evaluate Leap-LSTM on several text categorization tasks: sentiment analysis, news categorization, ontology classification and topic classification, across five benchmark data sets. The experimental results show that our model reads faster and predicts better than a standard LSTM. Compared to previous models that can also skip words, our model achieves better trade-offs between performance and efficiency.
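To make the mechanism described in this abstract concrete, here is a minimal PyTorch sketch of a skip-reading LSTM in the spirit of Leap-LSTM: at each step a small network looks at the preceding context (the LSTM state), the current word, and a crude summary of the following words, and decides whether to update the recurrent state or leap over the word. The class name, layer sizes, lookahead window, and the hard argmax decision are all illustrative assumptions, not the paper's exact design; in particular, the paper trains the discrete skip choice differentiably rather than by argmax.

```python
import torch
import torch.nn as nn

class LeapLSTMSketch(nn.Module):
    """Skip-reading LSTM sketch: at each step, decide whether to feed the
    current word into the recurrent cell or leap over it."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, lookahead=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.cell = nn.LSTMCell(embed_dim, hidden_dim)
        self.hidden_dim = hidden_dim
        self.lookahead = lookahead
        # The skip predictor sees the preceding context (the LSTM state),
        # the current word, and a summary of the following words.
        self.skip_mlp = nn.Sequential(
            nn.Linear(hidden_dim + 2 * embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # logits over {read, skip}
        )

    def forward(self, tokens):  # tokens: (batch, seq_len) word ids
        emb = self.embed(tokens)
        batch, seq_len, _ = emb.shape
        h = emb.new_zeros(batch, self.hidden_dim)
        c = emb.new_zeros(batch, self.hidden_dim)
        for t in range(seq_len):
            cur = emb[:, t]
            # Crude "following text" encoder: mean of the next few embeddings.
            if t + 1 < seq_len:
                nxt = emb[:, t + 1 : t + 1 + self.lookahead].mean(dim=1)
            else:
                nxt = torch.zeros_like(cur)
            logits = self.skip_mlp(torch.cat([h, cur, nxt], dim=-1))
            read = logits.argmax(dim=-1, keepdim=True).eq(0).float()  # 1 = read
            new_h, new_c = self.cell(cur, (h, c))
            # Skipped positions carry the previous state forward unchanged.
            h = read * new_h + (1.0 - read) * h
            c = read * new_c + (1.0 - read) * c
        return h  # final state feeds a downstream classifier

# Usage: features = LeapLSTMSketch(10000)(torch.randint(0, 10000, (4, 60)))
```

The efficiency argument rests on the decision network being much cheaper than the LSTM update it can avoid, which is why the sketch keeps the skip predictor to a small two-layer MLP.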


2019
Author(s): Mark Andrews

The study of memory for texts has a long tradition in psychology. According to most general accounts, the recognition or recall of items in a text is based on querying a memory representation built up on the basis of background knowledge. The objective of this paper is to describe and thoroughly test a Bayesian model of these general accounts. In particular, we present a model that describes how we use our background knowledge to form memories: Bayesian inference of the statistical patterns in the text, followed by posterior predictive inference of the words that are typical of those inferred patterns. This yields precise predictions about which words will be remembered, whether veridically or erroneously, from any given text. We tested these predictions with behavioural data from a memory experiment using a large sample of randomly chosen texts from a representative corpus of British English. The results show that the probability of remembering any given word in a text, whether falsely or veridically, is well predicted by the Bayesian model. Moreover, compared to nontrivial alternative models of text memory, the predictions of the Bayesian model were superior by every measure used in the analyses, often overwhelmingly so. We conclude that these results provide strong evidence in favour of the Bayesian account of text memory presented in this paper.
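The abstract's two-stage account (infer the statistical patterns of a text, then predict which words are typical of those patterns) can be illustrated with a toy posterior-predictive computation. This is a minimal sketch under strong assumptions: the "patterns" are taken to be K fixed topic-word distributions, and maximum-likelihood EM stands in for full Bayesian inference over the mixture weights; it is not a reproduction of the paper's actual model.

```python
import numpy as np

def predictive_word_probs(doc_word_ids, phi, n_iters=50):
    """phi: (K, V) topic-word probabilities; doc_word_ids: observed word indices.

    Fits mixture weights theta for the text by EM (a maximum-likelihood
    stand-in for posterior inference over the patterns), then returns the
    predictive distribution over the whole vocabulary:
        p(w | text) = sum_k theta_k * phi[k, w].
    On this account, words with high predictive probability are those that
    should be recalled or recognised, whether or not they occurred in the text,
    so high-probability absent words are candidates for false memories.
    """
    K, V = phi.shape
    theta = np.full(K, 1.0 / K)
    for _ in range(n_iters):
        # E-step: responsibility of each pattern for each observed word.
        resp = theta[:, None] * phi[:, doc_word_ids]      # (K, N)
        resp /= resp.sum(axis=0, keepdims=True)
        # M-step: re-estimate the mixture weights.
        theta = resp.sum(axis=1) / len(doc_word_ids)
    return theta @ phi                                     # (V,)

# Example: 2 patterns over a 5-word vocabulary; the text draws on pattern 0,
# so unseen words 3 and 4 still receive a little predictive probability.
phi = np.array([[0.40, 0.40, 0.10, 0.05, 0.05],
                [0.05, 0.05, 0.10, 0.40, 0.40]])
print(predictive_word_probs(np.array([0, 1, 0, 1, 2]), phi))
```

The point of the sketch is the shape of the prediction: the model assigns graded memorability to every vocabulary word, including words that never appeared, which is what lets the account predict veridical and false memories with the same mechanism.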

