How Does Acoustic Similarity Influence Short-Term Memory?

1968 ◽  
Vol 20 (3) ◽  
pp. 249-264 ◽  
Author(s):  
A. D. Baddeley

This study attempts to discover why items which are similar in sound are hard to recall in a short-term memory situation. The input, storage, and retrieval stages of the memory system are examined separately. Experiments I, II and III use a modification of the Peterson and Peterson technique to plot short-term forgetting curves for sequences of acoustically similar and control words. If acoustically similar sequences are stored less efficiently, they should be forgotten more rapidly. All three experiments show a parallel rate of forgetting for acoustically similar and control sequences, suggesting that the acoustic similarity effect does not occur during storage. Two input hypotheses are then examined, one involving a simple sensory trace, the other an overloading of a system which must both discriminate and memorize at the same time. Both predict that short-term memory for spoken word sequences should deteriorate when the level of background noise is increased. Subjects performed both a listening test and a memory test in which they attempted to recall sequences of five words. Noise impaired performance on the listening test but had no significant effect on retention, thus supporting neither of the input hypotheses. The final experiments studied two retrieval hypotheses. The first of these, Wickelgren's phonemic-associative hypothesis, attributes the acoustic similarity effect to inter-item associations. It predicts that, when sequences comprising a mixture of similar and dissimilar items are recalled, errors should follow acoustically similar items. The second hypothesis attributes the effect to the overloading of retrieval cues which consequently do not discriminate adequately among available responses. It predicts maximum error rate on, not following, similar items. Two experiments were performed, one involving recall of visually presented letter sequences, the other of auditorily presented word sequences. Both showed a marked tendency for errors to coincide with acoustically similar items, as the second hypothesis would predict. It is suggested that the acoustic similarity effect occurs at retrieval and is due to the overloading of retrieval cues.

1966 ◽  
Vol 5 (6) ◽  
pp. 233-234 ◽  
Author(s):  
R. Conrad ◽  
A. D. Baddeley ◽  
A. J. Hull

1975 ◽  
Vol 27 (3) ◽  
pp. 343-356 ◽  
Author(s):  
David Legge ◽  
Zofia M. Kaminska

It is hypothesized that items are coded for short-term storage in the language of the modality through which the customary responses to these items are normally monitored. This Response Monitoring Modality Hypothesis may account for the acoustic similarity effect in short-term memory for verbal items. The hypothesis was tested using non-verbal material. After paired-associate training, subjects were found in retention tests to confuse items sharing similar previously trained responses to a greater extent than items that were directly similar to one another. The experiment simulates under controlled conditions the natural phenomenon of acoustic confusions in short-term memory, and provides strong support for the Response Monitoring Modality Hypothesis.


1966 ◽  
Vol 18 (4) ◽  
pp. 362-365 ◽  
Author(s):  
A. D. Baddeley

Experiment I studied short-term memory (STM) for auditorily presented five word sequences as a function of acoustic and semantic similarity. There was a large adverse effect of acoustic similarity on STM (72·5 per cent.) which was significantly greater (p < 0·001) than the small (6·3 per cent.) but reliable effect (p < 0·05) of semantic similarity. Experiment II compared STM for sequences of words which had a similar letter structure (formal similarity) but were pronounced differently, with acoustically similar but formally dissimilar words and with control sequences. There was a significant effect of acoustic but not of formal similarity. Experiment III replicated the acoustic similarity effect found in Experiment I using visual instead of auditory presentation. Again a large and significant effect of acoustic similarity was shown.


1978 ◽  
Vol 30 (3) ◽  
pp. 487-494 ◽  
Author(s):  
Wai Fong Yik

Four lists of Chinese words in a 2 × 2 factorial design of visual and acoustic similarity were used in a short-term memory experiment. In addition to a strong acoustic similarity effect, a highly significant visual similarity effect was also obtained; this was particularly pronounced when the words used were not acoustically similar. The results not only confirm acoustic encoding to be a basic process in short-term recall of verbal stimuli in a language other than English, but also lend support to the growing evidence that visual encoding is used in short-term memory when the situation demands it.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3678 ◽  
Author(s):  
Dongwon Lee ◽  
Minji Choi ◽  
Joohyun Lee

In this paper, we propose a prediction algorithm that combines Long Short-Term Memory (LSTM) with an attention model to predict vision coordinates when watching 360-degree videos in a Virtual Reality (VR) or Augmented Reality (AR) system. Predicting vision coordinates during video streaming is important when the network condition is degraded. However, traditional prediction models such as Moving Average (MA) and Autoregressive Moving Average (ARMA) are linear, so they cannot capture nonlinear relationships. Machine learning models based on deep learning have therefore recently been used for nonlinear prediction. We use the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural network methods, which originate from Recurrent Neural Networks (RNN), to predict head position in 360-degree videos, and we add an attention model to the LSTM to obtain more accurate results. We also compare the performance of the proposed model with that of other machine learning models, such as the Multi-Layer Perceptron (MLP) and RNN, using the root mean squared error (RMSE) between predicted and real coordinates. We demonstrate that our model predicts vision coordinates more accurately than the other models across various videos.
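Two ingredients the abstract names, the linear Moving Average baseline and the RMSE evaluation metric, can be sketched in a few lines. This is a minimal 1-D illustration with a made-up yaw trace, not the paper's model or data; the attention-augmented LSTM itself would replace `moving_average_predict` in practice.

```python
import math

def moving_average_predict(history, window=3):
    """Linear MA baseline: predict the next coordinate as the mean
    of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def rmse(predicted, actual):
    """Root mean squared error between predicted and real coordinates."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical 1-D head-yaw trace (degrees); real data would be 2-D vision coordinates.
trace = [0.0, 2.0, 5.0, 9.0, 14.0, 20.0, 27.0]

preds, actuals = [], []
for t in range(3, len(trace)):          # one-step-ahead prediction at each time step
    preds.append(moving_average_predict(trace[:t]))
    actuals.append(trace[t])

print(round(rmse(preds, actuals), 3))   # prints 9.922
```

Because the trace accelerates, the linear MA baseline lags badly; this is the nonlinearity that motivates the paper's LSTM and attention approach.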


Author(s):  
Satish Tirumalapudi

Abstract: Chat bots are software applications that help users communicate with a machine and obtain the required result; this is where Natural Language Processing (NLP) comes into the picture. Natural language processing, supported by deep learning, enables computers to extract meaning from the inputs users provide. Natural language processing techniques make it possible to express ideas in natural language, drastically increasing accessibility. NLP engines rely on the elements of intent, utterance, entity, context, and session. In this project we use deep learning techniques trained on a dataset containing categories, patterns, and responses. Long Short-Term Memory (LSTM) is a Recurrent Neural Network (RNN) architecture capable of learning order dependence in sequence prediction problems, and it is one of the most popular RNN approaches for identifying and controlling dynamic systems. We use an RNN to classify the category a user's message belongs to and then return a response from the list of responses. Keywords: NLP – Natural Language Processing, LSTM – Long Short-Term Memory, RNN – Recurrent Neural Networks.
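The classify-then-respond flow the abstract describes can be sketched end to end. In this hedged illustration, a bag-of-words overlap score stands in for the trained LSTM classifier, and the `tag`/`patterns`/`responses` dataset shape and its entries are hypothetical examples, not the project's actual data:

```python
# Hypothetical intents dataset: each entry has a category tag,
# training patterns, and candidate responses.
intents = [
    {"tag": "greeting",
     "patterns": ["hello", "hi there", "good morning"],
     "responses": ["Hello! How can I help you?"]},
    {"tag": "thanks",
     "patterns": ["thank you", "thanks a lot"],
     "responses": ["You're welcome!"]},
]

def classify(message):
    """Return the tag whose patterns share the most words with the message.
    (A trained LSTM would replace this overlap heuristic.)"""
    words = set(message.lower().split())
    def score(intent):
        return max(len(words & set(p.split())) for p in intent["patterns"])
    return max(intents, key=score)["tag"]

def respond(message):
    """Classify the message, then return a response from that category's list."""
    tag = classify(message)
    for intent in intents:
        if intent["tag"] == tag:
            return intent["responses"][0]

print(respond("hi there friend"))  # prints: Hello! How can I help you?
```

The neural version keeps exactly this structure; only `classify` changes, from word overlap to an LSTM that maps a tokenized message to a category probability distribution.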

