Biologically Inspired Sleep Algorithm for Reducing Catastrophic Forgetting in Neural Networks

2020 ◽  
Vol 34 (10) ◽  
pp. 13933-13934
Author(s):  
Timothy Tadros ◽  
Giri Krishnan ◽  
Ramyaa Ramyaa ◽  
Maxim Bazhenov

Artificial neural networks (ANNs) are known to suffer from catastrophic forgetting: when learning multiple tasks sequentially, they perform well on the most recently learned task while failing on previously learned tasks. In biological networks, sleep is known to play a role in memory consolidation and incremental learning. Motivated by the processes known to be involved in sleep generation in biological networks, we developed an algorithm that implements a sleep-like phase in ANNs. In an incremental learning framework, we demonstrate that sleep is able to recover older tasks that were otherwise forgotten. We show that sleep creates unique representations of each class of inputs, and that neurons relevant to previous tasks fire during sleep, simulating replay of previously learned memories.
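As a rough illustration only, a sleep-like phase can be approximated by driving a trained network with sparse random input and applying local, Hebbian-style updates. The function below, its update rule, and all parameter values are simplifying assumptions for this sketch, not the authors' algorithm (which operates on a spiking conversion of the network):

```python
import torch
import torch.nn as nn

def sleep_phase(model: nn.Sequential, steps: int = 1000,
                input_dim: int = 784, lr: float = 1e-4) -> None:
    """Noise-driven 'sleep': apply Hebbian-style local weight updates."""
    with torch.no_grad():
        for _ in range(steps):
            x = (torch.rand(1, input_dim) < 0.05).float()  # sparse random input
            h = x
            for layer in model:
                if isinstance(layer, nn.Linear):
                    pre = h                      # presynaptic activity
                    h = layer(h)
                    post = (h > 0).float()       # binarized postsynaptic activity
                    # Strengthen weights between co-active units (Hebbian rule)
                    layer.weight += lr * post.t() @ pre
                else:
                    h = layer(h)
```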

2021 ◽  
Vol 247 ◽  
pp. 06029
Author(s):  
E. Szames ◽  
K. Ammar ◽  
D. Tomatis ◽  
J.M. Martinez

This work deals with the modeling of homogenized few-group cross sections by Artificial Neural Networks (ANNs). A comprehensive sensitivity study on data normalization, network architectures, and training hyper-parameters, specifically for deep and shallow feed-forward ANNs, is presented. The optimal models in terms of reduction in library size and training time are compared to multi-linear interpolation on a Cartesian grid. The use case is provided by the OECD-NEA Burn-up Credit Criticality Benchmark [1]. The PyTorch [2] machine learning framework is used.
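For reference, a shallow feed-forward regressor of the kind studied here takes only a few lines of PyTorch. The layer sizes, activation, number of input parameters, and optimizer settings below are illustrative assumptions, not the benchmark's tuned values:

```python
import torch
import torch.nn as nn

class XSRegressor(nn.Module):
    """Maps state parameters (e.g. burn-up, fuel temperature) to
    homogenized few-group cross sections."""
    def __init__(self, n_params: int = 4, n_xs: int = 8, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, width), nn.Tanh(),
            nn.Linear(width, n_xs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = XSRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    # x and y are assumed min-max normalized, one of the normalization
    # choices examined in the paper's sensitivity study
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()
```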


2021 ◽  
Author(s):  
Kyle Aitken ◽  
Marina Garrett ◽  
Shawn Olsen ◽  
Stefan Mihalas

Neurons in sensory areas encode and represent stimuli. Surprisingly, recent studies have suggested that, even while performance remains stable, these representations are not fixed and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli that are behaviorally relevant. Across experiments, the drift most often occurs along directions that have the most variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks whose representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes or weights, but not other types of noise. We therefore conclude that representational drift in biological networks may be driven by an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting.
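The proposed mechanism can be demonstrated in a toy experiment (the architecture, synthetic data, and training schedule below are assumptions of this sketch, not the models or datasets from the study): a network that keeps training on an unchanging task with dropout active shows drifting hidden representations even though nothing about the task changes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(20, 100), nn.ReLU(), nn.Dropout(p=0.5),
                    nn.Linear(100, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
x = torch.randn(200, 20)                      # fixed stimuli
y = (x[:, 0] > 0).long()                      # fixed labels

def hidden(x):
    net.eval()                                # dropout off for the read-out
    with torch.no_grad():
        return net[1](net[0](x))              # hidden-layer representation

reps = []
for day in range(5):
    net.train()
    for _ in range(500):                      # keep learning the same task
        opt.zero_grad()
        nn.functional.cross_entropy(net(x), y).backward()
        opt.step()
    reps.append(hidden(x))

# Representations drift across "days" even though the task never changes
drift = torch.norm(reps[-1] - reps[0]) / torch.norm(reps[0])
print(f"relative drift in hidden representation: {drift:.2f}")
```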


Author(s):  
Eduardo D. Martin ◽  
Alfonso Araque

Artificial neural networks are a neurobiologically inspired paradigm that emulates the functioning of the brain. They are based on neuronal function, because neurons are recognized as the cellular elements responsible for the brain's information processing. However, recent studies have demonstrated that astrocytes can signal to other astrocytes and can communicate reciprocally with neurons, which suggests a more active role of astrocytes in the physiology of the nervous system and in fundamental brain functions. This novel vision of the glial role in brain function calls for a reexamination of our current vision of artificial neural networks, which should be expanded to consider artificial neuroglial networks. The neuroglial network concept has not yet been applied in the computational and artificial intelligence sciences. However, implementing artificial neuroglial networks by incorporating glial cells as part of artificial neural networks may be as fruitful and successful for artificial networks as glial cells have been for biological networks.
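Since the concept has not yet been implemented, any code can only be speculative. One possible reading, a layer whose synaptic gain is modulated by a slow "glial" state that integrates past activity, might be sketched as follows (entirely an illustrative construction, not a reference implementation from the chapter):

```python
import torch
import torch.nn as nn

class NeurogliaLayer(nn.Module):
    """Hypothetical layer: a slow glial state tracks mean activity and
    multiplicatively modulates the layer's output gain."""
    def __init__(self, n_in: int, n_out: int, tau: float = 0.99):
        super().__init__()
        self.linear = nn.Linear(n_in, n_out)
        self.tau = tau                          # slow glial time constant
        self.register_buffer("glia", torch.ones(n_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.linear(x))
        # Glial state slowly integrates activity, loosely mimicking
        # astrocytic modulation of synaptic transmission
        self.glia = self.tau * self.glia + (1 - self.tau) * h.mean(dim=0).detach()
        return h * (2.0 - self.glia)            # higher past activity -> lower gain
```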


10.29007/8559 ◽  
2018 ◽  
Author(s):  
Mariela Andrade ◽  
Eduardo Gasca ◽  
Eréndira Rendón

Nowadays, the use of artificial neural networks (ANNs), in particular the Multilayer Perceptron (MLP), is very popular for tasks such as pattern recognition, data mining, and process automation. However, these models still have weaknesses when compared with human capabilities. A characteristic of human memory is the ability to learn new concepts without forgetting what was learned in the past, which has been a disadvantage in the field of artificial neural networks. How can new knowledge be added to a network without forgetting what has already been learned, and without repeating the exhaustive ANN training process? (Exhaustive training uses a complete training set, with all objects of all classes.) In this work, we present a novel incremental learning algorithm for the MLP. New knowledge is incorporated into a target network without executing an exhaustive retraining. This knowledge consists of objects of a new class that was not included in the training of a source network. The algorithm consists of taking the final weights from the source network, correcting them with Support Vector Machine tools, and transferring the corrected weights to a target network. This last network is trained with a previously preprocessed training set. The resulting efficiency of the target network is comparable with that of an exhaustively trained network.
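A stripped-down sketch of the weight-transfer step might look as follows. Note that the SVM-based weight correction, the core of the method, is omitted here, and all layer sizes are hypothetical:

```python
import torch
import torch.nn as nn

def extend_mlp(source: nn.Sequential, n_new_classes: int = 1) -> nn.Sequential:
    """Copy hidden-layer weights; widen the output layer for new classes."""
    old_out: nn.Linear = source[-1]
    new_out = nn.Linear(old_out.in_features,
                        old_out.out_features + n_new_classes)
    with torch.no_grad():
        # Transfer the source network's output weights into the target
        new_out.weight[:old_out.out_features] = old_out.weight
        new_out.bias[:old_out.out_features] = old_out.bias
    return nn.Sequential(*list(source[:-1]), new_out)

# Usage: the target is then trained on a preprocessed set mixing the new
# class with old-class objects, rather than exhaustively retrained
source = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
target = extend_mlp(source, n_new_classes=1)   # now 4 output classes
```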

