Memory States and Transitions between Them in Attractor Neural Networks

2017 ◽  
Vol 29 (10) ◽  
pp. 2684-2711 ◽  
Author(s):  
Stefano Recanatesi ◽  
Mikhail Katkov ◽  
Misha Tsodyks

Human memory can retrieve memories similar to the one just recalled. This associative ability underlies our everyday processing of information. Current models of memory have not been able to pinpoint a mechanism the brain could use to actively exploit similarities between memories: the prevailing view is that inducing a transition in an attractor neural network requires extinguishing the current memory. We introduce a novel mechanism for inducing transitions between memories in which similarities between memories are actively exploited by the neural dynamics to retrieve a new memory. Populations of neurons that are selective for multiple memories play a crucial role in this mechanism by becoming attractors in their own right. The mechanism is based on the ability of the neural network to control the excitation-inhibition balance.

2021 ◽  
Author(s):  
Bhadra S Kumar ◽  
Nagavarshini Mayakkannan ◽  
N Sowmya Manojna ◽  
V. Srinivasa Chakravarthy

Abstract
Artificial feedforward neural networks perform a wide variety of classification and function approximation tasks with high accuracy. Unlike their artificial counterparts, biological neural networks require a supply of adequate energy, delivered to single neurons by a network of cerebral microvessels. Since energy is a limited resource, a natural question is whether the cerebrovascular network is capable of ensuring maximum performance of the neural network while consuming minimum energy. Should the cerebrovascular network also be trained, along with the neural network, to achieve such an optimum?
To answer these questions in a simplified modeling setting, we constructed an Artificial Neurovascular Network (ANVN) comprising a multilayered perceptron (MLP) connected to a vascular tree structure. The root node of the vascular tree is connected to an energy source, and the terminal nodes of the tree supply energy to the hidden neurons of the MLP. The energy delivered by the terminal vascular nodes determines the biases of the hidden neurons. The “weights” on the branches of the vascular tree describe the energy distribution from parent node to child nodes, and these vascular weights are updated by a kind of “backpropagation” of the energy demand error generated by the hidden neurons.
We observed that higher performance was achieved at lower energy levels when the vascular network was trained along with the neural network, indicating that the vascular network must be trained to ensure efficient neural performance. We also observed that below a certain network size, the energetic dynamics of the network in the per capita energy consumption vs. classification accuracy space approaches a fixed-point attractor for various initial conditions. Once the number of hidden neurons increases beyond a threshold, the fixed point appears to vanish, giving place to a line of attractors.
The model also showed that when resources are limited, the energy consumption of neurons is strongly correlated with their individual contribution to the network’s performance.
Author summary
The limited availability of resources has played a significant role in shaping evolution, and the brain is no different: it is known to have tremendous computing power at a significantly lower cost than artificial computing systems. Artificial neural networks typically aim at minimizing output error and maximizing accuracy. A biological network like the brain has the added constraint of energy availability, which may force it to choose an optimal solution that provides the best possible accuracy while consuming minimum energy. The intricate vascular network that ensures adequate energy to the brain might be a systematically trained layout rather than a hard-wired anatomical structure. Through this work, we explore how an artificial neural network behaves when it is made dependent on an energy supply network, and how training of the energy supply network influences the performance of the neural network. Our model concluded that training the vascular energy network is highly desirable, and that when the neural network is small, the energy consumed by each neuron is a direct readout of its contribution to the network’s performance.
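The branch-weight update described in the abstract can be illustrated with a toy sketch. The code below is a simplified, assumption-laden illustration, not the authors' implementation: it distributes a fixed energy budget down a full binary tree via normalized branch weights and nudges each branch weight by the unmet demand of its subtree, a crude stand-in for the "backpropagation of energy demand error" idea.

```python
import numpy as np

def leaf_supply(weights, energy):
    """Energy reaching each leaf of a full binary tree whose branch
    weights (one per child) are stored level by level."""
    supply = np.array([energy])
    for level in weights:                                # shape (parents, 2)
        frac = level / level.sum(axis=1, keepdims=True)  # normalized split
        supply = (supply[:, None] * frac).ravel()        # pass to children
    return supply

def train_vascular_tree(demand, energy, depth, eta=0.05, steps=600):
    """Adjust branch weights so that leaf supply tracks leaf demand:
    each branch weight grows with the unmet demand of its subtree."""
    weights = [np.ones((2 ** d, 2)) for d in range(depth)]
    for _ in range(steps):
        err = demand - leaf_supply(weights, energy)      # unmet demand
        for d in reversed(range(depth)):
            # sum of leaf errors in the subtree below each branch
            sub = err.reshape(2 ** d, 2, -1).sum(axis=2)
            weights[d] = np.clip(weights[d] + eta * sub, 1e-6, None)
    return weights

# Toy run: 8 hidden neurons with unequal energy demands, unit budget.
rng = np.random.default_rng(0)
demand = rng.random(8)
demand /= demand.sum()            # total demand matches the unit budget
w = train_vascular_tree(demand, energy=1.0, depth=3)
final_err = np.abs(demand - leaf_supply(w, 1.0)).sum()
```

After training, the energy each leaf receives closely matches its demand; the same mechanism would set the hidden-neuron biases in the ANVN picture.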


2006 ◽  
Vol 18 (3) ◽  
pp. 614-633 ◽  
Author(s):  
J. M. Cortes ◽  
J. J. Torres ◽  
J. Marro ◽  
P. L. Garrido ◽  
H. J. Kappen

We study, both analytically and numerically, the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a timescale much shorter than that of the neuron dynamics, and it produces short-time synaptic depression. This is inspired by recent neurobiological findings showing that synaptic strength may either increase or decrease on a short timescale, depending on presynaptic activity. We thus describe a mechanism by which fast presynaptic noise enhances the sensitivity of the neural network to an external stimulus. The reason is that presynaptic noise generally induces nonequilibrium behavior and, consequently, the space of fixed points is qualitatively modified in such a way that the system can easily escape from an attractor. As a result, the model exhibits, in addition to pattern recognition, class identification and categorization, which may be relevant to the understanding of some complex tasks of the brain.
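The stimulus-sensitivity effect can be caricatured in a few lines of code. The sketch below is a discrete-time toy, not the authors' stochastic model: a standard Hopfield-type network is given a crude presynaptic resource variable that depletes while a neuron fires. Without depression, a weak stimulus toward another memory cannot dislodge the network from its current attractor; with fast depression, the attractor weakens and the network escapes toward the stimulated memory. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))   # stored memories
W = (patterns.T @ patterns) / N               # Hebbian couplings
np.fill_diagonal(W, 0.0)

def overlap(s, mu):
    return float(s @ patterns[mu]) / N

def run(s0, lam, mu, depression, steps=30):
    """Synchronous dynamics with external stimulus lam*patterns[mu].
    x_j is a fast presynaptic resource, depleted while neuron j fires."""
    s, x = s0.copy(), np.ones(N)
    for _ in range(steps):
        h = W @ (x * s) + lam * patterns[mu]
        s = np.where(h >= 0, 1, -1)
        firing = s > 0
        x[firing] *= 1.0 - depression                    # fast depression
        x[~firing] = np.minimum(1.0, x[~firing] + 0.1)   # slow recovery
    return s

# Start in memory 0 and apply a weak stimulus toward memory 1.
# Without depression the network stays locked in the current attractor:
s_static = run(patterns[0], lam=0.7, mu=1, depression=0.0)
# With fast depression the effective couplings weaken and the network
# escapes to the stimulated memory:
s_dep = run(patterns[0], lam=0.7, mu=1, depression=0.3)
```

The qualitative point matches the abstract: depressing recently active synapses modifies the fixed-point landscape enough for a subthreshold stimulus to trigger a transition.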


MRS Bulletin ◽  
1988 ◽  
Vol 13 (8) ◽  
pp. 30-35 ◽  
Author(s):  
Dana Z. Anderson

From the time of their conception, holography and holograms have evolved as a metaphor for human memory. Holograms can be made so that the information they contain is distributed throughout the holographic medium: destroy part of the hologram and the stored information remains wholly intact, except for a loss of detail. In this property holograms evidently have something in common with human memory, which is to some extent resilient against physical damage to the brain. But there is much more to the metaphor than the fact that information is stored in a distributed manner.
Research in the optics community is now looking to holography, in particular dynamic holography, not only for information storage but for information processing as well. The ideas are based on neural network models: models of processing inspired by the apparent architecture of the brain. This is a processing paradigm that is new to optics. Within this network paradigm we look to build machines that can store and recall information associatively, play back a chain of recorded events, undergo learning and possibly forgetting, make decisions, adapt to a particular environment, and self-organize to evolve some desirable behavior. We hope that neural network models will give rise to optical machines for memory, speech processing, visual processing, language acquisition, motor control, and so on.


2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

EEG analysis aims to help scientists better understand the brain and to help physicians with diagnosis and with treatment choices mediated by brain-computer interfaces. Artificial neural networks are among the most effective learning algorithms for performing computing tasks in a manner similar to biological neurons in the human brain. In some problems, the neural network model's performance may significantly degrade and overfit because of irrelevant features that negatively influence the model. Swarm optimization algorithms are robust techniques that can be applied to find optimal solutions to such problems. In this paper, the Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO) algorithms are applied to feature selection and to the training of a feed-forward neural network (FFNN). The performance of the FFNN in terms of test accuracy, precision, recall, and F1 score is investigated. Furthermore, five other machine learning algorithms are implemented for comparison. Experimental results show that the neural network model trained with GWO outperforms all the other algorithms.
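To make the feature-selection side concrete, here is a minimal sketch of binary PSO selecting features for a simple classifier. It is illustrative only: the data are synthetic stand-ins for extracted EEG features, the classifier is a nearest-centroid rule rather than the paper's FFNN, and only the PSO half (not GWO) is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted EEG features: 12 features,
# only the first 3 carry class information.
n, d = 300, 12
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += 1.5 * y[:, None]

def accuracy(mask, Xtr, ytr, Xte, yte):
    """Nearest-centroid classifier restricted to the selected features."""
    m = mask.astype(bool)
    if not m.any():
        return 0.0
    Ztr, Zte = Xtr[:, m], Xte[:, m]
    c0, c1 = Ztr[ytr == 0].mean(0), Ztr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Zte - c1, axis=1) <
            np.linalg.norm(Zte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

tr = np.arange(n) < 200                       # train/test split
fit = lambda m: accuracy(m, X[tr], y[tr], X[~tr], y[~tr])

# Binary PSO (sigmoid-velocity variant) over feature masks.
swarm, iters = 20, 40
V = rng.normal(scale=0.1, size=(swarm, d))
S = (rng.random((swarm, d)) < 0.5).astype(float)
S[0] = 1.0                                    # seed: all features selected
pbest, pscore = S.copy(), np.array([fit(m) for m in S])
g = pbest[np.argmax(pscore)].copy()
gscore = pscore.max()
for _ in range(iters):
    r1 = rng.random((swarm, d))
    r2 = rng.random((swarm, d))
    V = 0.7 * V + 1.5 * r1 * (pbest - S) + 1.5 * r2 * (g - S)
    S = (rng.random((swarm, d)) < 1.0 / (1.0 + np.exp(-V))).astype(float)
    scores = np.array([fit(m) for m in S])
    improved = scores > pscore
    pbest[improved], pscore[improved] = S[improved], scores[improved]
    if scores.max() > gscore:
        gscore = scores.max()
        g = S[np.argmax(scores)].copy()
```

Because one particle starts with all features selected, the best mask found can never score worse than using every feature, which is the baseline the paper's feature-selection step is meant to beat.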


2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV

The problem of applying neural networks to calculate the ratings used in banking when deciding whether to grant loans to borrowers is considered. The task is to determine the rating function of a borrower from a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast, when neural networks are used, there is no need to specify a general form for the rating function. Instead, a particular neural network architecture is chosen, and its parameters are calculated on the basis of the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters, and the absence of a universal algorithm for determining the optimal neural network architecture. As an example of using neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons respectively and using sigmoid activation functions, is used for the modeling. It is shown that the neural network restores the borrower's rating function with quite acceptable accuracy.
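The described architecture is small enough to write out in full. The sketch below trains a 2-3-2-1 network (two inner layers of three and two sigmoid neurons, linear output) by plain batch gradient descent to restore a rating function. The target function and the two borrower features are hypothetical stand-ins invented for the sketch; the paper's actual rating function and data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical non-analytical "rating" of two normalized borrower
# features (a kinked surface, purely illustrative).
def rating(x):
    return np.where(x[:, 0] > x[:, 1], 0.8 * x[:, 0], 0.3 * x[:, 1])

X = rng.random((500, 2))
t = rating(X)

# Two inner layers with 3 and 2 sigmoid neurons, linear output unit.
W1, b1 = rng.normal(scale=0.5, size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(scale=0.5, size=(3, 2)), np.zeros(2)
W3, b3 = rng.normal(scale=0.5, size=(2, 1)), np.zeros(1)

losses, lr = [], 0.2
for _ in range(5000):                      # plain batch gradient descent
    h1 = sig(X @ W1 + b1)
    h2 = sig(h1 @ W2 + b2)
    out = (h2 @ W3 + b3).ravel()
    losses.append(float(np.mean((out - t) ** 2)))
    d3 = 2.0 * (out - t)[:, None] / len(X)           # dMSE/d(output)
    d2 = (d3 @ W3.T) * h2 * (1.0 - h2)               # backprop layer 2
    d1 = (d2 @ W2.T) * h1 * (1.0 - h1)               # backprop layer 1
    W3 -= lr * (h2.T @ d3); b3 -= lr * d3.sum(0)
    W2 -= lr * (h1.T @ d2); b2 -= lr * d2.sum(0)
    W1 -= lr * (X.T @ d1);  b1 -= lr * d1.sum(0)
```

Even with only five hidden neurons the fitted surface tracks the kinked target far better than a constant predictor, which is the sense in which such a small network can "restore" a rating function with acceptable accuracy.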


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract
We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid, and we show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE, for which we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks, under constraints on the neural network and its derivative. We then derive an algorithm that converges to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.


Author(s):  
Saša Vasiljević ◽  
Jasna Glišović ◽  
Nadica Stojanović ◽  
Ivan Grujić

According to the World Health Organization, air pollution with PM10 and PM2.5 (PM: particulate matter) is a significant problem that can have serious consequences for human health. Vehicles, as one of the main sources of PM10 and PM2.5 emissions, pollute the air and the environment both by creating particles through burning fuel in the engine and through the wear of various elements of vehicle systems. In this paper, the authors predict the formation of PM10 and PM2.5 particles generated by wear of the braking system using an Artificial Neural Network (ANN). The neural network model was created from particle emissions measured experimentally, and the validity of the trained network was checked by comparing the experimentally measured particle quantities with the prediction results. The experimental results were obtained on an inertial braking dynamometer, where braking was performed in several modes, that is, under different braking parameters (simulated vehicle speed, brake system pressure, temperature, braking time, braking torque). During braking, the concentrations of PM10 and PM2.5 particles were measured simultaneously. A total of 196 measurements were performed, and these data were used for training, validation, and verification of the neural network. In the simulations, two types of neural network were compared: one with a single output and one with two outputs. For each type, the network was trained using three different backpropagation algorithms, and the experimental and simulation results were compared for each network. More accurate predictions were obtained with the single-output network for both particle sizes, and the smallest error was found for the network trained with the Levenberg-Marquardt backpropagation algorithm.
The aim of creating such a prediction model is to show that neural networks can predict the emission of particles generated by brake wear, which can further be used in modern traffic systems such as traffic control. In addition, this approach could be applied to other vehicle systems, such as the clutch or tires.


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1526 ◽  
Author(s):  
Choongmin Kim ◽  
Jacob A. Abraham ◽  
Woochul Kang ◽  
Jaeyong Chung

Crossbar-based neuromorphic computing, also referred to as processing-in-memory or in-situ analog computing, is a popular alternative to conventional von Neumann computing systems for accelerating neural networks. Crossbars have a fixed number of synapses per neuron, so neurons must be decomposed in order to map networks onto the crossbars. This paper proposes a k-spare decomposition algorithm that can trade off predictive performance against neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons; these spares are then used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results on modern convolutional neural networks show that the proposed method can improve accuracy substantially with only about 10% extra neurons.
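A back-of-the-envelope sketch shows why decomposition and spare reservation interact. The counting scheme below is an assumption for illustration, not the paper's algorithm: a neuron whose fan-in exceeds the crossbar's input rows is split into partial-sum units plus a combiner (ignoring how partial sums bypass the activation), and each crossbar reserves k of its neuron columns as spares.

```python
import math

def decompose_neuron(fan_in, xbar_inputs):
    """Units needed for one neuron: if fan-in exceeds the crossbar's
    input rows, split into partial sums plus a combiner (assumed scheme)."""
    if fan_in <= xbar_inputs:
        return 1
    parts = math.ceil(fan_in / xbar_inputs)
    return parts + decompose_neuron(parts, xbar_inputs)  # combine partials

def crossbars_needed(fan_ins, xbar_inputs, xbar_neurons, k):
    """Crossbars required when each crossbar reserves k spare neurons
    (the spares are what the local step would use to recover accuracy)."""
    units = sum(decompose_neuron(f, xbar_inputs) for f in fan_ins)
    usable = xbar_neurons - k            # columns left after the k spares
    return units, math.ceil(units / usable)

# Example: 10 neurons with fan-in 200 mapped onto 64x32 crossbars, k = 4.
units, xbars = crossbars_needed([200] * 10, xbar_inputs=64,
                                xbar_neurons=32, k=4)
```

Each fan-in-200 neuron becomes 4 partial units plus 1 combiner, so 10 neurons need 50 units; with 4 of 32 columns held back as spares, 28 are usable per crossbar, giving 2 crossbars. Larger k buys accuracy headroom at the cost of more crossbars, which is exactly the trade-off the k-spare algorithm tunes.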


1991 ◽  
Vol 45 (10) ◽  
pp. 1706-1716 ◽  
Author(s):  
Mark Glick ◽  
Gary M. Hieftje

Artificial neural networks were constructed for the classification of metal alloys based on their elemental constituents. Glow discharge atomic emission spectra obtained with a photodiode-array spectrometer were used in multivariate calibrations for 7 elements in 37 Ni-based alloys of various types and in 15 Fe-based alloys. Subsets of the two major classes formed calibration sets for stepwise multiple linear regression, and the remaining samples were used to validate the calibration models. Reference data from the calibration sets were then pooled into a single set to train neural networks with different architectures and different training parameters. After the neural networks had learned to discriminate correctly among the alloy classes in the training set, their ability to classify samples in the testing set was measured. In general, the neural network approach performed slightly better than the k-nearest-neighbor method, but it suffered from a hidden classification mechanism and nonunique solutions. The neural network methodology is discussed and compared with conventional sample-classification techniques, and the multivariate calibration of glow discharge spectra is compared with conventional univariate calibration.


2016 ◽  
Vol 38 (2) ◽  
pp. 37-46 ◽  
Author(s):  
Mateusz Kaczmarek ◽  
Agnieszka Szymańska

Abstract Nonlinear structural mechanics should be taken into account in the practical design of reinforced concrete structures. Cracking is one of the major sources of nonlinearity, and describing the deflection of reinforced concrete elements is a computational problem, mainly because of the difficulty of modelling the nonlinear stress-strain relationships of concrete and steel. In design practice, in accordance with technical rules (e.g., Eurocode 2), a simplified approach for reinforced concrete is used, but the results of these simplified calculations differ from the results of experimental studies. An artificial neural network is a versatile modelling tool capable of predicting values that are difficult to obtain by numerical analysis. This paper describes the creation and operation of a neural network for predicting the deflection of reinforced concrete beams at different load levels. To obtain the database of results needed for training and testing the neural network, measurements of deflections in reinforced concrete beams were conducted by the authors in the Certified Research Laboratory of the Building Engineering Institute at Wrocław University of Science and Technology. The use of artificial neural networks is an innovative alternative to traditional methods of calculating the deflection of reinforced concrete elements. The results show the effectiveness of the artificial neural network in predicting the deflection of reinforced concrete beams, compared with calculations conducted in accordance with Eurocode 2. The neural network model presented in this paper can incorporate new data and be used for further analysis as more research results become available.

