On the Synthesis of Brain-State-in-a-Box Neural Models with Application to Associative Memory

2000 ◽  
Vol 12 (2) ◽  
pp. 451-472 ◽  
Author(s):  
Fation Sevrani ◽  
Kennichi Abe

In this article we present techniques for designing associative memories to be implemented by a class of synchronous discrete-time neural networks based on a generalization of the brain-state-in-a-box neural model. First, we address the local and global qualitative properties of the class of neural networks considered. Our approach to the stability analysis of the equilibrium points of the network gives insight into the extent of the domain of attraction of the patterns stored as asymptotically stable equilibrium points, which is useful both for analyzing the retrieval performance of the network and for design purposes. Using the analysis results as constraints, the associative memory is designed by solving a constrained optimization problem in which each stored pattern is guaranteed a substantial domain of attraction. The performance of the designed network is illustrated by means of three specific examples.
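As a point of reference, the sketch below illustrates the synchronous brain-state-in-a-box update that such networks iterate. The outer-product (Hebbian) weight design and the step parameter beta are illustrative assumptions, not the constrained-optimization synthesis developed in the article.

```python
import numpy as np

def sat(x):
    # Linear saturating activation: clip each component to [-1, 1].
    return np.clip(x, -1.0, 1.0)

def bsb_step(x, W, beta=0.2):
    # One synchronous BSB update: x(k+1) = sat(x(k) + beta * W x(k)).
    return sat(x + beta * (W @ x))

# Illustrative weight design: a simple Hebbian outer-product rule,
# NOT the constrained-optimization synthesis of the article.
patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]

x = np.array([0.9, -0.8, 0.7, -0.2])  # noisy probe near the first pattern
for _ in range(50):
    x = bsb_step(x, W)
print(x)  # settles at (or near) a stored vertex of the hypercube
```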

2006 ◽  
Vol 6 ◽  
pp. 992-997 ◽  
Author(s):  
Alison M. Kerr

More than 20 years of clinical and research experience with affected people in the British Isles has provided insight into the particular challenges facing therapists, educators, and parents who wish to facilitate learning and support the development of skills in people with Rett syndrome. This paper considers the challenges in two groups: those due to constraints imposed by the disabilities associated with the disorder, and those stemming from opportunities, often masked by the disorder, to develop skills that depend on less-affected areas of the brain. Because the disorder interferes with the synaptic links between neurones, the functions of the brain most dependent on complex neural networks are the most profoundly affected. These functions include speech, memory, learning, the generation of ideas, and the planning of fine movements, especially those of the hands. In contrast, spontaneous emotional and hormonal responses appear relatively intact. Whereas failure to appreciate the physical limitations of the disease leads to frustration for therapist and client alike, a clear understanding of the better-preserved areas of competence offers avenues for real progress in learning, the building of satisfying relationships, and the achievement of a good quality of life.


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1365 ◽
Author(s):  
Bogdan Muşat ◽  
Răzvan Andonie

Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can provide insight into the feature-abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, the study of signs and sign-using behavior. In computational semiotics, this kind of aggregation (known as superization) is accompanied by a decrease in spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes that take place between successive layers of the network. In our experiments, we visualize the superization process and show how the resulting knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model using a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics to the analysis and interpretation of deep neural networks.
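As a rough illustration of the kind of measurement involved, the following sketch computes a histogram-based Shannon entropy for a saliency map. The paper's exact spatial-entropy definition may differ, so treat the bin count and the measure itself as assumptions.

```python
import numpy as np

def spatial_entropy(saliency, bins=32):
    # Shannon entropy (bits) of the saliency map's intensity histogram;
    # a hypothetical stand-in for the paper's spatial-entropy measure.
    hist, _ = np.histogram(saliency, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Toy check: entropy should drop as saliency concentrates ("superization").
rng = np.random.default_rng(0)
diffuse = rng.random((64, 64))                    # spread-out saliency
focused = np.zeros((64, 64)); focused[28:36, 28:36] = 1.0
print(spatial_entropy(diffuse), spatial_entropy(focused))
```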


2018 ◽  
Vol 32 (18) ◽  
pp. 1850207 ◽  
Author(s):  
Weiping Wang ◽  
Xin Yu ◽  
Xiong Luo ◽  
Lixiang Li

Traditional biological neural network models lack the capability to reflect variable synaptic weights when simulating the associative memory of human brains. In this paper, we investigate the existence and exponential stability of a novel memristive multidirectional associative memory neural network (MAMNN) model with time-varying delays. In the proposed approach, the time-varying delays are required only to be bounded; they need not be differentiable. By removing certain conditions, less conservative results are obtained. Sufficient criteria guaranteeing the stability of the memristive MAMNNs are derived using a Lyapunov function and several inequality techniques. To illustrate the performance of the proposed criteria, a procedure is designed to realize information storage. The effectiveness of the proposed theory is validated through numerical experiments.
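For intuition, here is a minimal Euler simulation of a two-field (bidirectional) associative memory with a bounded time-varying delay; the multidirectional model generalizes this to several fields. This simplified stand-in omits the memristive, state-dependent weight switching, and all weights and parameters are illustrative.

```python
import numpy as np

def f(u):
    return np.tanh(u)  # assumed bounded activation

dt, T = 0.01, 10.0
steps = int(T / dt)
nx, ny = 3, 3
A = np.eye(nx); B = np.eye(ny)                # self-decay rates
W = 0.4 * np.ones((nx, ny)); V = W.T.copy()   # illustrative inter-field weights
x_hist = np.zeros((steps + 1, nx)); y_hist = np.zeros((steps + 1, ny))
x_hist[0] = [0.5, -0.3, 0.8]; y_hist[0] = [-0.2, 0.6, 0.1]

for k in range(steps):
    t = k * dt
    tau = 0.25 * (1 + np.sin(3 * t))   # bounded time-varying delay
    kd = max(0, k - int(tau / dt))     # index of the delayed state
    dx = -A @ x_hist[k] + W @ f(y_hist[kd])
    dy = -B @ y_hist[k] + V @ f(x_hist[kd])
    x_hist[k + 1] = x_hist[k] + dt * dx
    y_hist[k + 1] = y_hist[k] + dt * dy

print(x_hist[-1], y_hist[-1])  # trajectories settle when stability criteria hold
```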


1994 ◽  
Vol 05 (03) ◽  
pp. 165-180 ◽  
Author(s):  
SUBRAMANIA I. SUDHARSANAN ◽  
MALUR K. SUNDARESHAN

Complexity of implementation has been a major difficulty in the development of gradient-descent learning algorithms for dynamical neural networks with feedback and recurrent connections. As demonstrated in this paper, however, insights from the stability properties of the equilibrium points of the network, which suggest an appropriate tailoring of the sigmoidal nonlinearities, can be exploited to obtain simplified learning rules. An analytical proof of convergence of the learning scheme under specific conditions is given, and upper bounds on the adaptation parameters are developed for an efficient implementation of the training procedure. The performance of the learning algorithm is illustrated by applying it to two important problems: the design of associative memories and nonlinear input-output mapping. For the first application, a systematic procedure is given for training a network to store multiple memory vectors as its stable equilibrium points; for the second, specific training rules are developed for a three-layer architecture comprising a dynamical hidden layer for the identification of nonlinear input-output maps. A comparison with the performance of a standard backpropagation network illustrates the capabilities of the present network architecture and learning algorithm.
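A minimal sketch of the first application is given below: gradient descent drives each memory vector toward being an equilibrium of a continuous-time network dx/dt = -x + W g(x) + b. The tanh nonlinearity and learning rate are assumptions, and unlike the paper's scheme this sketch enforces only the equilibrium condition p = W g(p) + b, not its stability.

```python
import numpy as np

def g(u):
    return np.tanh(u)  # assumed sigmoidal nonlinearity

rng = np.random.default_rng(1)
patterns = np.sign(rng.standard_normal((4, 8)))  # memory vectors in {-1,+1}^8
W = 0.01 * rng.standard_normal((8, 8))
b = np.zeros(8)
eta = 0.05                                       # illustrative learning rate

for epoch in range(2000):
    for p in patterns:
        err = p - (W @ g(p) + b)        # equilibrium residual for this memory
        W += eta * np.outer(err, g(p))  # gradient step on 0.5 * ||err||^2
        b += eta * err

# Check: residuals should now be near zero, so each pattern is an equilibrium.
print(max(np.linalg.norm(p - (W @ g(p) + b)) for p in patterns))
```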


1988 ◽  
Vol 1 (4) ◽  
pp. 323-324 ◽  
Author(s):  
Harvey J. Greenberg

2020 ◽  
Author(s):  
A. Grigis ◽  
J. Tasserie ◽  
V. Frouin ◽  
B. Jarraya ◽  
L. Uhrig

Decoding the level of consciousness from cortical activity recordings is a major challenge in neuroscience. Using clustering algorithms, we previously demonstrated that resting-state functional MRI (rsfMRI) data can be split into several clusters, also called "brain states", corresponding to "functional configurations" of the brain. Here, we propose to use a supervised machine learning method based on artificial neural networks to predict functional brain states across levels of consciousness from rsfMRI. Because it is key to account for the topology of the brain regions used to build the dynamical functional connectivity matrices that describe the brain state at a given time, we applied BrainNetCNN, a graph-convolutional neural network (CNN), to predict brain states in awake and anesthetized non-human primate rsfMRI data. BrainNetCNN achieved a high prediction accuracy, between 0.674 and 0.765 depending on the experimental settings. We further derive the set of connections found to be most important for predicting a brain state, reflecting the level of consciousness. The results demonstrate that deep learning methods can be used not only to predict brain states but also to provide additional insight into cortical signatures of consciousness, with potential clinical applications for the monitoring of anesthesia and the diagnosis of disorders of consciousness.
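As a concrete illustration of the input side, the sketch below computes sliding-window dynamic functional connectivity matrices of the kind fed to a classifier such as BrainNetCNN. The window length, stride, and region count are illustrative assumptions.

```python
import numpy as np

def dynamic_fc(ts, win=50, stride=10):
    # ts: (n_timepoints, n_regions) regional rsfMRI time series.
    # Each sliding window yields one region-by-region correlation matrix,
    # the "brain state" descriptor at that moment.
    mats = []
    for start in range(0, ts.shape[0] - win + 1, stride):
        mats.append(np.corrcoef(ts[start:start + win].T))
    return np.stack(mats)  # (n_windows, n_regions, n_regions)

rng = np.random.default_rng(0)
ts = rng.standard_normal((500, 82))  # 82 regions is an assumed atlas size
fc = dynamic_fc(ts)
print(fc.shape)  # one connectivity matrix per window, input to the classifier
```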


2021 ◽  
Author(s):  
Quan Wan ◽  
Jorge A. Menendez ◽  
Bradley R. Postle

How does the brain prioritize among the contents of working memory to appropriately guide behavior? Using inverted encoding modeling (IEM), previous work (Wan et al., 2020) showed that unprioritized memory items (UMI) are actively represented in the brain, but in a "flipped", or opposite, format relative to prioritized memory items (PMI). To gain insight into the mechanisms underlying the UMI-to-PMI representational transformation, we trained recurrent neural networks (RNNs) with an LSTM architecture to perform a 2-back working memory task. Visualization of the LSTM hidden-layer activity using Principal Component Analysis (PCA) revealed that the UMI representation is rotationally remapped to that of the PMI, which was quantified and confirmed via demixed PCA. Applying the same analyses to the EEG dataset of Wan et al. (2020) revealed a similar rotational remapping between the UMI and PMI representations. These results identify rotational remapping as a candidate neural computation employed in the dynamic prioritization of the contents of working memory.
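The rotation analysis can be sketched with an orthogonal Procrustes fit: find the rotation that best maps one set of low-dimensional activity patterns onto another and check the residual. The toy data here stand in for PCA-projected UMI and PMI patterns; the paper's exact analysis pipeline is not reproduced.

```python
import numpy as np

def best_rotation(X, Y):
    # Orthogonal matrix R minimizing ||X @ R - Y||_F (Procrustes/Kabsch).
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(2)
umi = rng.standard_normal((100, 2))   # 100 conditions in a 2D PC space (toy)
theta = np.deg2rad(90)
true_R = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pmi = umi @ true_R + 0.05 * rng.standard_normal((100, 2))  # rotated + noise

R = best_rotation(umi, pmi)
residual = np.linalg.norm(umi @ R - pmi) / np.linalg.norm(pmi)
print(np.round(R, 2), residual)  # R recovers the 90-degree rotation, small residual
```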


1999 ◽  
Vol 09 (08) ◽  
pp. 1597-1617 ◽  
Author(s):  
HUBERT Y. CHAN ◽  
STANISLAW H. ŻAK

A chaotic neuron model with a linear saturating activation function is analyzed. The model accounts for relative refractoriness, that is, the gradual recovery of a biological neuron's responsiveness after a stimulus is applied. A neural network composed of such chaotic neurons, which includes the generalized Brain-State-in-a-Box (gBSB) model as a special case, is proposed and analyzed, and then used to implement associative memory. The existence and stability of the equilibrium points of the model are analyzed. Fuzzy logic is used to tune the associative memory parameters so as to direct the network trajectory toward memory patterns with the sought features. Simulation results illustrate the effectiveness of the memory-retrieval capability.
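A minimal single-neuron sketch of the two ingredients, the linear saturating activation and a geometrically decaying refractory trace, is given below. The update rule and parameter values are illustrative assumptions, not those analyzed in the paper.

```python
import numpy as np

def sat(u):
    # Linear saturating activation: identity on [-1, 1], clipped outside.
    return float(np.clip(u, -1.0, 1.0))

# Single neuron with relative refractoriness: the trace r decays
# geometrically and inhibits the neuron, so responsiveness recovers only
# gradually after each firing. Parameter values are illustrative.
a, alpha, k_r, theta = 3.0, 2.0, 0.9, 0.2
x, r = 0.5, 0.0
traj = []
for _ in range(200):
    r = k_r * r + x                  # accumulated, decaying refractory effect
    x = sat(a * x - alpha * r + theta)
    traj.append(x)
print(traj[-5:])                     # sustained irregular oscillation, not a fixed point
```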


2016 ◽  
pp. 614-633 ◽  
Author(s):  
Ahmed Mnasser ◽  
Faouzi Bouani ◽  
Mekki Ksouri

A model predictive control design for nonlinear systems based on artificial neural networks is discussed. Feedforward neural networks are used to describe the unknown nonlinear dynamics of the real system, and the backpropagation algorithm is used offline to train the neural network model. The optimal control actions are computed by solving a nonconvex optimization problem with the gradient method, in which the steepest-descent step size is a sensitive factor for convergence. An adaptive variable learning rate based on a Lyapunov function candidate is therefore proposed, the asymptotic convergence of the predictive controller is established, and the stability of the closed-loop system based on the neural model is proved. To demonstrate the robustness of the proposed predictive controller under set-point changes and load disturbances, a simulation example is considered. A comparison with the control performance achieved by a Levenberg-Marquardt method further illustrates the effectiveness of the proposed controller.
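A toy version of the control loop is sketched below: a (pretend pre-trained) feedforward model is rolled out over the horizon and the control sequence is refined by steepest descent, with the step size eta playing the sensitive role noted above. Finite differences stand in for backpropagated gradients; the network, cost weights, and horizon are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for a trained one-step predictor x(k+1) = f(x(k), u(k)).
W1 = 0.5 * rng.standard_normal((8, 2)); W2 = 0.5 * rng.standard_normal((1, 8))

def f(x, u):
    return (W2 @ np.tanh(W1 @ np.array([x, u]))).item()

def cost(x0, u_seq, ref):
    # Roll the neural model out over the horizon and accumulate tracking cost.
    x, J = x0, 0.0
    for u in u_seq:
        x = f(x, u)
        J += (x - ref) ** 2 + 0.01 * u ** 2
    return J

def mpc_step(x0, ref, horizon=5, iters=100, eta=0.1, eps=1e-4):
    u = np.zeros(horizon)
    for _ in range(iters):             # steepest descent on the horizon cost
        grad = np.array([(cost(x0, u + eps * e, ref) - cost(x0, u, ref)) / eps
                         for e in np.eye(horizon)])
        u -= eta * grad
    return u[0]                        # apply only the first control move

print(mpc_step(x0=0.0, ref=0.5))
```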

