CHAOTIC SYNCHRONIZATION USING A NETWORK OF NEURAL OSCILLATORS

2008 ◽  
Vol 18 (02) ◽  
pp. 157-164 ◽  
Author(s):  
V. SRINIVASA CHAKRAVARTHY ◽  
NEELIMA GUPTE ◽  
S. YOGESH ◽  
ATUL SALHOTRA

Synchronization of chaotic low-dimensional systems has been a topic of much recent research. Such systems have found applications in secure communications. In this work we show how synchronization can be achieved in a high-dimensional chaotic neural network. The network used in our studies is an extension of the Hopfield network, known as the Complex Hopfield Network (CHN). The CHN, which is also an associative memory, exhibits both fixed-point and limit-cycle (oscillatory) behavior. In the oscillatory mode, the network wanders chaotically from one stored pattern to another. We show how a pair of identical high-dimensional CHNs can be synchronized by communicating only a subset of the state vector components. The synchronizability of such a system is characterized through simulations.
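
The general coupling scheme described here can be illustrated with a minimal sketch: two identical networks where the response system receives only a subset of the drive system's state components at each step, and the synchronization error is monitored. The tanh map and random weights below are placeholders, not the paper's CHN equations.

```python
# Minimal sketch (not the paper's exact CHN): two identical tanh-unit networks,
# where the response copies a subset of the drive's state components each step.
import numpy as np

rng = np.random.default_rng(0)
N, K, STEPS = 50, 10, 2000         # network size, shared components, iterations
W = rng.normal(scale=2.0 / np.sqrt(N), size=(N, N))  # shared random weights

def step(x):
    return np.tanh(W @ x)          # hypothetical discrete-time network map

x = rng.uniform(-1, 1, N)          # drive system
y = rng.uniform(-1, 1, N)          # response system (identical weights)

for t in range(STEPS):
    x, y = step(x), step(y)
    y[:K] = x[:K]                  # communicate only the first K state components
    if t % 500 == 0:
        print(f"t={t:4d}  sync error = {np.linalg.norm(x - y):.3e}")
```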

Author(s):  
V. Srinivasa Chakravarthy

This chapter describes the Complex Hopfield Neural Network (CHNN), a complex-variable version of the Hopfield neural network, which can exist in both fixed-point and oscillatory modes. Memories can be stored by a complex version of Hebb's rule. In the fixed-point mode, the CHNN is similar to a continuous-time Hopfield network. In the oscillatory mode, when multiple patterns are stored, the network wanders chaotically among the patterns. The presence of chaos in this mode is verified by appropriate time series analysis. It is shown that adaptive connections can be used to control chaos and increase memory capacity. An electronic realization of the network in the oscillatory mode, with fixed and adaptive connections, shows an interesting tradeoff between energy expenditure and retrieval performance. It is shown how the intrinsic chaos in the CHNN can be used as a mechanism for annealing when the network is applied to quadratic optimization problems. The network's applicability to chaotic synchronization is also described.
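
A complex outer-product (Hebbian) storage rule of the kind mentioned can be sketched as follows; the exact CHNN dynamics in the chapter may differ, and the single recall step below is only illustrative.

```python
# Minimal sketch of a complex Hebbian (outer-product) storage rule.
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 4                                    # units, stored patterns
phases = rng.uniform(0, 2 * np.pi, size=(P, N))
patterns = np.exp(1j * phases)                  # unit-magnitude complex patterns

# Complex Hebb's rule: sum of outer products with the conjugate.
W = sum(np.outer(p, p.conj()) for p in patterns) / N
np.fill_diagonal(W, 0)                          # no self-connections

# One recall step: project the local field back onto the unit circle.
state = patterns[0] * np.exp(1j * rng.normal(scale=0.3, size=N))  # noisy cue
field = W @ state
state = field / np.abs(field)
overlap = np.abs(np.vdot(patterns[0], state)) / N
print(f"overlap with stored pattern after one update: {overlap:.3f}")
```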


2020 ◽  
Author(s):  
Alexander Feigin ◽  
Aleksei Seleznev ◽  
Dmitry Mukhin ◽  
Andrey Gavrilov ◽  
Evgeny Loskutov

We suggest a new method for the construction of data-driven dynamical models from observed multidimensional time series. The method is based on a recurrent neural network (RNN) with a specific structure, which allows for the joint reconstruction of both a low-dimensional embedding for the dynamical components in the data and an operator describing the low-dimensional evolution of the system. The key link of the method is a Bayesian optimization of both the model structure and the hypothesis about the data-generating law, which is needed for constructing the cost function for model learning. The form of the model we propose allows us to construct a stochastic dynamical system of moderate dimension that copies the dynamical properties of the original high-dimensional system. An advantage of the proposed method is the data-adaptive nature of the RNN model: it is based on adjustable nonlinear elements and has an easily scalable structure. The combination of the RNN with the Bayesian optimization procedure efficiently provides the model with statistically significant nonlinearity and dimension.

The method developed for model optimization aims to detect long-term connections between the system's states, i.e. the memory of the system, and the cost function used for model learning is constructed taking this factor into account. In particular, when there is no interaction between the dynamical component and noise, the method provides an unbiased reconstruction of the hidden deterministic system. In the opposite case, when the noise has a strong impact on the dynamics, the method yields a model in the form of a nonlinear stochastic map determining a Markovian process with memory. The Bayesian approach used for selecting both the optimal model structure and the appropriate cost function allows us to obtain statistically significant inferences about the dynamical signal in the data as well as its interaction with the noise components.

A data-driven model derived from a relatively short time series of the QG3 model, a high-dimensional nonlinear system producing chaotic behavior, is shown to be able to serve as a good simulator for the QG3 LFV components. The statistically significant recurrent states of the QG3 model, i.e. the well-known teleconnections in the NH, are all reproduced by the model obtained. Moreover, the statistics of the residence times of the model near these states are very close to the corresponding statistics of the original QG3 model. These results demonstrate that the method can be useful in modeling the variability of the real atmosphere.

The work was supported by the Russian Science Foundation (Grant No. 19-42-04121).
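
The model form described (a nonlinear stochastic map acting on a low-dimensional state) can be sketched minimally as below. The network weights are untrained placeholders and the dimension and noise level are assumptions; the actual embedding reconstruction and Bayesian learning procedure are not reproduced here.

```python
# Minimal sketch of the general model form: x_{t+1} = f(x_t) + noise on a
# low-dimensional state, with f a small (untrained, placeholder) network.
import numpy as np

rng = np.random.default_rng(2)
d_state = 3                                     # assumed low embedding dimension
W1 = rng.normal(scale=1.0, size=(16, d_state))  # placeholder weights
W2 = rng.normal(scale=0.5, size=(d_state, 16))
sigma = 0.05                                    # assumed stochastic forcing level

def f(x):
    return W2 @ np.tanh(W1 @ x)                 # deterministic part of the map

x = rng.normal(size=d_state)
trajectory = []
for _ in range(1000):
    x = f(x) + sigma * rng.normal(size=d_state) # stochastic evolution operator
    trajectory.append(x.copy())

trajectory = np.array(trajectory)
print("trajectory shape:", trajectory.shape)
print("per-component std:", trajectory.std(axis=0).round(3))
```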


Author(s):  
Felix Jimenez ◽  
Amanda Koepke ◽  
Mary Gregg ◽  
Michael Frey

A generative adversarial network (GAN) is an artificial neural network with a distinctive training architecture, designed to create examples that faithfully reproduce a target distribution. GANs have recently had particular success in applications involving high-dimensional distributions in areas such as image processing. Little work has been reported for low dimensions, where properties of GANs may be better identified and understood. We studied GAN performance in simulated low-dimensional settings, allowing us to transparently assess effects of target distribution complexity and training data sample size on GAN performance in a simple experiment. This experiment revealed two important forms of GAN error, tail underfilling and bridge bias, where the latter is analogous to the tunneling observed in high-dimensional GANs.
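
A low-dimensional GAN experiment of this kind can be sketched as follows, assuming PyTorch is available: a tiny generator and discriminator are trained on a 1-D two-mode Gaussian mixture, and the fraction of generated samples falling between the modes is then inspected. Architectures and hyperparameters are illustrative only, not those of the study.

```python
# Minimal sketch of a low-dimensional GAN trained on a two-mode 1-D target.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_target(n):
    comp = torch.randint(0, 2, (n, 1)).float()          # choose a mode
    return comp * 3.0 + (1 - comp) * -3.0 + 0.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = sample_target(128)
    fake = G(torch.randn(128, 2))

    # Discriminator update: real -> 1, fake -> 0.
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: push the discriminator to label fakes as real.
    g_loss = bce(D(G(torch.randn(128, 2))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(5000, 2)).squeeze()
print("fraction of samples between the modes (|x| < 1):",
      float((samples.abs() < 1).float().mean()))
```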


2021 ◽  
Author(s):  
Taylor W Webb ◽  
Kiyofumi Miyoshi ◽  
Tsz Yan So ◽  
Sivananda Rajananda ◽  
Hakwan Lau

Previous work has sought to understand decision confidence as a prediction of the probability that a decision will be correct, leading to debate over whether these predictions are optimal, and whether they rely on the same decision variable as decisions themselves. This work has generally relied on idealized, low-dimensional modeling frameworks, such as signal detection theory or Bayesian inference, leaving open the question of how decision confidence operates in the domain of high-dimensional, naturalistic stimuli. To address this, we developed a deep neural network model optimized to assess decision confidence directly given high-dimensional inputs such as images. The model naturally accounts for a number of puzzling dissociations between decisions and confidence, suggests a principled explanation of these dissociations in terms of optimization for the statistics of sensory inputs, and makes the surprising prediction that, despite these dissociations, decisions and confidence depend on a common decision variable.
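
The general architectural idea, a shared encoder feeding both a decision head and a confidence head that predicts the probability of being correct, can be sketched as below (assuming PyTorch). This is not the authors' model or training setup; input size and layer widths are placeholders.

```python
# Minimal sketch: shared encoder with separate decision and confidence heads.
import torch
import torch.nn as nn

class DecisionConfidenceNet(nn.Module):
    def __init__(self, in_dim=784, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.decision_head = nn.Linear(128, n_classes)                      # stimulus class
        self.confidence_head = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())  # P(correct)

    def forward(self, x):
        h = self.encoder(x)
        return self.decision_head(h), self.confidence_head(h)

net = DecisionConfidenceNet()
x = torch.randn(4, 784)                      # stand-in for image inputs
logits, confidence = net(x)
print(logits.shape, confidence.shape)        # torch.Size([4, 2]) torch.Size([4, 1])
```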


2002 ◽  
Vol 12 (05) ◽  
pp. 1129-1139 ◽  
Author(s):  
WEI LIN ◽  
JIONG RUAN ◽  
WEIRUI ZHAO

We investigate the differences among several definitions of the snap-back repeller, which is widely regarded as an inducement of chaos in nonlinear dynamical systems. By analyzing norms in different senses and through illustrative examples, we clarify why a snap-back repeller in the neighborhood of a fixed point, at which all eigenvalues of the corresponding Jacobian matrix are larger than 1 in modulus, might not imply chaos. Furthermore, we theoretically prove the existence of chaos in the sense of Marotto in a discrete neural network model when some parameters of the system enter certain regions. Numerical simulations and corresponding calculations, as concrete examples, reinforce the theoretical proof.
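
The expanding-fixed-point condition discussed above can be checked numerically for a small discrete-time neural network map; the map and parameters below are illustrative, not those of the paper, and the eigenvalue check alone does not establish a snap-back repeller (which additionally requires a homoclinic return to the repelling fixed point).

```python
# Check whether all Jacobian eigenvalues at a fixed point exceed 1 in modulus
# for the map x_{t+1} = mu * tanh(W x); the origin is a fixed point and the
# Jacobian there is mu * W (the derivative of tanh at 0 is 1).
import numpy as np

mu = 2.5
W = np.array([[1.0, -0.6],
              [0.7,  1.1]])

jacobian_at_origin = mu * W
eigvals = np.linalg.eigvals(jacobian_at_origin)
print("eigenvalue moduli:", np.abs(eigvals).round(3))
print("expanding fixed point:", bool(np.all(np.abs(eigvals) > 1)))
```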


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Paul T. Pearson

This paper develops a process whereby a high-dimensional clustering problem is solved using a neural network and a low-dimensional cluster diagram of the results is produced using the Mapper method from topological data analysis. The low-dimensional cluster diagram makes the neural network's solution to the high-dimensional clustering problem easy to visualize, interpret, and understand. As a case study, a clustering problem from a diabetes study is solved using a neural network. The clusters in this neural network are visualized using the Mapper method during several stages of the iterative process used to construct the neural network. The neural network and Mapper clustering diagram results for the diabetes study are validated by comparison to principal component analysis.
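
The Mapper construction used here can be sketched in a self-contained way: project the data through a lens function, cover the lens range with overlapping intervals, cluster the points within each interval, and connect clusters that share points. The synthetic data, lens, and clustering parameters below are placeholders, not the diabetes study's pipeline.

```python
# Minimal self-contained sketch of the Mapper construction.
import numpy as np
from itertools import combinations
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# Stand-in for high-dimensional data (e.g., hidden-layer activations).
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 10)) for c in (-2, 0, 2)])
lens = data @ rng.normal(size=10)               # hypothetical 1-D lens function

n_intervals, overlap = 8, 0.3
lo, hi = lens.min(), lens.max()
width = (hi - lo) / n_intervals

nodes = []                                      # each node: a set of data indices
for i in range(n_intervals):
    a = lo + i * width - overlap * width
    b = lo + (i + 1) * width + overlap * width
    idx = np.where((lens >= a) & (lens <= b))[0]
    if len(idx) == 0:
        continue
    labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(data[idx])
    for lab in set(labels) - {-1}:              # ignore DBSCAN noise points
        nodes.append(set(idx[labels == lab]))

edges = [(i, j) for i, j in combinations(range(len(nodes)), 2) if nodes[i] & nodes[j]]
print(f"Mapper graph: {len(nodes)} nodes, {len(edges)} edges")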


2006 ◽  
Vol 16 (08) ◽  
pp. 2425-2434 ◽  
Author(s):  
XU LI ◽  
GUANG LI ◽  
LE WANG ◽  
WALTER J. FREEMAN

This paper presents a simulation of a biological olfactory neural system with a KIII set, which is a high-dimensional chaotic neural network. The KIII set differs from conventional artificial neural networks by its use of chaotic attractors for memory locations that are accessed by chaotic trajectories. It was designed to simulate the patterns of action potentials and EEG waveforms observed in electrophysiological experiments, and has proved its utility as a model for biological intelligence in pattern classification. An application to the recognition of handwritten numerals is presented here, in which the classification performance of the KIII network under different noise levels was investigated.
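
The KIII set itself is a detailed, biologically grounded chaotic model and is not reproduced here; the sketch below only illustrates the kind of evaluation described, measuring a digit classifier's accuracy at several input-noise levels, using a generic scikit-learn classifier as a stand-in.

```python
# Generic noise-robustness evaluation on handwritten digits (stand-in model).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                                     # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.2, 0.4):
    noisy = np.clip(X_te + rng.normal(scale=noise, size=X_te.shape), 0, 1)
    print(f"noise std {noise:.1f}: accuracy = {clf.score(noisy, y_te):.3f}")
```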


2021 ◽  
Vol 33 (3) ◽  
pp. 827-852
Author(s):  
Omri Barak ◽  
Sandro Romani

Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity—the neural engineering framework. We analytically solve the framework for the classic ring model—a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
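
The encode/decode step at the heart of the neural engineering framework can be sketched for a static angular variable on the ring: neurons encode x = (cos θ, sin θ) through preferred directions and rectified tuning curves, and linear decoders are fit by least squares. Parameters are illustrative; this is not the paper's analytical treatment.

```python
# Minimal NEF-style encode/decode sketch for an angle on the ring.
import numpy as np

rng = np.random.default_rng(4)
N = 200                                                  # number of neurons

pref = rng.uniform(0, 2 * np.pi, N)                      # preferred directions
E = np.stack([np.cos(pref), np.sin(pref)], axis=1)       # encoders, shape (N, 2)
gain = rng.uniform(0.5, 2.0, N)
bias = rng.uniform(-0.5, 0.5, N)

def rates(theta):
    x = np.array([np.cos(theta), np.sin(theta)])
    return np.maximum(0.0, gain * (E @ x) + bias)        # rectified tuning curves

# Fit linear decoders by least squares over sampled angles.
thetas = np.linspace(0, 2 * np.pi, 400, endpoint=False)
A = np.stack([rates(t) for t in thetas])                 # (400, N) activity matrix
X = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # (400, 2) targets
D, *_ = np.linalg.lstsq(A, X, rcond=None)                # decoders, shape (N, 2)

theta_test = 1.0
x_hat = rates(theta_test) @ D
print("true angle:", theta_test, " decoded angle:", float(np.arctan2(x_hat[1], x_hat[0])))
```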


ROBOT ◽  
2010 ◽  
Vol 32 (4) ◽  
pp. 478-483 ◽  
Author(s):  
Xiuhua NI ◽  
Weishan CHEN ◽  
Junkao LIU ◽  
Shengjun SHI
