SYNCHRONOUS CHAOS IN HIGH-DIMENSIONAL MODULAR NEURAL NETWORKS

1996 ◽  
Vol 06 (11) ◽  
pp. 2055-2067 ◽  
Author(s):  
THOMAS WENNEKERS ◽  
FRANK PASEMANN

The relationship between certain types of high-dimensional neural networks and low-dimensional prototypical equations (neuromodules) is investigated. The high-dimensional systems consist of finitely many pools of identical, dissipative, nonlinear single units operating in discrete time. Under the assumption of random connections within and between pools, the system can be reduced to a set of only a few equations which, asymptotically in time and system size, describe the behavior of every single unit arbitrarily well. This result can be viewed as synchronization of the single units in each pool. It is stated as a theorem on systems of nonlinearly coupled maps, which gives explicit conditions on the single-unit dynamics and the nature of the random connections. As an application, we compare a 2-pool network with the corresponding two-dimensional dynamics. The bifurcation diagrams of the two systems become very similar even for moderate system size (N = 50) and large disorder in the connection strengths (50% of the mean), despite the fact that the systems exhibit fairly complex behavior (quasiperiodicity, chaos, coexisting attractors).
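
A minimal numerical sketch of the setting (not the authors' code): two pools of N discrete-time sigmoidal units with random connections scattered around the pool-to-pool mean strengths with 50% relative disorder, iterated alongside the corresponding two-dimensional neuromodule. The single-unit map, coupling means, and biases below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 50                            # units per pool (moderate size, as in the paper)
    w = np.array([[ 4.0, -2.0],       # mean pool-to-pool coupling strengths
                  [ 2.0, -4.0]])      # (illustrative values, not from the paper)
    theta = np.array([0.5, -0.5])     # pool biases (assumed)

    def f(x):                         # dissipative single-unit transfer function
        return 1.0 / (1.0 + np.exp(-x))

    # random connections: mean w[i, j] / N with 50% relative disorder
    W = np.block([[w[i, j] / N * (1 + 0.5 * rng.standard_normal((N, N)))
                   for j in range(2)] for i in range(2)])

    x = rng.standard_normal(2 * N)           # high-dimensional network state
    m = np.zeros(2)                          # two-dimensional neuromodule state
    for t in range(1000):
        x = f(W @ x + np.repeat(theta, N))   # full network update
        m = f(w @ m + theta)                 # low-dimensional prototype update

    print(np.std(x[:N]), np.std(x[N:]))      # small if each pool synchronizes
    print(x[:N].mean(), x[N:].mean(), m)     # pool means track the 2-d module

When the synchronization theorem applies, the within-pool standard deviations shrink as units collapse onto their pool means, which in turn follow the two-dimensional prototype.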

2020 ◽  
pp. 105971232092291
Author(s):  
Guido Schillaci ◽  
Antonio Pico Villalpando ◽  
Verena V Hafner ◽  
Peter Hanappe ◽  
David Colliaux ◽  
...  

This work presents an architecture that generates curiosity-driven, goal-directed exploration behaviours for the image sensor of a microfarming robot. It combines deep neural networks for offline unsupervised learning of low-dimensional features from images with shallow neural networks, trained online, that represent the inverse and forward kinematics of the system. The artificial curiosity system assigns interest values to a set of pre-defined goals and drives the exploration towards those expected to maximise the learning progress. We propose integrating an episodic memory into intrinsic motivation systems to address catastrophic forgetting, which is typically experienced when performing online updates of artificial neural networks. Our results show that adopting an episodic memory system not only prevents the computational models from quickly forgetting previously acquired knowledge but also provides new avenues for modulating the balance between plasticity and stability of the models.
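
A rough sketch of the proposed role of episodic memory during online learning, under assumed interfaces: EpisodicMemory and model.fit_batch are hypothetical names standing in for the paper's components.

    import random
    from collections import deque

    class EpisodicMemory:
        """Fixed-capacity store of past (input, target) experiences."""
        def __init__(self, capacity=500):
            self.buffer = deque(maxlen=capacity)

        def store(self, x, y):
            self.buffer.append((x, y))

        def sample(self, k):
            return random.sample(list(self.buffer), min(k, len(self.buffer)))

    def online_update(model, x_new, y_new, memory, replay_k=16):
        """One online step: train on the new sample plus a replayed batch.

        model.fit_batch stands in for whatever incremental update the
        forward/inverse-model networks expose; it is an assumed interface.
        """
        batch = memory.sample(replay_k) + [(x_new, y_new)]
        xs, ys = zip(*batch)
        model.fit_batch(xs, ys)        # interleave old and new experiences
        memory.store(x_new, y_new)     # then add the new sample to memory

Mixing replayed samples into each update is what counters catastrophic forgetting; the replay count replay_k is one knob for trading plasticity against stability.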


Entropy ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. 727 ◽  
Author(s):  
Hlynur Jónsson ◽  
Giovanni Cherubini ◽  
Evangelos Eleftheriou

Information theory concepts are leveraged with the goal of better understanding and improving Deep Neural Networks (DNNs). The information plane of neural networks describes the behavior, during training, of the mutual information at various depths between input/output and hidden-layer variables. Previous analysis revealed that, in networks where finiteness of the mutual information can be established, most of the training epochs are spent on compressing the input. However, the estimation of mutual information is nontrivial for high-dimensional continuous random variables. Therefore, the computation of the mutual information for DNNs and its visualization on the information plane have mostly been restricted to low-complexity fully connected networks. In fact, even the existence of the compression phase in complex DNNs has been questioned and viewed as an open problem. In this paper, we present the convergence of mutual information on the information plane for a high-dimensional VGG-16 Convolutional Neural Network (CNN) by resorting to Mutual Information Neural Estimation (MINE), thus confirming and extending the results obtained with low-dimensional fully connected networks. Furthermore, we demonstrate the benefits of regularizing a network, especially for a large number of training epochs, by adopting mutual information estimates as additional terms in the loss function characteristic of the network. Experimental results show that the regularization stabilizes the test accuracy and significantly reduces its variance.
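
A compact sketch of a MINE-style estimator of the kind the paper relies on, based on the Donsker-Varadhan lower bound I(X; Z) >= E_p(x,z)[T] - log E_p(x)p(z)[exp(T)]; the statistics-network architecture and sizes here are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class StatisticsNetwork(nn.Module):
        """T_phi(x, z) for the Donsker-Varadhan bound (assumed architecture)."""
        def __init__(self, dx, dz, hidden=100):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dx + dz, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))

        def forward(self, x, z):
            return self.net(torch.cat([x, z], dim=1))

    def mi_lower_bound(T, x, z):
        joint = T(x, z).mean()                      # samples from p(x, z)
        z_perm = z[torch.randperm(z.size(0))]       # break pairing -> p(x) p(z)
        marginal = (torch.logsumexp(T(x, z_perm), dim=0)
                    - torch.log(torch.tensor(float(z.size(0)))))
        return joint - marginal   # maximize over T's parameters

The bound is tightened by gradient ascent on the statistics network; the resulting estimate can then enter the task loss as an additional regularization term, as the paper proposes.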


2013 ◽  
Vol 25 (3) ◽  
pp. 626-649 ◽  
Author(s):  
David Sussillo ◽  
Omri Barak

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.
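
The core of the technique can be sketched as plain numerical optimization: minimize q(x) = 0.5 * ||F(x) - x||^2 for the input-frozen state-update map F from many initial states, then separate near-zero minima (fixed points) from small-but-nonzero ones (slow points). The toy two-dimensional map below is illustrative, not from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def find_fixed_and_slow_points(F, x0_samples, tol=1e-8):
        """Minimize q(x) = 0.5 * ||F(x) - x||^2 from many starting states."""
        results = []
        for x0 in x0_samples:
            q = lambda x: 0.5 * np.sum((F(x) - x) ** 2)
            res = minimize(q, x0, method='L-BFGS-B')
            results.append((res.x, res.fun))
        fixed = [x for x, qv in results if qv < tol]    # true fixed points
        slow = [x for x, qv in results if qv >= tol]    # regions of slow movement
        return fixed, slow

    # illustrative RNN-like map; linearize around the returned points
    W = np.array([[0.9, -0.5], [0.5, 0.9]])
    F = lambda x: np.tanh(W @ x)
    fixed, slow = find_fixed_and_slow_points(F, np.random.randn(20, 2))

Linearizing F around each returned point (e.g. via its Jacobian) then exposes the local dynamics used to reverse-engineer the trained network.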


2002 ◽  
Vol 14 (5) ◽  
pp. 1195-1232 ◽  
Author(s):  
Douglas L. T. Rohde

Multidimensional scaling (MDS) is the process of transforming a set of points in a high-dimensional space to a lower-dimensional one while preserving the relative distances between pairs of points. Although effective methods have been developed for solving a variety of MDS problems, they mainly depend on the vectors in the lower-dimensional space having real-valued components. For some applications, the training of neural networks in particular, it is preferable or necessary to obtain vectors in a discrete, binary space. Unfortunately, MDS into a low-dimensional discrete space appears to be a significantly harder problem than MDS into a continuous space. This article introduces and analyzes several methods for performing approximately optimized binary MDS.
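
As a baseline illustration of the discrete problem, here is a naive greedy bit-flip search for binary MDS; it sketches the problem setup only and is not one of the article's analyzed methods.

    import numpy as np

    def binary_mds(D, n_bits=16, iters=20, seed=0):
        """Greedily flip bits so that scaled Hamming distances between
        binary codes approximate a target distance matrix D."""
        rng = np.random.default_rng(seed)
        n = D.shape[0]
        B = rng.integers(0, 2, (n, n_bits))

        def stress(B):
            H = (B[:, None, :] != B[None, :, :]).sum(-1)   # Hamming distances
            return ((H / n_bits - D / D.max()) ** 2).sum()

        best = stress(B)
        for _ in range(iters):
            for i in range(n):
                for k in range(n_bits):
                    B[i, k] ^= 1              # try one bit flip
                    s = stress(B)
                    if s < best:
                        best = s              # keep improving flips
                    else:
                        B[i, k] ^= 1          # revert
        return B

Because each coordinate is a single bit, the search space is combinatorial and local moves like these stall easily, which is one way to see why binary MDS is significantly harder than its continuous counterpart.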


2022 ◽  
Vol 41 (2) ◽  
pp. 1-15
Author(s):  
Chuankun Zheng ◽  
Ruzhang Zheng ◽  
Rui Wang ◽  
Shuang Zhao ◽  
Hujun Bao

In this article, we introduce a compact representation for measured BRDFs by leveraging Neural Processes (NPs). Unlike prior methods that express those BRDFs as discrete high-dimensional matrices or tensors, our technique considers measured BRDFs as continuous functions and works in the corresponding function spaces. Specifically, provided the evaluations of a set of BRDFs, such as those in the MERL and EPFL datasets, our method learns a low-dimensional latent space as well as a few neural networks to encode and decode these measured BRDFs or new BRDFs into and from this space in a non-linear fashion. Leveraging this latent space and the flexibility offered by the NP formulation, our encoded BRDFs are highly compact and offer a level of accuracy better than prior methods. We demonstrate the practical usefulness of our approach via two important applications, BRDF compression and editing. Additionally, we design two alternative post-trained decoders to, respectively, achieve a better compression ratio for individual BRDFs and enable importance sampling of BRDFs.
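
A minimal NP-style encoder/decoder pair conveying the idea: per-sample embeddings of (direction, reflectance) observations are averaged into a latent code, from which a decoder evaluates the BRDF at query directions. The layer sizes, latent dimension, and the 4-dimensional direction parameterization are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class BRDFNeuralProcess(nn.Module):
        """Sketch of an NP-style latent representation for measured BRDFs."""
        def __init__(self, d_dir=4, d_latent=32, hidden=128):
            super().__init__()
            # encoder: embed (direction, RGB value) pairs, then aggregate
            self.enc = nn.Sequential(
                nn.Linear(d_dir + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, d_latent))
            # decoder: latent code + query direction -> RGB reflectance
            self.dec = nn.Sequential(
                nn.Linear(d_latent + d_dir, hidden), nn.ReLU(),
                nn.Linear(hidden, 3))

        def forward(self, dirs_ctx, vals_ctx, dirs_query):
            z = self.enc(torch.cat([dirs_ctx, vals_ctx], -1)).mean(0)
            z = z.expand(dirs_query.size(0), -1)       # share code per query
            return self.dec(torch.cat([z, dirs_query], -1))

The mean aggregation makes the latent code independent of how many (and which) BRDF samples are observed, which is what lets a single latent space serve both the encoding of measured BRDFs and the decoding of new ones.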


1998 ◽  
Vol 10 (3) ◽  
pp. 651-669 ◽  
Author(s):  
Toru Aonishi ◽  
Koji Kurata

Dynamic link matching is a self-organizing topographic mapping between a template image and a data image. The mapping tends to be continuous, linking points that share similar local features, which can deform the mapping to some degree. To analyze such deformation mathematically, we reduced the model equation to a phase equation, which enabled us to clarify the principles of the deformation process and the relationship between high-dimensional models and low-dimensional ones. We also elucidated the characteristics of the model in the context of standard regularization theory.


2017 ◽  
Vol 68 (10) ◽  
pp. 2224-2227 ◽  
Author(s):  
Camelia Gavrila

The aim of this paper is to determine a mathematical model that establishes the relationship between ozone levels, other meteorological data, and air quality. The model is valid for any season and any area and is based on real-time data measured in Bucharest and its surroundings. The study uses artificial neural networks to model the nonlinear relationships between the ground-level (immission) ozone concentration and the meteorological factors relative humidity (RH), global solar radiation (SR), and air temperature (TEMP). The ozone concentration also depends on the following primary pollutants: nitrogen oxides (NO, NO2) and carbon monoxide (CO). To achieve this, the Levenberg-Marquardt algorithm was implemented in Scilab, a numerical computation package. Sensitivity tests demonstrated the robustness of the model and its applicability to short-term ozone prediction.
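
A sketch of the fitting procedure, transplanted to Python for illustration (the paper implements it in Scilab): a one-hidden-layer network mapping the six predictors to the ozone concentration, fitted by the Levenberg-Marquardt algorithm through SciPy's least-squares interface. Layer sizes and the placeholder data are assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    n_in, n_hid = 6, 5   # inputs: RH, SR, TEMP, NO, NO2, CO

    def unpack(p):
        i = n_in * n_hid
        W1 = p[:i].reshape(n_hid, n_in)
        b1 = p[i:i + n_hid]
        W2 = p[i + n_hid:i + 2 * n_hid]
        b2 = p[-1]
        return W1, b1, W2, b2

    def predict(p, X):
        W1, b1, W2, b2 = unpack(p)
        return np.tanh(X @ W1.T + b1) @ W2 + b2   # one hidden layer

    def residuals(p, X, y):
        return predict(p, X) - y

    rng = np.random.default_rng(0)
    X = rng.random((200, n_in))     # placeholder measurements
    y = rng.random(200)             # placeholder ozone concentrations
    p0 = 0.1 * rng.standard_normal(n_in * n_hid + 2 * n_hid + 1)
    fit = least_squares(residuals, p0, args=(X, y), method='lm')

Levenberg-Marquardt interpolates between gradient descent and Gauss-Newton steps, which suits this kind of small nonlinear regression well.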


2020 ◽  
Vol 10 (5) ◽  
pp. 1797 ◽  
Author(s):  
Mera Kartika Delimayanti ◽  
Bedy Purnama ◽  
Ngoc Giang Nguyen ◽  
Mohammad Reza Faisal ◽  
Kunti Robiatul Mahmudah ◽  
...  

Manual sleep stage classification is a time-consuming but necessary step in the diagnosis and treatment of sleep disorders, and its automation has been an area of active study. Previous work has applied low-dimensional fast Fourier transform (FFT) features together with a variety of machine learning algorithms. In this paper, we demonstrate that features extracted from EEG signals via the FFT can improve the performance of automated sleep stage classification with machine learning methods. Unlike previous work using the FFT, we incorporate thousands of FFT features in order to classify the sleep stages into 2–6 classes. Using the expanded version of the Sleep-EDF dataset with 61 recordings, our method outperformed other state-of-the-art methods. This result indicates that high-dimensional FFT features combined with simple feature selection are effective for improving automated sleep stage classification.
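
A schematic pipeline of the approach as described: high-dimensional FFT magnitude features from EEG epochs, a simple univariate feature selection step, and a standard classifier. The sampling rate, number of selected features, and classifier choice are assumptions, not the paper's exact configuration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline

    def fft_features(epochs):
        """Magnitude spectra of EEG epochs -> thousands of features each."""
        return np.abs(np.fft.rfft(epochs, axis=1))

    # placeholder data: 500 epochs of 3000 samples (30 s at 100 Hz), 5 stages
    rng = np.random.default_rng(0)
    X = fft_features(rng.standard_normal((500, 3000)))   # 1501 features/epoch
    y = rng.integers(0, 5, 500)

    clf = make_pipeline(SelectKBest(f_classif, k=300),
                        RandomForestClassifier(n_estimators=200, random_state=0))
    clf.fit(X, y)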


2021 ◽  
pp. 1-12
Author(s):  
Jian Zheng ◽  
Jianfeng Wang ◽  
Yanping Chen ◽  
Shuping Chen ◽  
Jingjin Chen ◽  
...  

Neural networks can approximate data because they are composed of many compact nonlinear layers. In high-dimensional space, however, the curse of dimensionality makes the data distribution sparse, so the data alone provide insufficient information, and approximating data in high-dimensional space with neural networks becomes even harder. To address this issue, we use the Lipschitz condition to derive two deviations: the deviation of neural networks trained on high-dimensional functions, and the deviation of high-dimensional functions approximating data. The purpose is to improve the ability of neural networks to approximate data in high-dimensional space. Experimental results show that neural networks trained on high-dimensional functions outperform those trained directly on data when approximating data in high-dimensional space. We find that networks trained on high-dimensional functions are better suited to high-dimensional space than networks trained on data, so there is no need to retain large amounts of data for network training. Our findings also suggest that, in high-dimensional space, tuning the hidden layers of a neural network has little positive effect on the precision of data approximation.

