Rate-space attractors and low dimensional dynamics interact with spike-synchrony statistics in neural networks

2019 ◽  
Author(s):  
Daniel Scott ◽  
Michael Frank


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Hamidreza Abbaspourazad ◽  
Mahdi Choudhury ◽  
Yan T. Wong ◽  
Bijan Pesaran ◽  
Maryam M. Shanechi

Abstract Motor function depends on neural dynamics spanning multiple spatiotemporal scales of population activity, from the spiking of neurons to larger-scale local field potentials (LFP). How multiple scales of low-dimensional population dynamics are related in the control of movements remains unknown. Multiscale neural dynamics are especially important to study in naturalistic reach-and-grasp movements, which are relatively under-explored. We learn novel multiscale dynamical models for spike-LFP network activity in monkeys performing naturalistic reach-and-grasps. We show that the low-dimensional dynamics of spiking and LFP activity exhibited several principal modes, each with a unique decay-frequency characteristic. One principal mode dominantly predicted movements. Despite distinct principal modes existing at the two scales, this predictive mode was multiscale and shared between scales, and was shared across sessions and monkeys, yet did not simply replicate behavioral modes. Further, this multiscale mode's decay-frequency explained behavior. We propose that multiscale, low-dimensional motor cortical state dynamics reflect the neural control of naturalistic reach-and-grasp behaviors.
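The notion of principal modes with a decay-frequency characteristic can be illustrated with a linear dynamical model: the eigenvalues of a fitted state-transition matrix give each mode's per-step decay (magnitude) and rotation frequency (angle). The simulated activity, bin width, and mode parameters below are illustrative assumptions, not the authors' spike-LFP models.

```python
import numpy as np

# Sketch: fit x_{t+1} = A x_t to simulated population activity by least
# squares; eigenvalues of A give each mode's decay (|lambda|) and
# rotation frequency (angle(lambda) / (2*pi*dt)).
rng = np.random.default_rng(0)
dt = 0.01                                   # 10 ms bins (assumed)
theta = 2 * np.pi * 3.0 * dt                # a 3 Hz oscillatory mode
A_true = np.zeros((3, 3))
A_true[:2, :2] = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                                  [np.sin(theta),  np.cos(theta)]])
A_true[2, 2] = 0.90                         # plus one non-oscillatory mode
X = np.zeros((3, 500))
X[:, 0] = rng.standard_normal(3)
for t in range(499):
    X[:, t + 1] = A_true @ X[:, t] + 0.01 * rng.standard_normal(3)

# Least-squares fit of A from one-step transitions
A_hat = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
eig = np.linalg.eigvals(A_hat)
decay = np.abs(eig)                         # per-step decay of each mode
freq_hz = np.abs(np.angle(eig)) / (2 * np.pi * dt)
for d, f in sorted(zip(decay, freq_hz), reverse=True):
    print(f"mode: decay/step = {d:.3f}, frequency = {f:.2f} Hz")
```

The fitted eigenvalues recover the planted decay-frequency pairs (about 0.98/step at 3 Hz and 0.90/step at 0 Hz).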


2006 ◽  
Vol 16 (09) ◽  
pp. 2729-2736 ◽  
Author(s):  
XIAO-SONG YANG ◽  
YAN HUANG

This paper presents a new class of chaotic and hyperchaotic low-dimensional cellular neural networks modeled by ordinary differential equations with simple connection matrices. The chaoticity of these neural networks is confirmed by numerically computed positive Lyapunov exponents.
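As a sketch of the numerical procedure behind such claims, the largest Lyapunov exponent can be estimated by tracking the divergence of two nearby trajectories with periodic renormalization (Benettin's method). The Lorenz system stands in here for a chaotic ODE; the same loop applies to a cellular neural network's state equations.

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classic chaotic ODE used as a stand-in test system
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(f, x, h):
    k1 = f(x); k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, d0 = 0.01, 1e-8
x = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                       # discard the transient
    x = rk4_step(lorenz, x, h)
y = x + np.array([d0, 0.0, 0.0])            # nearby companion trajectory
log_sum, n_steps = 0.0, 20000
for _ in range(n_steps):
    x = rk4_step(lorenz, x, h)
    y = rk4_step(lorenz, y, h)
    d = np.linalg.norm(y - x)
    log_sum += np.log(d / d0)
    y = x + (d0 / d) * (y - x)              # renormalize the separation
lam = log_sum / (n_steps * h)
print(f"largest Lyapunov exponent ~ {lam:.2f}")
```

For the Lorenz system the estimate converges near the known value of about 0.9; a positive value indicates chaos.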


2021 ◽  
Vol 11 (3) ◽  
pp. 1013
Author(s):  
Zvezdan Lončarević ◽  
Rok Pahič ◽  
Aleš Ude ◽  
Andrej Gams

Autonomous robot learning in unstructured environments often faces the problem that the dimensionality of the search space is too large for practical applications. Dimensionality reduction techniques have been developed to address this problem and describe motor skills in low-dimensional latent spaces. Most of these techniques require the availability of a sufficiently large database of example task executions to compute the latent space. However, the generation of many example task executions on a real robot is tedious and prone to errors and equipment failures. The main result of this paper is a new approach for efficient database gathering by performing a small number of task executions with a real robot and applying statistical generalization, e.g., Gaussian process regression, to generate more data. We have shown in our experiments that the data generated this way can be used for dimensionality reduction with autoencoder neural networks. The resulting latent spaces can be exploited to implement robot learning more efficiently. The proposed approach has been evaluated on the problem of robotic throwing at a target. Simulation and real-world results with the humanoid robot TALOS are provided. They confirm the effectiveness of generalization-based database acquisition and the efficiency of learning in a low-dimensional latent space.
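The database-gathering idea can be sketched with a plain-NumPy Gaussian process regression: a few measured executions, indexed by a task parameter, are interpolated to synthesize many more. The task parameterization, kernel width, and array shapes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # Squared-exponential kernel between two 1-D input arrays
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
targets = np.array([0.5, 0.8, 1.1, 1.4, 1.7])     # measured target distances
# Fake 4-D "throw parameter" vectors for each measured execution
params = np.sin(3 * targets[:, None]) + 0.01 * rng.standard_normal((5, 4))

# GP posterior mean at many new query targets (jitter for stability)
queries = np.linspace(0.5, 1.7, 50)
K = rbf(targets, targets) + 1e-6 * np.eye(5)
Ks = rbf(queries, targets)
generated = Ks @ np.linalg.solve(K, params)       # (50, 4) synthetic examples
print(generated.shape)
```

The 50 generated examples could then serve as training data for an autoencoder that learns a low-dimensional latent space of the skill.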


2001 ◽  
Vol 12 (5) ◽  
pp. 859-864
Author(s):  
V.K. Jain ◽  
A.K. Srivastava ◽  
Anup Das ◽  
Vikas Rai

2001 ◽  
Vol 435 ◽  
pp. 81-91 ◽  
Author(s):  
JAVIER JIMÉNEZ ◽  
MARK P. SIMENS

The low-dimensional dynamics of the structures in a turbulent wall flow are studied by means of numerical simulations. These are made both ‘minimal’, in the sense that they contain a single copy of each relevant structure, and ‘autonomous’ in the sense that there is no outer turbulent flow with which they can interact. The interaction is prevented by a numerical mask that damps the flow above a given wall distance, and the flow behaviour is studied as a function of the mask height. The simplest case found is a streamwise wave that propagates without change. It takes the form of a single wavy low-velocity streak flanked by two counter-rotating staggered quasi-streamwise vortices, and is found when the height of the numerical masking function is less than δ₁⁺ ≈ 50. As the mask height is increased, this solution bifurcates into an almost-perfect limit cycle, a two-frequency torus, weak chaos, and full-fledged bursting turbulence. The transition is essentially complete when δ₁⁺ ≈ 70, even if the wall-parallel dimensions of the computational box are small enough for the bursting turbulence to be metastable, lasting only for a few bursting cycles. Similar low-dimensional dynamics are found in somewhat larger boxes, containing two copies of the basic structures, in which the bursting turbulence is self-sustaining.


2021 ◽  
Author(s):  
Rogini Runghen ◽  
Daniel B Stouffer ◽  
Giulio Valentino Dalla Riva

Collecting network interaction data is difficult. Non-exhaustive sampling and complex hidden processes often result in an incomplete data set. Thus, identifying potentially present but unobserved interactions is crucial both for understanding the structure of large-scale data and for predicting how previously unseen elements will interact. Recent studies in network analysis have shown that accounting for metadata (such as node attributes) can improve both our understanding of how nodes interact with one another and the accuracy of link prediction. However, the dimension of the object we need to learn to predict interactions in a network grows quickly with the number of nodes, so the problem becomes computationally and conceptually challenging for large networks. Here, we present a new predictive procedure combining a graph embedding method with machine learning techniques to predict interactions on the basis of nodes' metadata. Graph embedding methods project the nodes of a network onto a low-dimensional latent feature space. The position of the nodes in the latent feature space can then be used to predict interactions between nodes. Learning a mapping from the nodes' metadata to their position in the latent feature space corresponds to a classic, low-dimensional machine learning problem. In our study we used the Random Dot Product Graph model to estimate the embedding of an observed network, and we tested different neural network architectures to predict the position of nodes in the latent feature space. Flexible machine learning techniques for mapping the nodes onto their latent positions allow us to account for multivariate and possibly complex node metadata. To illustrate the utility of the proposed procedure, we apply it to a large dataset of tourist visits to destinations across New Zealand. We found that our procedure accurately predicts interactions for both existing nodes and nodes newly added to the network, while being computationally feasible even for very large networks. Overall, our study highlights that by exploiting the properties of a well-understood statistical model for complex networks and combining it with standard machine learning techniques, we can simplify the link prediction problem when incorporating multivariate node metadata. Our procedure can be immediately applied to different types of networks and to a wide variety of data from different systems. As such, from both a network science and a data science perspective, our work offers a flexible and generalisable procedure for link prediction.
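A minimal sketch of the pipeline: adjacency spectral embedding (the standard Random Dot Product Graph estimator, via an SVD of the adjacency matrix) recovers latent positions, and a map from node metadata to those positions supports link prediction for new nodes. Least squares stands in for the neural networks used in the study, and all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 2
meta = rng.uniform(0.2, 0.9, size=(n, d))         # node metadata
Z = meta                                          # toy: latent positions = metadata
P = np.clip(Z @ Z.T, 0, 1)                        # RDPG link probabilities
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                    # undirected, no self-loops

# Adjacency spectral embedding: top-d singular vectors, scaled
U, s, _ = np.linalg.svd(A)
Z_hat = U[:, :d] * np.sqrt(s[:d])

# Learn metadata -> latent position (least squares as a stand-in for a
# neural network), then predict links for a brand-new node from metadata
W, *_ = np.linalg.lstsq(meta, Z_hat, rcond=None)
z_new = np.array([[0.5, 0.5]]) @ W                # hypothetical new node
p_new = np.clip(z_new @ Z_hat.T, 0, 1)            # predicted link probabilities
print(p_new.shape)
```

Because the new node never appears in the adjacency matrix, its predicted link probabilities come entirely from its metadata, which is what makes the procedure applicable to newly added nodes.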


2000 ◽  
Author(s):  
Taejun Choi ◽  
Yung C. Shin

Abstract A new method for on-line chatter detection is presented. The proposed method characterizes the significant transition from high-dimensional to low-dimensional dynamics in the cutting process at the onset of chatter. Based on the similarity of the cutting process to a nearly-1/f process, a wavelet-based maximum likelihood (ML) estimation algorithm is applied for on-line chatter detection. The presented chatter detection index γ is independent of the cutting conditions and gives excellent detection accuracy at an acceptable computational cost, which makes it suitable for on-line implementation. The validity of the proposed method is demonstrated through tests with extensive real data obtained from turning and milling processes.
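One ingredient of such detectors can be sketched: for a nearly-1/f^α process, the variance of Haar wavelet coefficients grows roughly as 2^(jα) across scales j, so the slope of log2(variance) versus scale estimates the spectral exponent. This shows only that exponent-estimation step on synthetic Brownian motion (α = 2), not the paper's ML estimator or its index γ.

```python
import numpy as np

def haar_detail(x):
    # One level of the orthonormal Haar transform: detail and approximation
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2), (x[0::2] + x[1::2]) / np.sqrt(2)

rng = np.random.default_rng(3)
# Brownian motion: a 1/f^2 process (spectral exponent alpha = 2)
signal = np.cumsum(rng.standard_normal(2 ** 14))

log_vars, scales = [], range(1, 7)
approx = signal
for j in scales:
    detail, approx = haar_detail(approx)
    log_vars.append(np.log2(np.var(detail)))
slope = np.polyfit(list(scales), log_vars, 1)[0]
print(f"estimated spectral exponent ~ {slope:.2f}")
```

For Brownian motion the slope comes out close to 2; a sudden change in such an exponent estimate over a sliding window is one plausible way to flag a dynamical transition like chatter onset.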


Author(s):  
Stanislav Fort ◽  
Adam Scherlis

We explore the loss landscape of fully-connected and convolutional neural networks using random, low-dimensional hyperplanes and hyperspheres. Evaluating the Hessian, H, of the loss function on these hypersurfaces, we observe 1) an unusual excess of the number of positive eigenvalues of H, and 2) a large value of Tr(H)/||H|| over a well-defined range of configuration-space radii, corresponding to a thick, hollow, spherical shell we refer to as the Goldilocks zone. We observe this effect for fully-connected neural networks over a range of network widths and depths on MNIST and CIFAR-10 datasets with the ReLU and tanh non-linearities, and a similar effect for convolutional networks. Using our observations, we demonstrate a close connection between the Goldilocks zone, measures of local convexity/prevalence of positive curvature, and the suitability of a network initialization. We show that the high and stable accuracy reached when optimizing on random, low-dimensional hypersurfaces is directly related to the overlap between the hypersurface and the Goldilocks zone, and as a corollary demonstrate that the notion of intrinsic dimension is initialization-dependent. We note that common initialization techniques initialize neural networks in this particular region of unusually high convexity/prevalence of positive curvature, and offer a geometric intuition for their success. Furthermore, we demonstrate that initializing a neural network at a number of points and selecting for high measures of local convexity such as Tr(H)/||H||, number of positive eigenvalues of H, or low initial loss, leads to statistically significantly faster training on MNIST. Based on our observations, we hypothesize that the Goldilocks zone contains an unusually high density of suitable initialization configurations.
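The curvature measures named above can be computed directly for a toy model: a central finite-difference Hessian of a small two-layer tanh network's loss at a random initialization, from which Tr(H)/||H|| (Frobenius norm assumed here) and the count of positive eigenvalues follow. The network, data, and initialization scale are stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((32, 3)); y = rng.standard_normal(32)

def loss(w):
    # Tiny two-layer tanh network, mean-squared-error loss
    W1 = w[:12].reshape(3, 4); W2 = w[12:16]
    pred = np.tanh(X @ W1) @ W2
    return np.mean((pred - y) ** 2)

w0 = rng.standard_normal(16) / np.sqrt(3)   # roughly 1/sqrt(fan-in) init
n, eps = 16, 1e-4
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        e_i = np.eye(n)[i] * eps; e_j = np.eye(n)[j] * eps
        # Central finite difference for the (i, j) Hessian entry
        H[i, j] = (loss(w0 + e_i + e_j) - loss(w0 + e_i - e_j)
                   - loss(w0 - e_i + e_j) + loss(w0 - e_i - e_j)) / (4 * eps**2)
H = 0.5 * (H + H.T)                         # symmetrize numerical noise away
eigs = np.linalg.eigvalsh(H)
ratio = np.trace(H) / np.linalg.norm(H, "fro")
print(f"positive eigenvalues: {(eigs > 0).sum()}/{n}, Tr(H)/||H|| = {ratio:.2f}")
```

Sweeping the radius of w0 and repeating this measurement is, in miniature, how one could trace out a shell of unusually high positive curvature.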

