Learning dynamics in ‘visible’ neural networks

Author(s): Pierre Peretto

2021
Author(s): Emanuele La Malfa ◽ Gabriele La Malfa ◽ Giuseppe Nicosia ◽ Vito Latora

2021 ◽ Vol 64 (11)
Author(s): Wei Wu ◽ Xiaoyuan Jing ◽ Wencai Du ◽ Guoliang Chen

2011 ◽ Vol 88 (7) ◽ pp. 1327-1346
Author(s): Zhenkun Huang ◽ Chunhua Feng ◽ Sannay Mohamad ◽ Jinglong Ye

2019 ◽ Vol 116 (23) ◽ pp. 11537-11546
Author(s): Andrew M. Saxe ◽ James L. McClelland ◽ Surya Ganguli

An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: What are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep-learning dynamics to give rise to these regularities.
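A minimal numerical sketch of the stage-like dynamics described above, under assumed details that are not taken from the paper's own code or data (a hand-made four-item hierarchy, one-hot inputs, small random initialization, full-batch gradient descent): a two-layer linear network is trained on the toy data while the strength of each singular mode of the input-output correlation matrix is tracked; stronger modes are learned earlier, each along its own sigmoidal curve.

```python
# Toy sketch (assumptions, not the authors' code): stage-like learning of the
# singular modes of the input-output correlation matrix in a deep linear net.
import numpy as np

rng = np.random.default_rng(0)

# One-hot inputs for 4 items; output properties encode a two-level hierarchy.
X = np.eye(4)
Y = np.array([
    [1, 1, 1, 1],   # property shared by all items (strongest mode, learned first)
    [1, 1, 0, 0],   # subgroup A
    [0, 0, 1, 1],   # subgroup B
    [1, 0, 0, 0],   # item-specific properties (weakest modes, learned last)
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

n_in, n_hidden, n_out = 4, 16, Y.shape[0]
W1 = 1e-2 * rng.standard_normal((n_hidden, n_in))
W2 = 1e-2 * rng.standard_normal((n_out, n_hidden))

# Singular modes of the input-output correlation matrix Sigma = Y X^T / P.
P = X.shape[1]
U, S, Vt = np.linalg.svd(Y @ X.T / P, full_matrices=False)

lr, steps = 0.02, 1200
trajectory = []
for t in range(steps):
    E = Y - W2 @ W1 @ X                      # full-batch error
    W1 += lr * W2.T @ E @ X.T / P            # gradient descent on squared error
    W2 += lr * E @ (W1 @ X).T / P
    # Strength of each correlation mode in the network's current input-output map.
    trajectory.append(np.diag(U.T @ (W2 @ W1) @ Vt.T))

trajectory = np.array(trajectory)
print(np.round(trajectory[::150], 2))
```

Printing (or plotting) the trajectory shows each column rising in turn toward its target singular value, which is the staggered, transition-like acquisition of progressively finer distinctions that the abstract refers to.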


2021 ◽ pp. 1-50
Author(s): Arnaud Fanthomme ◽ Rémi Monasson

We study the learning dynamics and the representations emerging in recurrent neural networks (RNNs) trained to integrate one or multiple temporal signals. Combining analytical and numerical investigations, we characterize the conditions under which an RNN with N neurons learns to integrate D ≪ N scalar signals of arbitrary duration. We show, for linear, ReLU, and sigmoidal neurons, that the internal state lives close to a D-dimensional manifold, whose shape is related to the activation function. Each neuron therefore carries, to various degrees, information about the value of all integrals. We discuss the deep analogy between our results and the concept of mixed selectivity forged by computational neuroscientists to interpret cortical recordings.
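As an illustrative companion to the abstract, the sketch below assumes details not given there (a 64-unit ReLU RNN built with torch.nn.RNN, Gaussian scalar inputs, a cumulative-sum target, and Adam; it is not the authors' setup): it trains the network to integrate a single scalar signal and then inspects the singular values of the visited hidden states, which should concentrate on a small number of directions if the states indeed stay close to a low-dimensional manifold.

```python
# Sketch under assumed details: train a small ReLU RNN to output the running
# sum of a scalar input, then check the dimensionality of its hidden states.
import torch
import torch.nn as nn

torch.manual_seed(0)
N, T, batch = 64, 30, 128           # neurons, sequence length, batch size

class IntegratorRNN(nn.Module):
    def __init__(self, n_neurons):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=n_neurons,
                          nonlinearity='relu', batch_first=True)
        self.readout = nn.Linear(n_neurons, 1)

    def forward(self, x):
        h, _ = self.rnn(x)            # h: (batch, T, N) hidden trajectories
        return self.readout(h), h

model = IntegratorRNN(N)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1500):
    x = 0.1 * torch.randn(batch, T, 1)     # scalar input signal
    target = torch.cumsum(x, dim=1)         # running integral
    pred, _ = model(x)
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# With a single integrated signal, the visited hidden states should lie close
# to a one-dimensional manifold (roughly a pair of half-lines for ReLU units),
# so a couple of principal directions should capture most of the variance.
with torch.no_grad():
    x = 0.1 * torch.randn(batch, T, 1)
    _, h = model(x)
    H = h.reshape(-1, N)
    H = H - H.mean(dim=0)
    sv = torch.linalg.svdvals(H)
    var = sv ** 2 / (sv ** 2).sum()
    print("variance captured by top 2 directions:", var[:2].sum().item())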

