An introduction to the mathematical theory of neural networks

Author(s): S. Albeverio ◽ B. Tirozzi

2000 ◽ Author(s): Eduardo D. Sontag ◽ Hector J. Sussmann

1997 ◽ Author(s): Eduardo Sontag ◽ Hector Sussmann

1968 ◽ Vol 2 (4) ◽ pp. 395-397 ◽ Author(s): E.R. Caianiello

Acta Numerica ◽ 2021 ◽ Vol 30 ◽ pp. 203-248 ◽ Author(s): Mikhail Belkin

In the past decade the mathematical theory of machine learning has lagged far behind the triumphs of deep neural networks on practical challenges. However, the gap between theory and practice is gradually starting to close. In this paper I will attempt to assemble some pieces of the remarkable and still incomplete mathematical mosaic emerging from the efforts to understand the foundations of deep learning. The two key themes will be interpolation and its sibling over-parametrization. Interpolation corresponds to fitting data, even noisy data, exactly. Over-parametrization enables interpolation and provides flexibility to select a suitable interpolating model. As we will see, just as a physical prism separates colours mixed within a ray of light, the figurative prism of interpolation helps to disentangle generalization and optimization properties within the complex picture of modern machine learning. This article is written in the belief and hope that a clearer understanding of these issues will bring us a step closer towards a general theory of deep learning and machine learning.
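To make the two themes concrete, the short Python sketch below (a hypothetical example, not taken from the paper) fits 20 noisy observations exactly with a 200-parameter random-feature model: over-parametrization (many more parameters than samples) makes exact interpolation possible, and the minimum-norm solution is one particular choice among the infinitely many interpolants such flexibility allows.

```python
import numpy as np

# Hypothetical sketch (not from the paper): an over-parametrized
# random-feature model interpolates noisy data exactly.
rng = np.random.default_rng(0)

n, d = 20, 200                                          # 20 samples, 200 parameters (d >> n)
x = np.linspace(-1.0, 1.0, n)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(n)    # noisy targets

# Random cosine features: d basis functions evaluated at the n inputs.
w = rng.standard_normal(d)
b = rng.uniform(0.0, 2.0 * np.pi, size=d)
Phi = np.cos(np.outer(x, w) + b)                        # design matrix, shape (n, d)

# Minimum-norm interpolant: among the infinitely many coefficient vectors
# that fit the data exactly, the pseudo-inverse selects the smallest one.
theta = np.linalg.pinv(Phi) @ y

print("max training residual:", np.max(np.abs(Phi @ theta - y)))  # ~0: exact fit of noisy data
print("parameters:", d, "samples:", n)
```

The sketch is only meant to make the vocabulary concrete: exact fitting of noisy data (interpolation) becomes trivially attainable once the model has more parameters than samples (over-parametrization), and the interesting questions the article addresses concern which interpolant is selected and how it generalizes.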

