universal approximator
Recently Published Documents


TOTAL DOCUMENTS

25
(FIVE YEARS 6)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
Vol 11 (1) ◽  
pp. 427
Author(s):  
Sunghwan Moon

Deep neural networks have shown very successful performance in a wide range of tasks, but the theory of why they work so well is still at an early stage. Recently, the expressive power of neural networks, which is important for understanding deep learning, has received considerable attention. Classic results, provided by Cybenko, Barron, and others, state that a network with a single hidden layer and a suitable activation function is a universal approximator. More recently, researchers began to study how width affects the expressiveness of neural networks, i.e., universal approximation theorems for deep neural networks with the Rectified Linear Unit (ReLU) activation function and bounded width. Here, we show how any continuous function on a compact subset of ℝ^(n_in), n_in ∈ ℕ, can be approximated by a ReLU network whose hidden layers have at most n_in + 5 nodes, using an approximate identity.
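The abstract's claim rests on the fact that ReLU networks compute piecewise-linear functions, and piecewise-linear functions can approximate any continuous function on a compact set. A minimal sketch of this idea in one dimension (not the paper's bounded-width deep construction, but the classic single-hidden-layer version): a shallow ReLU network whose hidden-unit weights encode the slope changes of a piecewise-linear interpolant. The function names and knot spacing here are illustrative choices, not taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_approximator(f, a, b, n_knots):
    """Build a one-hidden-layer ReLU network that reproduces the
    piecewise-linear interpolant of f at evenly spaced knots on [a, b]."""
    xs = np.linspace(a, b, n_knots)
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)
    # Each hidden unit relu(x - x_i) turns on at knot x_i; its output
    # weight is the change in slope there (first unit carries the
    # initial slope).
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    def g(x):
        x = np.asarray(x, dtype=float)
        units = relu(x[..., None] - xs[:-1])  # hidden-layer activations
        return ys[0] + units @ coeffs          # affine output layer
    return g

# Approximate sin on [0, pi] with 49 ReLU units.
g = relu_approximator(np.sin, 0.0, np.pi, 50)
grid = np.linspace(0.0, np.pi, 1000)
err = np.max(np.abs(g(grid) - np.sin(grid)))
print(f"max error: {err:.5f}")
```

Increasing the number of knots drives the error to zero, which is the width-based tradeoff the newer bounded-width results invert: they fix the layer width (here, at most n_in + 5) and grow the depth instead.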


2019 ◽  
Vol 125 (3) ◽  
pp. 30004 ◽  
Author(s):  
E. Torrontegui ◽  
J. J. García-Ripoll
