Universal Approximators

Author(s):  
Ovidiu Calin


2016 ◽  
Vol 2016 ◽  
pp. 1-11 ◽  
Author(s):  
Ehsan Ardjmand ◽  
David F. Millie ◽  
Iman Ghalehkhondabi ◽  
William A. Young II ◽  
Gary R. Weckman

Artificial neural networks (ANNs) are powerful empirical approaches used to model databases with a high degree of accuracy. Despite their recognition as universal approximators, many practitioners are skeptical about adopting them routinely due to a lack of model transparency. To improve the clarity of model predictions and correct this apparent lack of comprehension, researchers have utilized a variety of methodologies to extract the underlying variable relationships within ANNs, such as sensitivity analysis (SA). The theoretical basis of local SA (that predictors are independent and inputs other than the variable of interest remain “fixed” at predefined values) is challenged in global SA, where, in addition to altering the attribute of interest, the remaining predictors are varied concurrently across their respective ranges. Here, a regression-based global methodology, state-based sensitivity analysis (SBSA), is proposed for measuring the importance of predictor variables upon a modeled response within ANNs. SBSA was applied to network models of a synthetic database having a defined structure and exhibiting multicollinearity. SBSA achieved the most accurate portrayal of predictor-response relationships (compared to local SA and Connected Weights Analysis), closely approximating the actual variability of the modeled system. From this, it is anticipated that skepticism concerning the delineation of predictor influences and their uncertainty domains upon a modeled output within ANNs will be curtailed.
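As a rough illustration of the local-versus-global distinction drawn in the abstract, the Python sketch below perturbs one predictor at a time around a fixed baseline (local SA) and then varies all predictors concurrently across their ranges (a crude correlation-based global index). The model function, the ranges, and the indices are assumptions made for illustration; this is not the SBSA method itself.

    import numpy as np

    # Illustrative sketch only: one-at-a-time (local) perturbation versus a simple
    # global analysis that varies all predictors at once. The model is a stand-in
    # black box, not an actual trained ANN or the SBSA method from the abstract.

    def model(x):
        # hypothetical trained network; replace with an ANN's predict function
        return 3.0 * x[..., 0] + 0.5 * x[..., 1] ** 2 + 0.1 * x[..., 2]

    rng = np.random.default_rng(0)
    n_features = 3
    baseline = np.full(n_features, 0.5)          # predictors held "fixed" for local SA
    lows, highs = np.zeros(n_features), np.ones(n_features)

    # Local SA: perturb one input around the baseline while the others stay fixed.
    local = []
    for j in range(n_features):
        grid = np.tile(baseline, (50, 1))
        grid[:, j] = np.linspace(lows[j], highs[j], 50)
        local.append(model(grid).std())

    # Global SA: vary all predictors concurrently across their ranges and measure
    # how strongly the output co-varies with each input.
    samples = rng.uniform(lows, highs, size=(5000, n_features))
    y = model(samples)
    global_ = [abs(np.corrcoef(samples[:, j], y)[0, 1]) for j in range(n_features)]

    print("local  :", np.round(local, 3))
    print("global :", np.round(global_, 3))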


2020 ◽  
Vol 32 (11) ◽  
pp. 2249-2278
Author(s):  
Changcun Huang

This letter proves that a ReLU network can approximate any continuous function with arbitrary precision by means of piecewise linear or constant approximations. For a univariate function f(x), we use a composite of ReLUs to produce a line segment; all of the subnetworks of line segments together comprise a ReLU network that is a piecewise linear approximation to f(x). For a multivariate function f(x1, …, xn), ReLU networks are constructed to approximate a piecewise linear function derived from triangulation methods approximating f(x1, …, xn). A neural unit called TRLU is constructed from a ReLU network; piecewise constant approximations, such as Haar wavelets, are implemented by rectifying the linear output of a ReLU network via TRLUs. New interpretations of deep layers, as well as some other results, are also presented.
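The univariate idea can be pictured with a small numerical sketch: a one-hidden-layer network of ReLUs placed at chosen breakpoints reproduces the piecewise linear interpolant of a target function. The target sin, the breakpoints, and the weighting scheme below are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    # Illustrative sketch: a one-hidden-layer ReLU network whose units sit at
    # breakpoints t_k reproduces the piecewise linear interpolant of f at those knots.

    f = np.sin                               # target univariate function (assumed example)
    knots = np.linspace(0.0, np.pi, 8)       # breakpoints of the approximation
    values = f(knots)

    slopes = np.diff(values) / np.diff(knots)            # slope on each segment
    weights = np.diff(np.concatenate(([0.0], slopes)))   # slope change introduced at each knot

    def relu(z):
        return np.maximum(z, 0.0)

    def relu_net(x):
        # f_hat(x) = f(t_0) + sum_k w_k * relu(x - t_k): continuous, piecewise linear
        x = np.asarray(x)[..., None]
        return values[0] + (weights * relu(x - knots[:-1])).sum(axis=-1)

    xs = np.linspace(0.0, np.pi, 200)
    print("max abs error:", np.abs(relu_net(xs) - f(xs)).max())

Adding more knots shrinks the error, which is the sense in which the piecewise linear ReLU construction approximates the target to arbitrary precision on a bounded interval.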


2009 ◽  
Vol 05 (01) ◽  
pp. 265-286
Author(s):  
MUSTAFA C. OZTURK ◽  
JOSE C. PRINCIPE

Walter Freeman, in his classic 1975 book "Mass Action in the Nervous System," presented a hierarchy of dynamical computational models based on studies and measurements done in real brains, which has become known as Freeman's K model (FKM). Much more recently, the liquid state machine (LSM) and the echo state network (ESN) have been proposed as universal approximators in the class of functionals with exponentially decaying memory. In this paper, we briefly review these models and show that the restricted K set architecture of KI and KII networks shares the same properties as LSMs/ESNs and is therefore one more member of the reservoir computing family. From the reservoir computing perspective, the states of the FKM form a representation space that stores in its spatio-temporal dynamics a short-term history of the input patterns. At any time, with a simple instantaneous readout made up of a KI, information related to the input history can be accessed and read out. This work provides two important contributions. First, it emphasizes the need for optimal readouts and shows how to adaptively design them. Second, it shows that the Freeman model is able to process continuous signals with temporal structure. We provide theoretical results for the conditions on the system parameters of the FKM satisfying the echo state property. Experimental results are presented to illustrate the validity of the proposed approach.
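A minimal generic echo state network sketch (not the Freeman K model itself) illustrates the reservoir-computing reading of this abstract: a fixed recurrent reservoir with spectral radius below 1, a commonly used sufficient condition associated with the echo state property, stores a short-term input history, and a linear readout is fitted by ridge regression. The reservoir size, gains, and delayed-input task below are assumptions for illustration.

    import numpy as np

    # Generic ESN sketch: fixed random reservoir + adaptively fitted linear readout.

    rng = np.random.default_rng(1)
    n_res, n_in = 100, 1

    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

    u = rng.uniform(-1, 1, (1000, n_in))               # input signal
    y = np.roll(u[:, 0], 3)                            # toy target: input delayed by 3 steps

    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])               # reservoir stores recent input history
        states[t] = x

    # Ridge-regression readout: the "adaptive readout design" step in the reservoir view
    reg = 1e-6
    W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res), states.T @ y)
    pred = states @ W_out
    print("train MSE:", np.mean((pred[100:] - y[100:]) ** 2))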


1995 ◽  
Vol 03 (04) ◽  
pp. 1177-1191 ◽  
Author(s):  
HÉLÈNE PAUGAM-MOISY

This article is a survey of recent advances on multilayer neural networks. The first section is a short summary of multilayer neural networks: their history, their architecture, and their learning rule, the well-known back-propagation. In the following section, several theorems are cited which present one-hidden-layer neural networks as universal approximators. The next section points out that two hidden layers are often required for exactly realizing d-dimensional dichotomies. Defining the frontier between one-hidden-layer and two-hidden-layer networks is still an open problem. Several bounds on the size of a multilayer network which learns from examples are presented, and we emphasize the fact that, even if everything can be done with only one hidden layer, things can often be done better with two or more hidden layers. Finally, this assertion is supported by the behaviour of multilayer neural networks in two applications: prediction of pollution and odor recognition modelling.
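The dichotomy discussion can be made concrete with the classic XOR example: no network without a hidden layer realizes it, while one hidden layer with two units suffices. The step activation and hand-picked weights below are one standard construction, shown only as an illustration.

    import numpy as np

    # Toy illustration: XOR needs a hidden layer; two hidden units are enough.

    def step(z):
        return (z >= 0).astype(float)

    def xor_net(x):
        # hidden unit 1 fires on x1 OR x2; hidden unit 2 fires on x1 AND x2
        h = step(x @ np.array([[1.0, 1.0], [1.0, 1.0]]).T + np.array([-0.5, -1.5]))
        # output fires when OR is on and AND is off
        return step(h @ np.array([1.0, -1.0]) - 0.5)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    print(xor_net(X))   # expected [0, 1, 1, 0]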


2010 ◽  
Vol 22 (8) ◽  
pp. 2192-2207 ◽  
Author(s):  
Nicolas Le Roux ◽  
Yoshua Bengio

Deep belief networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton, Osindero, and Teh (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal approximation. Exploiting the proof technique, we prove that deep but narrow feedforward neural networks with sigmoidal units can represent any Boolean expression.
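To illustrate the closing claim in spirit, a steep (large-weight) sigmoid unit behaves like a Boolean gate, so a Boolean expression can be wired gate by gate into a deep, narrow sigmoidal network. The gain K, the gate encodings, and the example formula (x1 AND x2) OR (NOT x3) below are assumptions for illustration, not the construction from the paper.

    import numpy as np

    # Illustrative sketch: each sigmoid unit with large weights acts as a Boolean gate,
    # so a formula maps onto a deep, narrow network of sigmoid units, gate by gate.

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    K = 20.0  # gain; larger values push the unit closer to a hard Boolean gate

    def AND(a, b):
        return sigmoid(K * (a + b - 1.5))

    def OR(a, b):
        return sigmoid(K * (a + b - 0.5))

    def NOT(a):
        return sigmoid(K * (0.5 - a))

    for x1 in (0, 1):
        for x2 in (0, 1):
            for x3 in (0, 1):
                out = OR(AND(x1, x2), NOT(x3))    # layer 1: AND, NOT; layer 2: OR
                print((x1, x2, x3), round(float(out)))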


2000 ◽  
Vol 13 (6) ◽  
pp. 561-563 ◽  
Author(s):  
J.L. Castro ◽  
C.J. Mantas ◽  
J.M. Benítez
