On Functional Approximation with Normalized Gaussian Units

1994 ◽  
Vol 6 (2) ◽  
pp. 319-333 ◽  
Author(s):  
Michel Benaim

Feedforward neural networks with a single hidden layer using normalized Gaussian units are studied. It is proved that such neural networks are capable of universal approximation in a satisfactory sense. Then, a hybrid learning rule as proposed by Moody and Darken, which combines unsupervised learning of the hidden units and supervised learning of the output units, is considered. Using the method of ordinary differential equations for adaptive algorithms (ODE method), it is shown that the asymptotic properties of the learning rule may be studied in terms of an autonomous cascade of dynamical systems. Recent results of Hirsch about cascades are then used to show the asymptotic stability of the learning rule.
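A minimal sketch of the setup described above may help: normalized Gaussian hidden units followed by a linear output layer, with the centers adapted by a simple unsupervised (competitive) step and the output weights by a supervised LMS step. The function names, the single fixed width sigma, the learning rate, and the online center update are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def normalized_gaussian_activations(x, centers, sigma):
    """Normalized Gaussian hidden activations (Gaussian responses divided by their sum)."""
    d2 = np.sum((centers - x) ** 2, axis=1)        # squared distances to all centers
    g = np.exp(-d2 / (2.0 * sigma ** 2))           # plain Gaussian responses
    return g / np.sum(g)                           # normalization step

def hybrid_train(X, y, n_hidden=10, sigma=0.5, lr=0.05, epochs=50, seed=0):
    """Hybrid rule sketch: unsupervised center adaptation + supervised output-weight LMS."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_hidden, replace=False)].copy()
    w = np.zeros(n_hidden)                          # linear output weights
    for _ in range(epochs):
        for x, t in zip(X, y):
            # Unsupervised step: move the winning center toward the input.
            k = np.argmin(np.sum((centers - x) ** 2, axis=1))
            centers[k] += lr * (x - centers[k])
            # Supervised step: LMS update of the output layer.
            a = normalized_gaussian_activations(x, centers, sigma)
            w += lr * (t - w @ a) * a
    return centers, w

# Usage: approximate sin on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
centers, w = hybrid_train(X, y)
pred = np.array([w @ normalized_gaussian_activations(x, centers, 0.5) for x in X])
print("max abs error:", np.max(np.abs(pred - y)))
```

The normalization makes the hidden activations sum to one, which is what distinguishes these units from ordinary radial basis functions.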

2017 ◽  
Author(s):  
Namig J. Guliyev ◽  
Vugar E. Ismailov

Feedforward neural networks have wide applicability in various disciplines of science due to their universal approximation property. Some authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property, provided that the approximated functions are univariate. This result, however, places no restriction on the number of neurons in the hidden layer: the larger this number, the more likely the network is to give precise results. In this note, we constructively prove that SLFNs with the fixed weight 1 and two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line. The applicability of this result is demonstrated in various numerical examples. Finally, we show that SLFNs with fixed weights cannot approximate all continuous multivariate functions.
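For concreteness, here is a sketch of the network form in question: a single hidden layer with the input weight fixed to 1 and only two hidden neurons, so the network reduces to N(x) = c0 + c1*s(x - t1) + c2*s(x - t2). A standard logistic sigmoid is used below purely to illustrate this form; it is not the activation function constructed in the note, and the parameter values are illustrative.

```python
import numpy as np

def sigmoid(z):
    """Placeholder activation; the note's constructive proof uses its own activation."""
    return 1.0 / (1.0 + np.exp(-z))

def two_neuron_slfn(x, c0, c1, t1, c2, t2):
    """SLFN with input weight fixed to 1: hidden units receive x shifted by thresholds."""
    return c0 + c1 * sigmoid(x - t1) + c2 * sigmoid(x - t2)

# Usage: a crude, hand-picked fit of a bump-like shape on [-3, 3].
x = np.linspace(-3, 3, 7)
print(two_neuron_slfn(x, c0=0.0, c1=1.0, t1=-1.0, c2=-1.0, t2=1.0))
```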


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 9540-9557 ◽  
Author(s):  
Habtamu Zegeye Alemu ◽  
Junhong Zhao ◽  
Feng Li ◽  
Wei Wu

1995 ◽  
Vol 03 (04) ◽  
pp. 1177-1191 ◽  
Author(s):  
HÉLÈNE PAUGAM-MOISY

This article is a survey of recent advances on multilayer neural networks. The first section is a short summary of multilayer neural networks: their history, their architecture, and their learning rule, the well-known back-propagation algorithm. In the following section, several theorems are cited which present one-hidden-layer neural networks as universal approximators. The next section points out that two hidden layers are often required for exactly realizing d-dimensional dichotomies. Defining the frontier between one-hidden-layer and two-hidden-layer networks is still an open problem. Several bounds on the size of a multilayer network that learns from examples are presented, and we emphasize the fact that, even if everything can be done with only one hidden layer, things can often be done better with two or more hidden layers. Finally, this assertion is supported by the behaviour of multilayer neural networks in two applications: pollution prediction and odor recognition modelling.
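Since the survey centers on one-hidden-layer networks trained by back-propagation, a minimal sketch of that setup on a toy two-dimensional dichotomy (XOR) is given below. The layer sizes, learning rate, squared-error loss, and iteration count are illustrative choices, not taken from the survey.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 2-d dichotomy: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=4), 0.0                # hidden -> output
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                    # hidden activations
    out = sigmoid(h @ W2 + b2)                  # network output
    err = out - y                               # squared-error gradient w.r.t. out
    # Back-propagation: chain rule through the output layer, then the hidden layer.
    d_out = err * out * (1 - out)
    d_h = np.outer(d_out, W2) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum()
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))                         # should approach [0, 1, 1, 0]
```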


Author(s):  
JUN WANG

Asymptotic properties of recurrent neural networks for optimization are analyzed. Specifically, the asymptotic stability of recurrent neural networks with monotonically time-varying penalty parameters for optimization is proven, and sufficient conditions for the feasibility and optimality of the solutions generated by these networks are characterized. A design methodology for recurrent neural networks that solve optimization problems is discussed. Operating characteristics of the recurrent neural networks are also presented using illustrative examples.
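As a rough illustration of the kind of network analyzed above, the following sketch integrates a gradient flow whose penalty parameter increases monotonically over time, applied to a small convex program. The specific problem, the penalty schedule p(t), and the Euler discretization are assumptions for illustration, not the paper's design.

```python
import numpy as np

# Problem: minimize f(x) = ||x - (2, 2)||^2  subject to  x1 + x2 <= 2.
def grad_f(x):
    return 2.0 * (x - np.array([2.0, 2.0]))

def g(x):                       # constraint function, feasible when g(x) <= 0
    return x[0] + x[1] - 2.0

grad_g = np.array([1.0, 1.0])   # gradient of the (linear) constraint

x = np.zeros(2)                 # initial network state
dt = 1e-3                       # Euler step size
for k in range(200_000):
    p = 1.0 + 1e-3 * k          # monotonically increasing penalty parameter
    # Network dynamics: gradient flow of f plus a penalty on constraint violation.
    dx = -(grad_f(x) + p * max(0.0, g(x)) * grad_g)
    x = x + dt * dx             # Euler discretization of the continuous-time network

print(np.round(x, 3))           # expected to approach the optimum (1, 1)
```

As the penalty parameter grows, the equilibrium of the flow drifts toward the constrained optimum, which mirrors the role of the time-varying penalty discussed in the abstract.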

