Estimation of Approximating Rate for Neural Network in L_w^p Spaces

2012 ◽  
Vol 2012 ◽  
pp. 1-8
Author(s):  
Jian-Jun Wang ◽  
Chan-Yun Yang ◽  
Jia Jing

A class of Sobolev-type multivariate functions is approximated by a feedforward network with one hidden layer of sigmoidal units and a linear output. By adopting a set of orthogonal polynomial basis functions, and under certain assumptions on the activation functions of the neural network, an upper bound on the degree of approximation is obtained for this class of Sobolev functions. The results are helpful in understanding the approximation capability and topology construction of sigmoidal neural networks.
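Results of this kind are usually stated as an upper bound on the best approximation error achievable by networks with n sigmoidal hidden units. A schematic form of such a statement (not the paper's exact theorem; the smoothness index r, dimension d, weight function w and constant C are placeholders) is

\inf_{g \in \Phi_n} \| f - g \|_{L^p_w(K)} \;\le\; C\, n^{-r/d}, \qquad \Phi_n = \Big\{ x \mapsto \sum_{i=1}^{n} c_i\, \sigma(\langle a_i, x \rangle + b_i) \Big\},

which makes explicit the trade-off between the network width n and the smoothness r of the target class.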

2008 ◽  
Vol 20 (11) ◽  
pp. 2757-2791 ◽  
Author(s):  
Yoshifusa Ito

We have constructed one-hidden-layer neural networks capable of approximating polynomials and their derivatives simultaneously. Generally, neural network parameters trained at later steps of BP training are more difficult to optimize than those trained at the first step. Taking this fact into account, we suppressed the number of parameters of the former type. We measure the degree of approximation in both the uniform norm on compact sets and the Lp norm on the whole space with respect to probability measures.
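Simultaneous approximation of a function and its derivative by a one-hidden-layer network can be stated schematically as follows (a generic formulation, not the paper's exact result): for f in C^1(K) on a compact interval K and any epsilon > 0 there is g(x) = \sum_{i=1}^{n} c_i\, \sigma(w_i x + b_i) with

\sup_{x \in K} |f(x) - g(x)| + \sup_{x \in K} |f'(x) - g'(x)| < \varepsilon .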


2014 ◽  
Vol 556-562 ◽  
pp. 6081-6084
Author(s):  
Qian Huang ◽  
Wen Long Li ◽  
Jian Kang ◽  
Jun Yang

In this paper, based on an analysis of a variety of neural networks, a new type of pulse neural network is implemented on an FPGA [1]. The network adopts the sigmoid function as the nonlinear excitation function of its hidden layer; at the same time, to reduce the ROM table storage space and improve the efficiency of the look-up table [2], it adopts STAM-algorithm-based nonlinear storage. Altera's EDA tool Quartus II is used as the compilation and simulation platform, and the pulse neural network is realized on a Cyclone II series EP2C20F484C6 device. Finally, the XOR problem is used as an example for hardware simulation, and the simulation results are consistent with the theoretical values. The network provides a new way to improve the reliability and security of complex, nonlinear, time-varying and uncertain systems.
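The ROM-based look-up-table idea can be sketched in software: the sigmoid is precomputed on a quantized grid and evaluated by indexing instead of calling exp at run time. The Python sketch below only illustrates the table-lookup principle; the table size, input range and fixed-point details, as well as the STAM nonlinear storage scheme, are assumptions rather than the authors' implementation.

import numpy as np

# Precompute a sigmoid look-up table on a fixed input range.
# Table size and input range are illustrative choices, not the paper's values.
LUT_SIZE = 256
X_MIN, X_MAX = -8.0, 8.0
_grid = np.linspace(X_MIN, X_MAX, LUT_SIZE)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_grid))

def sigmoid_lut(x):
    """Approximate sigmoid(x) by indexing into the precomputed table."""
    x = np.clip(x, X_MIN, X_MAX)
    idx = np.round((x - X_MIN) / (X_MAX - X_MIN) * (LUT_SIZE - 1)).astype(int)
    return SIGMOID_LUT[idx]

# Quick check of the quantization error against the exact sigmoid.
xs = np.linspace(-10, 10, 1001)
exact = 1.0 / (1.0 + np.exp(-np.clip(xs, X_MIN, X_MAX)))
print("max LUT error:", np.max(np.abs(sigmoid_lut(xs) - exact)))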


2004 ◽  
Vol 4 (1) ◽  
pp. 143-146 ◽  
Author(s):  
D. J. Lary ◽  
M. D. Müller ◽  
H. Y. Mussa

Abstract. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models. One example is the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
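Reproducing this kind of four-input, one-hidden-layer regression is straightforward with standard tools. The sketch below uses scikit-learn rather than the paper's Quickprop/Fortran implementation, and the input arrays are random placeholders standing in for the (latitude, pressure, time of year, CH4 v.m.r.) training data, so it illustrates the architecture only.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Placeholder arrays: 4 inputs (latitude, pressure, day of year, CH4 v.m.r.)
# and an N2O v.m.r. target.  Real HALOE-style observations would go here.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 4))
y = 0.3 - 0.25 * X[:, 3] + 0.05 * X[:, 1]      # synthetic stand-in relation

X_scaled = StandardScaler().fit_transform(X)

# One hidden layer of eight sigmoidal (logistic) nodes, as in the abstract;
# Quickprop is not available here, so a standard solver is used instead.
net = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X_scaled, y)

print("correlation with training targets:",
      np.corrcoef(net.predict(X_scaled), y)[0, 1])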


2017 ◽  
Vol 26 (1) ◽  
pp. 103-113
Author(s):  
Eman Samir Bhaya ◽  
Zahraa Mahmoud Fadel

Neural network approximation is widely used in different applications; it is applied to solve many problems in computer science, engineering, physics, etc. The reason for the successful application of neural network approximation is the ability of neural networks to approximate arbitrary functions. In the last 30 years, many papers have been published showing that any continuous function defined on a compact subset of a Euclidean space of dimension greater than 1 can be approximated uniformly by a neural network with one hidden layer. Here we prove that any real function in L_p(C), defined on a compact and convex subset C of the Euclidean space, can be approximated by a sigmoidal neural network with one hidden layer, in what we call nearly exponential approximation.
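The object analysed is the standard one-hidden-layer sigmoidal network. In generic form (a schematic statement of the claim, not the paper's specific theorem or its "nearly exponential" refinement), the assertion is that for every f in L_p(C) and epsilon > 0 there exist n, c_i, w_i, b_i with

\Big\| f - \sum_{i=1}^{n} c_i \, \sigma(\langle w_i, \cdot \rangle + b_i) \Big\|_{L_p(C)} < \varepsilon .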


Acta Numerica ◽  
1994 ◽  
Vol 3 ◽  
pp. 145-202 ◽  
Author(s):  
S.W. Ellacott

This article starts with a brief introduction to neural networks for those unfamiliar with the basic concepts, together with a very brief overview of mathematical approaches to the subject. This is followed by a more detailed look at three areas of research which are of particular interest to numerical analysts. The first area is approximation theory. If K is a compact set in ℝ^n, for some n, then it is proved that a semilinear feedforward network with one hidden layer can uniformly approximate any continuous function in C(K) to any required accuracy. A discussion of known results and open questions on the degree of approximation is included. We also consider the relevance of radial basis functions to neural networks. The second area considered is that of learning algorithms. A detailed analysis of one popular algorithm (the delta rule) will be given, indicating why one implementation leads to a stable numerical process, whereas an initially attractive variant (essentially a form of steepest descent) does not. Similar considerations apply to the backpropagation algorithm. The effect of filtering and other preprocessing of the input data will also be discussed systematically. Finally, some applications of neural networks to numerical computation are considered.
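The delta rule analysed in the second part is the classical LMS-style update for a single linear unit: the weights move a small step against the gradient of the squared error on each presented example. A minimal sketch follows; the learning rate and the data are illustrative assumptions.

import numpy as np

def delta_rule(X, t, eta=0.05, epochs=100):
    """Train a single linear unit y = x @ w with the delta rule:
    w <- w + eta * (t_i - y_i) * x_i for each example i."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            y_i = x_i @ w
            w += eta * (t_i - y_i) * x_i
            # the article contrasts this per-example (online) update with a
            # batch steepest-descent variant, whose numerical behaviour differs
    return w

# Illustrative use: recover a known linear map from noisy samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
t = X @ np.array([0.5, -1.0, 2.0]) + 0.01 * rng.normal(size=200)
print(delta_rule(X, t))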


2013 ◽  
Vol 371 ◽  
pp. 812-816 ◽  
Author(s):  
Daniel Constantin Anghel ◽  
Nadia Belu

The paper presents a method of using a feed-forward neural network to rank a workplace in the manufacturing industry. Neural networks excel at capturing difficult non-linear relationships between the inputs and outputs of a system. The neural network is simulated with a simple simulator, SSNN. In this paper, we considered 6 input parameters relevant for workplace ranking: temperature, humidity, noise, luminosity, load and frequency. The neural network designed for the study presented in this paper has 6 input neurons, 13 neurons in the hidden layer and 1 neuron in the output layer. We also present some experimental results obtained through simulations.
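A 6-13-1 feed-forward architecture of the kind described can be sketched with standard tools; the sketch below uses scikit-learn rather than the SSNN simulator, the activation and training settings are assumptions, and the six-column input array is a random placeholder for the (temperature, humidity, noise, luminosity, load, frequency) measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: 6 workplace parameters per sample and one ranking score.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 6))   # temperature, humidity, noise, luminosity, load, frequency
y = X.mean(axis=1)               # illustrative stand-in for the ranking target

# 6 input features -> 13 hidden neurons -> 1 output neuron, as in the paper.
net = MLPRegressor(hidden_layer_sizes=(13,), activation="logistic",
                   solver="adam", max_iter=3000, random_state=0)
net.fit(X, y)
print("training R^2:", net.score(X, y))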


2021 ◽  
pp. 385-399
Author(s):  
Wilson Guasti Junior ◽  
Isaac P. Santos

Abstract. In this work we explore the use of deep learning models based on deep feedforward neural networks to solve ordinary and partial differential equations. The methodology is illustrated by solving a variety of initial and boundary value problems. The numerical results, obtained for different feedforward neural network structures, activation functions and minimization methods, were compared to each other and to the exact solutions. The neural networks were implemented in the Python language, with the TensorFlow library.
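The general recipe is to train a network u_theta(x) so that the differential-equation residual and the initial/boundary conditions are minimized together. The TensorFlow sketch below solves the simple ODE y' = -y with y(0) = 1 on [0, 2]; the specific equation, network size and optimizer are illustrative assumptions, not the problems studied in the chapter.

import numpy as np
import tensorflow as tf

# Small feedforward network representing the trial solution y(x).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])

x = tf.constant(np.linspace(0.0, 2.0, 200).reshape(-1, 1), dtype=tf.float32)
x0 = tf.zeros((1, 1))
optimizer = tf.keras.optimizers.Adam(1e-3)

for step in range(2000):
    with tf.GradientTape() as tape:
        with tf.GradientTape() as inner:
            inner.watch(x)
            y = model(x)
        dy_dx = inner.gradient(y, x)
        residual = dy_dx + y                 # enforce y' = -y
        ic = model(x0) - 1.0                 # enforce y(0) = 1
        loss = tf.reduce_mean(residual**2) + tf.reduce_mean(ic**2)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

print("y(1) ≈", float(model(tf.constant([[1.0]]))), "exact:", np.exp(-1.0))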


2003 ◽  
Vol 3 (6) ◽  
pp. 5711-5724 ◽  
Author(s):  
D. J. Lary ◽  
M. D. Müller ◽  
H. Y. Mussa

Abstract. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models. One example is the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.


2010 ◽  
Vol 44-47 ◽  
pp. 1402-1406
Author(s):  
Jian Jun Shi ◽  
La Wu Zhou ◽  
Ke Wen Kong ◽  
Yi Wang

In coal-rock interface recognition (CIR) technology, signal processing and recognition are the key parts. A method for CIR based on BP neural networks and fuzzy techniques is proposed in this paper. The hidden layer dimension of the network was determined by trial and error, and network training and weight modification were studied. In order to obtain a higher identification ratio, data fusion based on fuzzy neural networks (FNN) was studied, and the structure and algorithm of the FNN for CIR were determined. The results indicate that the test data can be used to train and simulate the neural network and the FNN, and that the proposed method can be used in CIR with a higher recognition ratio.
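The trial-and-error selection of the hidden layer size mentioned in the abstract amounts to training the same network with several candidate widths and keeping the one that validates best. A minimal sketch, with scikit-learn in place of the authors' BP implementation and random placeholder data instead of coal-rock signals:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Placeholder feature/label arrays standing in for processed coal-rock signals.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Trial-and-error over candidate hidden-layer sizes: keep the best CV score.
best_size, best_score = None, -np.inf
for size in (4, 8, 12, 16, 24):
    clf = MLPClassifier(hidden_layer_sizes=(size,), max_iter=2000, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    if score > best_score:
        best_size, best_score = size, score
print("chosen hidden layer size:", best_size, "CV accuracy:", round(best_score, 3))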


2014 ◽  
Vol 28 (19) ◽  
pp. 1450118 ◽  
Author(s):  
Huaguang Zhang ◽  
Yujiao Huang ◽  
Tiaoyang Cai ◽  
Zhanshan Wang

In this paper, multistability is discussed for delayed recurrent neural networks with ring structure and multi-step piecewise linear activation functions. Sufficient criteria are obtained to check the existence of multiple equilibria. A lemma is proposed to explore the number and crossing direction of purely imaginary roots of the characteristic equation corresponding to the neural network model. The stability of all equilibria is investigated. The work improves and extends existing stability results in the literature. Finally, two examples are given to illustrate the effectiveness of the obtained results.
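The class of systems studied is of the delayed recurrent type. A generic model of this form (the ring coupling pattern and the multi-step piecewise linear activation f are specified in the paper, so the version below is only a schematic template) reads

\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f\big(x_j(t)\big) + \sum_{j=1}^{n} c_{ij} f\big(x_j(t-\tau_{ij})\big) + I_i , \qquad i = 1, \dots, n,

and multistability results count and classify the equilibria of such a system when f has several saturation levels.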

