The Effects of Adding Noise During Backpropagation Training on a Generalization Performance

1996 ◽  
Vol 8 (3) ◽  
pp. 643-674 ◽  
Author(s):  
Guozhong An

We study the effects of adding noise to the inputs, outputs, weight connections, and weight changes of multilayer feedforward neural networks during backpropagation training. We rigorously derive and analyze the objective functions that are minimized by the noise-affected training processes. We show that input noise and weight noise encourage the neural-network output to be a smooth function of the input or its weights, respectively. In the weak-noise limit, noise added to the output of the neural networks only changes the objective function by a constant. Hence, it cannot improve generalization. Input noise introduces penalty terms in the objective function that are related to, but distinct from, those found in the regularization approaches. Simulations have been performed on a regression and a classification problem to further substantiate our analysis. Input noise is found to be effective in improving the generalization performance for both problems. However, weight noise is found to be effective in improving the generalization performance only for the classification problem. Other forms of noise have practically no effect on generalization.
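The input-noise scheme analyzed here can be sketched as a training loop in which fresh Gaussian noise perturbs the inputs at every gradient step; the toy regression task, layer sizes, noise level, and learning rate below are all illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-2, 2] (an assumption).
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(X)

# One-hidden-layer feedforward network; sizes are illustrative.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

lr, sigma = 0.05, 0.1   # learning rate and input-noise std (assumptions)
loss_before = mse(forward(X)[1], y)

for _ in range(500):
    Xn = X + rng.normal(0.0, sigma, X.shape)  # fresh noise on inputs only
    h, pred = forward(Xn)
    err = 2.0 * (pred - y) / len(X)           # dLoss/dpred
    dW2 = h.T @ err;  db2 = err.sum(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    dW1 = Xn.T @ dh;  db1 = dh.sum(0)
    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1

# Generalization is judged on the clean (noise-free) inputs.
loss_after = mse(forward(X)[1], y)
```

In the weak-noise limit this procedure behaves like minimizing the clean objective plus a smoothness penalty on the learned mapping, which is the effect the analysis makes precise.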

2020 ◽  
Vol 5 (9) ◽  
pp. 1124-1130
Author(s):  
Ledisi Giok Kabari ◽  
Young Claudius Mazi

Climate change generates many direct and indirect effects on the environment, some with serious consequences. Rain-induced flooding is one of the direct effects of climate change, and its impact on the environment is usually devastating and worrisome. Floods are among the most commonly occurring disasters and have caused significant damage to life, agriculture, and the economy. They usually occur in areas with excessive downpour and poor drainage systems. The study uses a multilayer feedforward neural network to perform short-term prediction of rain-induced flooding for the Niger Delta sub-region of Nigeria, given previous rainfall data for a specified period of time. The data for training and testing the neural network were sourced from the Weather Underground official website, https://www.wunderground.com. An iterative methodology was used and implemented in MATLAB. The study accurately predicts rain-induced flooding for the Niger Delta sub-region of Nigeria.
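The "previous rainfall data for a specified period" setup can be sketched as a sliding-window transformation that turns a rainfall series into supervised training pairs; the lag length and sample readings are illustrative assumptions, not the study's data:

```python
def make_windows(series, lag):
    """Turn a rainfall series into (input, target) pairs: each input is
    `lag` consecutive readings, the target is the next reading."""
    X, y = [], []
    for i in range(len(series) - lag):
        X.append(series[i:i + lag])
        y.append(series[i + lag])
    return X, y

# Illustrative daily rainfall readings in mm (not the study's data).
rain = [2.0, 0.0, 5.5, 12.3, 8.1, 0.4, 3.2]
X, y = make_windows(rain, lag=3)
```

Each `X[i]` then feeds the feedforward network's input layer, and `y[i]` is the value it learns to predict.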


Author(s):  
Nikolay Anatolievich Vershkov ◽  
Mikhail Grigoryevich Babenko ◽  
Viktor Andreevich Kuchukov ◽  
Natalia Nikolaevna Kuchukova

The article deals with the problem of recognizing handwritten digits using feedforward neural networks (perceptrons) with a correlation indicator. The proposed method is based on a mathematical model of the neural network as an oscillatory system similar to an information transmission system. The article uses the authors' theoretical developments on searching for the global extremum of the error function in artificial neural networks. The handwritten digit image is considered as a one-dimensional discrete input signal representing a combination of "perfect digit writing" and noise, which describes the deviation of the input realization from "perfect writing". The ideal observer criterion (Kotelnikov criterion), which is widely used in information transmission systems and describes the probability of correct recognition of the input signal, is used to form the loss function. The article carries out a comparative analysis of learning convergence on experimentally obtained sequences, using the correlation indicator and the CrossEntropyLoss function widely used in classification tasks, both with and without an optimizer. Based on the experiments, it is concluded that the proposed correlation indicator offers a two- to threefold advantage.
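A correlation-style indicator between a network output and a "perfect writing" template can be sketched as normalized cross-correlation; the normalization and the toy signals below are assumptions for illustration, not the authors' exact formula:

```python
import numpy as np

def correlation_indicator(output, template):
    """Normalized cross-correlation between a 1-D network output and the
    'perfect digit writing' template: 1 = identical shape, -1 = inverted."""
    o = output - output.mean()
    t = template - template.mean()
    return float(o @ t / (np.linalg.norm(o) * np.linalg.norm(t)))

template = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0])      # 'perfect writing'
rng = np.random.default_rng(1)
noisy = template + rng.normal(0.0, 0.1, template.shape)  # slight deviation
other = 1.0 - template                                   # a different digit
```

A noisy realization of the right digit scores close to 1, while a mismatched pattern scores much lower, which is what makes the indicator usable as a recognition criterion.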


2020 ◽  
Vol 9 (2) ◽  
pp. 285
Author(s):  
Putu Wahyu Tirta Guna ◽  
Luh Arida Ayu Ayu Rahning Putri

Not many people know that endek cloth has four known variants. Nowadays, computing and classification algorithms can be applied to classification problems that take feature data as input, and this computing power can be used to digitalize endek patterns. The feature-extraction algorithm used in this research is GLCM (gray-level co-occurrence matrix); the extracted features serve as input for the neural network model. There are many optimizer algorithms available for the backpropagation phase. In this research we use Adam, one of the newest and most popular optimizer algorithms, and compare its performance with SGD, an older but still popular optimizer. We find that Adam achieves 33% accuracy, better than the 23% accuracy achieved by SGD. Longer training (more epochs) also affects overall model accuracy.
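The GLCM feature-extraction step can be sketched as a co-occurrence count over pixel pairs at a fixed offset; the tiny 3-level image and the horizontal offset are illustrative assumptions:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for a single pixel offset:
    M[i, j] counts how often gray level i occurs with level j at
    offset (dy, dx)."""
    M = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            M[img[i, j], img[i + dy, j + dx]] += 1
    return M

# Tiny 3-level image (an assumption) with a horizontal offset of 1.
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 0]])
M = glcm(img, levels=3)
```

Texture statistics (contrast, homogeneity, energy, correlation) computed from `M` are what would feed the neural network as features.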


2000 ◽  
Vol 12 (4) ◽  
pp. 811-829 ◽  
Author(s):  
Eric Hartman

Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
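The penalty-term idea can be sketched with a one-parameter model y = w·x, whose input-output gain dy/dx is simply w; the gain bound, penalty weight, and data below are assumptions, chosen so that the constraint is deliberately inconsistent with the data:

```python
# One-parameter model y = w * x, so the input-output gain dy/dx is w.
# The data has slope 3 while the gain bound is [0, 2], i.e. the
# constraint conflicts with the data on purpose.
xs = [1.0, 2.0, 3.0]
ys = [3.0 * x for x in xs]
lo, hi, pw = 0.0, 2.0, 100.0   # gain bounds and penalty weight (assumptions)

w, lr = 0.0, 0.005
for _ in range(4000):
    # Gradient of the mean-squared data error.
    g = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # Gradient of the quadratic gain penalty, active only outside the bounds.
    if w > hi:
        g += 2.0 * pw * (w - hi)
    elif w < lo:
        g += 2.0 * pw * (w - lo)
    w -= lr * g
# w settles just above the bound, where the data term and the penalty
# term balance; a larger penalty weight pushes it closer to hi = 2.
```

Balancing the relative strengths of the two terms is exactly the situation the adaptive procedures in the paper address: with inconsistent constraints, the final gain depends on how heavily the penalty is weighted against the data error.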


Author(s):  
Ngoc-Bach Hoang ◽  
Hee-Jun Kang

In this paper, we present a novel method for fault identification in the case of an incipient wheel fault in mobile robots. First, a three-layer neural network is established to estimate the deviation of the robot dynamics due to the process fault. The estimate of the faulty dynamic model is based on a combination of the nominal dynamic model and the neural network output. Then, by replacing the faulty dynamic model with its estimated value, the primary estimates of the wheel radius appear as the solutions of two quadratic equations. Next, a simple and efficient way to select among these primary estimates is proposed in order to eliminate undesired ones. A recursive nonlinear least-squares procedure is then applied to obtain a smooth estimate of the wheel radius. Two computer simulation examples using MATLAB/Simulink show that the proposed method is very effective for incipient fault identification in the setting of both left and right wheel faults.
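The final smoothing step can be sketched as a scalar exponentially weighted recursive least-squares filter over the selected primary estimates; the forgetting factor and the sample values are illustrative assumptions:

```python
def rls_scalar(estimates, lam=0.95):
    """Exponentially weighted recursive least squares for a constant
    scalar parameter (here the wheel radius): each new primary estimate
    nudges the running value with gain K, older data decaying by lam."""
    theta, P = estimates[0], 1.0
    for z in estimates[1:]:
        K = P / (lam + P)
        theta += K * (z - theta)
        P = (1.0 - K) * P / lam
    return theta

# Illustrative noisy primary estimates of a 0.10 m wheel radius.
primary = [0.110, 0.090, 0.100, 0.105, 0.095]
radius = rls_scalar(primary)
```

The recursive form processes one estimate at a time, which suits online fault identification better than re-solving a batch least-squares problem at every step.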


2021 ◽  
pp. 385-399
Author(s):  
Wilson Guasti Junior ◽  
Isaac P. Santos

In this work we explore the use of deep learning models based on deep feedforward neural networks to solve ordinary and partial differential equations. The methodology is illustrated by solving a variety of initial and boundary value problems. The numerical results, obtained with different feedforward neural network structures, activation functions, and minimization methods, were compared to each other and to the exact solutions. The neural network was implemented in the Python language, with the TensorFlow library.
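A common construction for solving an initial value problem with a feedforward network is a trial solution that satisfies the initial condition by design, so the optimizer only has to drive the ODE residual to zero. This Lagaris-style sketch (the ODE u' + u = 0 with u(0) = 1 and the finite-difference derivative are assumptions, not necessarily this paper's setup) shows the construction:

```python
import math

U0 = 1.0  # initial condition u(0) = 1

def trial(x, net):
    """Trial solution u(x) = u0 + x * N(x): it satisfies the initial
    condition u(0) = u0 for ANY network N."""
    return U0 + x * net(x)

def residual(x, net, h=1e-4):
    """Residual of u' + u = 0 at x, with u' by central difference
    (an autodiff framework would compute u' exactly)."""
    du = (trial(x + h, net) - trial(x - h, net)) / (2.0 * h)
    return du + trial(x, net)

# With the 'ideal' N(x) = (exp(-x) - 1)/x the trial solution is exactly
# exp(-x), so the residual vanishes up to finite-difference error.
ideal = lambda x: (math.exp(-x) - 1.0) / x if x != 0.0 else -1.0
```

Training then minimizes the sum of squared residuals over collocation points, which is where the choice of network structure, activation function, and minimization method enters.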


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 169
Author(s):  
Eduardo Paluzo-Hidalgo ◽  
Rocio Gonzalez-Diaz ◽  
Miguel A. Gutiérrez-Naranjo ◽  
Jónathan Heras

Broadly speaking, an adversarial example against a classification model occurs when a small perturbation on an input data point produces a change on the output label assigned by the model. Such adversarial examples represent a weakness for the safety of neural network applications, and many different solutions have been proposed for minimizing their effects. In this paper, we propose a new approach by means of a family of neural networks called simplicial-map neural networks constructed from an Algebraic Topology perspective. Our proposal is based on three main ideas. Firstly, given a classification problem, both the input dataset and its set of one-hot labels will be endowed with simplicial complex structures, and a simplicial map between such complexes will be defined. Secondly, a neural network characterizing the classification problem will be built from such a simplicial map. Finally, by considering barycentric subdivisions of the simplicial complexes, a decision boundary will be computed to make the neural network robust to adversarial attacks of a given size.
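A basic building block behind simplicial maps and barycentric subdivisions is the barycentric coordinate of a point in a simplex. The standard 2-simplex (triangle) formula is sketched below; this is textbook material, not the paper's construction:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in the triangle (a, b, c);
    they are non-negative and sum to 1 exactly when p lies inside."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

# The centroid of any triangle has coordinates (1/3, 1/3, 1/3).
coords = barycentric((1/3, 1/3), (0, 0), (1, 0), (0, 1))
```

A simplicial map sends vertices to vertices and extends linearly via these coordinates, which is what lets a neural network realize the map exactly.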


2011 ◽  
Vol 2011 ◽  
pp. 1-8
Author(s):  
Minoru Sasaki ◽  
Takuya Murase ◽  
Yoshihiro Inoue ◽  
Nobuharu Ukita

This paper presents identification and control of a 10-m antenna via accelerometers and angle encoder data. Artificial neural networks can be used effectively for the identification and control of nonlinear dynamical systems such as a large flexible antenna with a friction drive system. Some identification results are shown and compared with the results of the conventional prediction error method. We then use a neural network inverse model to control the large flexible antenna: a neural network is trained, using supervised learning, to develop an inverse model of the antenna, where the network input is the process output and the network output is the corresponding process input. The control results demonstrate the validity of the ANN approach for identification and control of the 10-m flexible antenna.
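The inverse-model idea (network input = process output, network output = process input) can be sketched with a linear stand-in plant and a one-parameter least-squares fit playing the role of the supervised neural network; all values are illustrative assumptions:

```python
# Stand-in plant (unknown to the controller): y = 2 * u.
plant = lambda u: 2.0 * u

# Collect input/output data and fit the INVERSE mapping y -> u by least
# squares; a one-parameter stand-in for the supervised neural network.
us = [0.5, 1.0, 1.5, 2.0]
ys = [plant(u) for u in us]
k = sum(y * u for y, u in zip(ys, us)) / sum(y * y for y in ys)
inverse_model = lambda y: k * y   # network input = process output

# To drive the plant to a setpoint, feed the desired output through
# the inverse model and apply the resulting command.
setpoint = 3.0
u_cmd = inverse_model(setpoint)
achieved = plant(u_cmd)
```

With a nonlinear plant such as a friction-driven antenna, the one-parameter fit is replaced by a trained network, but the control scheme (command = inverse model applied to the desired output) is the same.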


Author(s):  
Zhun Yang ◽  
Adam Ishay ◽  
Joohyung Lee

We present NeurASP, a simple extension of answer set programs that embraces neural networks. By treating the neural network output as a probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. We demonstrate how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network's perception results by applying symbolic reasoning in answer set programming. NeurASP can also use ASP rules to train a neural network better, so that the network learns not only implicit correlations from the data but also the explicit complex semantic constraints expressed by the rules.
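The core idea of treating network outputs as a probability distribution over atomic facts can be sketched by weighting each joint truth assignment with the classifier's softmax outputs and summing the assignments that satisfy a symbolic rule; the restriction to digits 0-2 and the probability values are assumptions for brevity:

```python
from itertools import product

# Softmax outputs of a hypothetical digit classifier for two images,
# restricted to digits 0-2 for brevity (values are assumptions).
p_img1 = [0.7, 0.2, 0.1]
p_img2 = [0.1, 0.8, 0.1]

# Rule over the atomic facts: "the two digits sum to an even number".
# Weight every joint assignment by the network probabilities and add
# up those assignments that satisfy the rule.
prob_even = sum(p_img1[a] * p_img2[b]
                for a, b in product(range(3), repeat=2)
                if (a + b) % 2 == 0)
```

NeurASP itself performs this weighting over the stable models of an answer set program rather than by brute-force enumeration, but the semantics of combining network probabilities with logical constraints is the same.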

