ACCELERATING TRAINING OF FEEDFORWARD NEURAL NETWORKS

1994 ◽  
Vol 03 (03) ◽  
pp. 339-348
Author(s):  
CARL G. LOONEY

We review methods and techniques for training feedforward neural networks that avoid problematic behavior, accelerate convergence, and verify the training. Adaptive step gain, bipolar activation functions, and conjugate gradients are powerful stabilizers. Random search techniques circumvent the local-minimum trap and avoid over-specialization due to overtraining. Testing assures the quality of the learning.
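The adaptive step gain mentioned in the abstract can be sketched as a simple accept/reject rule on a least-squares loss: grow the learning rate slightly after each successful step and cut it after a failed one. This is a minimal illustration of the general idea, not Looney's exact algorithm; the function name and the gain factors `up` and `down` are assumptions.

```python
import numpy as np

def train_adaptive_gain(x, y, w, steps=200, eta=0.1, up=1.05, down=0.5):
    """Gradient descent on a mean-squared-error loss with an adaptive
    step gain: increase eta after an accepted step, shrink it after a
    rejected (loss-increasing) one."""
    def loss(w):
        return float(np.mean((x @ w - y) ** 2))

    prev = loss(w)
    for _ in range(steps):
        grad = 2.0 * x.T @ (x @ w - y) / len(y)
        cand = w - eta * grad
        cur = loss(cand)
        if cur <= prev:            # step succeeded: accept and raise the gain
            w, prev, eta = cand, cur, eta * up
        else:                      # step overshot: reject and lower the gain
            eta *= down
    return w, prev
```

The accept/reject test is what stabilizes training: the gain is allowed to creep upward until a step increases the loss, at which point it is halved, so the effective step size tracks the largest stable value.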

2019 ◽  
Vol 50 (1) ◽  
pp. 121-147 ◽  
Author(s):  
Ezequiel López-Rubio ◽  
Francisco Ortega-Zamorano ◽  
Enrique Domínguez ◽  
José Muñoz-Pérez

Artificial intelligence has shown monumental growth in closing the gap between the capabilities of humans and machines, and computer vision is one of the areas driving that progress. Neural networks are used to give systems the ability to interpret visual data. Well-known architectures include convolutional neural networks (CNNs), feedforward neural networks (FNNs), and recurrent neural networks (RNNs). Among them, the CNN is the natural choice for computer vision because it learns relevant features from an image or video in a way loosely analogous to the human visual system. The dataset used in this paper is CIFAR-10 (Canadian Institute For Advanced Research), which contains 60,000 images of size 32x32 divided into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. Of these, 50,000 images are used for training and 10,000 for testing. This paper concentrates on improving performance using normalization layers and on comparing the accuracy achieved with different activation functions such as ReLU and Tanh.
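The two activation functions and the normalization idea the paper compares can be illustrated framework-free with NumPy. This is a simplified sketch of batch normalization over a mini-batch, not the paper's exact architecture; the function names are assumptions.

```python
import numpy as np

def relu(z):
    """Rectified linear unit: passes positive values, zeroes negatives."""
    return np.maximum(0.0, z)

def tanh(z):
    """Hyperbolic tangent: squashes inputs into (-1, 1), zero-centered."""
    return np.tanh(z)

def batch_norm(z, eps=1e-5):
    """Normalize each feature (column) to zero mean and unit variance
    over the batch; eps guards against division by zero."""
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)
```

ReLU is unbounded above and avoids saturation for positive inputs, while Tanh saturates at both extremes; normalization layers keep layer inputs in a range where either activation stays responsive, which is the performance lever the paper investigates.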


2007 ◽  
Vol 16 (01) ◽  
pp. 111-120 ◽  
Author(s):  
MANISH MANGAL ◽  
MANU PRATAP SINGH

This paper describes the application of two evolutionary algorithms to feedforward neural networks used in classification problems. In addition to simple backpropagation, the paper considers a genetic algorithm and a random search algorithm. The objective is to analyze the performance of GAs relative to simple backpropagation in terms of accuracy and speed on this problem. The experiments used feedforward neural networks trained with the genetic algorithm or the random search algorithm, 39 types of network structures, and artificial data sets. In most cases, the evolutionary feedforward neural networks achieved better or equal accuracy compared with the original backpropagation-trained networks. We found few differences in the accuracy of the networks produced by the EAs, but ample differences in execution time. The results suggest that the evolutionary feedforward neural network with the random search algorithm might be the best algorithm on the data sets we tested.
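The random-search training the paper evaluates can be sketched as a (1+1)-style scheme on a one-hidden-layer feedforward network: propose a Gaussian perturbation of all weights and keep it only if it lowers the loss. This is an illustrative sketch; the network shape, `sigma`, and the function name are assumptions, not the authors' exact setup.

```python
import numpy as np

def random_search_train(x, y, hidden=8, iters=2000, sigma=0.5, seed=0):
    """Train a one-hidden-layer feedforward net by pure random search:
    propose Gaussian weight perturbations, keep only improvements."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.5, size=(x.shape[1], hidden))
    w2 = rng.normal(scale=0.5, size=(hidden, 1))

    def loss(w1, w2):
        h = np.tanh(x @ w1)          # hidden layer with tanh activation
        return float(np.mean((h @ w2 - y) ** 2))

    best = loss(w1, w2)
    for _ in range(iters):
        d1 = rng.normal(scale=sigma, size=w1.shape)
        d2 = rng.normal(scale=sigma, size=w2.shape)
        cand = loss(w1 + d1, w2 + d2)
        if cand < best:              # accept only loss-reducing proposals
            w1, w2, best = w1 + d1, w2 + d2, cand
    return w1, w2, best
```

Because the method needs only loss evaluations, not gradients, it sidesteps backpropagation entirely; the trade-off the paper measures is how many such evaluations (and how much execution time) this costs relative to gradient-based training.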

