Hybrid tuning of activation functions in feedforward neural networks

Author(s): L.N. De Castro, L.A. Ramirez, F. Gomide, F.J. Von Zuben

1994, Vol. 03 (03), pp. 339-348
Author(s): Carl G. Looney

We review methods and techniques for training feedforward neural networks that avoid problematic behavior, accelerate convergence, and verify the training. Adaptive step gain, bipolar activation functions, and conjugate gradients are powerful stabilizers. Random search techniques circumvent the local-minimum trap and avoid over-specialization caused by overtraining. Testing verifies the quality of the learning.
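Two of the ideas mentioned above, bipolar activation functions and an adaptive step gain, can be illustrated with a toy single-layer network. The sketch below uses a bold-driver-style gain adjustment (grow the step while the error falls, shrink it and reject the step when the error rises); the network shape, data, and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bipolar_sigmoid(x):
    # Bipolar sigmoid: output in (-1, 1), equivalent to tanh(x / 2).
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def bipolar_sigmoid_deriv(y):
    # Derivative written in terms of the activation value y.
    return 0.5 * (1.0 + y) * (1.0 - y)

def train(X, T, epochs=200, gain=0.1, up=1.05, down=0.5):
    """Gradient descent on a single-layer net with an adaptive step gain:
    accept a step and grow the gain when the error decreases, otherwise
    reject the step and shrink the gain."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(X.shape[1], T.shape[1]))
    for _ in range(epochs):
        Y = bipolar_sigmoid(X @ W)
        err = np.mean((T - Y) ** 2)
        grad = -X.T @ ((T - Y) * bipolar_sigmoid_deriv(Y)) / len(X)
        W_new = W - gain * grad
        new_err = np.mean((T - bipolar_sigmoid(X @ W_new)) ** 2)
        if new_err <= err:
            W, gain = W_new, gain * up   # error fell: keep step, grow gain
        else:
            gain *= down                 # error rose: reject step, shrink gain
    return W

if __name__ == "__main__":
    # Toy logical-OR data in bipolar (-1/+1) encoding, with a bias column.
    X = np.array([[-1, -1, 1], [-1, 1, 1], [1, -1, 1], [1, 1, 1]], float)
    T = np.array([[-1], [1], [1], [1]], float)
    W = train(X, T)
    Y = bipolar_sigmoid(X @ W)
    print("final MSE:", np.mean((T - Y) ** 2))
```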


2019, Vol. 50 (1), pp. 121-147
Author(s): Ezequiel López-Rubio, Francisco Ortega-Zamorano, Enrique Domínguez, José Muñoz-Pérez

Artificial intelligence has shown monumental growth in closing the gap between the capabilities of humans and machines, and researchers work on many aspects of it; computer vision is one of them. To give a system the ability to "see", neural networks are used. Well-known architectures include convolutional neural networks (CNN), feedforward neural networks (FNN), and recurrent neural networks (RNN). Among them, CNNs are the natural choice for computer vision because they learn relevant features from an image or video in a way loosely analogous to the human visual system. In this paper, the dataset used is CIFAR-10 (Canadian Institute for Advanced Research), which contains 60,000 images of size 32x32 divided into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The set is split into 50,000 training images and 10,000 testing images. The paper concentrates on improving performance using normalization layers and on comparing the accuracy achieved with different activation functions such as ReLU and Tanh.
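The comparison described above (normalization layers plus ReLU versus Tanh activations on CIFAR-10) can be set up roughly as follows. This is a minimal PyTorch sketch assuming a small CNN of my own choosing; it is not the exact architecture from the paper.

```python
import torch
import torch.nn as nn

def make_cnn(activation):
    """Small CNN for 32x32 RGB inputs (CIFAR-10) with batch normalization.
    `activation` is an nn.Module class (e.g. nn.ReLU or nn.Tanh), so the
    same architecture can be trained with either non-linearity."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        nn.BatchNorm2d(32),
        activation(),
        nn.MaxPool2d(2),              # 32x32 -> 16x16
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64),
        activation(),
        nn.MaxPool2d(2),              # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 128),
        activation(),
        nn.Linear(128, 10),           # 10 CIFAR-10 classes
    )

if __name__ == "__main__":
    # Forward a dummy batch through both variants to check the output shapes;
    # in practice each model would be trained on CIFAR-10 and the test
    # accuracies compared.
    batch = torch.randn(4, 3, 32, 32)
    for act in (nn.ReLU, nn.Tanh):
        model = make_cnn(act)
        logits = model(batch)
        print(act.__name__, "output shape:", tuple(logits.shape))
```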

