Pruned Cascade Neural Network Image Classification

2019, Vol 8 (3), pp. 6454-6457

In this paper we propose a new deep neural network model for building deeper networks. The convolutional neural network is a leading approach to the image classification problem, but the vanishing gradient problem forces the use of a small learning rate with gradient descent, which requires many small steps to converge and therefore takes a long time. By using a GPU we can process more than one dataset (CIFAR-100) in a single session. To overcome the vanishing gradient problem, we use a pruned cascade-correlation neural network learning algorithm and compare it with deep cascade learning in a CNN architecture. We improve the filter size to mitigate the problem, using a training algorithm that trains the network from bottom to top and attains better image classification with GoogLeNet. Using a pre-training algorithm, we reduce time complexity (training time) and storage requirements.
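The bottom-to-top idea can be sketched concretely: each block is trained in turn against a temporary classifier head while all previously trained blocks stay frozen, so gradients never have to propagate through the whole deep stack. The following is a minimal PyTorch sketch of generic cascade (layer-wise) training, not the authors' exact pruned cascade-correlation algorithm; the `make_head` factory, block structure and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of bottom-up (cascade) layer-wise CNN training.
# Assumptions: `blocks` is a list of nn.Module conv blocks, `make_head`
# builds a temporary classifier head sized for a block's output, and
# `loader` yields (image batch, label batch) pairs.
import torch
import torch.nn as nn

def train_block(block, head, frozen, loader, epochs=1, lr=1e-3):
    """Train one block (plus its temporary head) while lower blocks stay frozen."""
    opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():      # gradients never enter the frozen stack,
                x = frozen(x)          # so they cannot vanish on the way down
            loss = loss_fn(head(block(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

def cascade_train(blocks, make_head, loader):
    """Bottom-to-top training: blocks are trained one at a time, then frozen."""
    frozen = nn.Sequential()           # empty Sequential passes input through
    for block in blocks:
        head = make_head(block)        # temporary head for this block's output
        train_block(block, head, frozen, loader)
        frozen = nn.Sequential(*list(frozen), block.eval())
    return frozen
```

Because each stage optimizes only one shallow block, the effective gradient path stays short at every step, which is the property the cascade approach exploits.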

2016, Vol 12 (11), pp. 4488-4499
Author(s): Manjula Devi, S.J. Suji Prasad, Sagana C

Among the existing NN architectures, the Multilayer Feedforward Neural Network (MFNN) with a single hidden layer has been thoroughly scrutinized as the best architecture for solving nonlinear classification problems. For very large training datasets, however, the MFNN training phase consumes considerable time. In order to reduce the training time, a simple and fast training algorithm called the Exponential Adaptive Skipping Training (EAST) algorithm was presented; it improves training speed by significantly reducing the total number of training input samples the MFNN consumes at every single epoch. Although EAST trains faster, its accuracy rate suffers due to the high skipping factor. In order to improve the accuracy rate of the training algorithm, a hybrid system has been suggested in which the neural network is trained with fuzzified data. In this paper, a z-Score Fuzzy Exponential Adaptive Skipping Training (z-FEAST) algorithm is proposed, based on the fuzzification of EAST. The proposed z-FEAST algorithm is evaluated on the benchmark datasets Iris, Waveform, Heart Disease and Breast Cancer for different learning rates. Simulation studies show that the z-FEAST training algorithm improves the accuracy rate.
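The skipping mechanism can be illustrated with a short sketch. The exact skipping schedule and fuzzification used by EAST and z-FEAST are defined in the paper; the version below is an assumed illustration in which a sample the network already classifies correctly is skipped for an exponentially growing number of epochs, and `train_step` / `predict` are placeholder callables standing in for the MFNN.

```python
# Hedged sketch of exponential adaptive skipping (EAST-style) plus a
# z-score preprocessing step in the spirit of z-FEAST. Schedules and
# names are illustrative assumptions, not the paper's exact algorithm.
import numpy as np

def z_score_fuzzify(X):
    """Standardize each feature (z-score) before presenting it to the network."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

def east_epoch(train_step, predict, X, y, streak, skip_until, epoch, base=2):
    """One epoch: train only on samples whose skip window has expired."""
    trained = 0
    for i in range(len(X)):
        if epoch < skip_until[i]:
            continue                                    # still inside skip window
        train_step(X[i], y[i])
        trained += 1
        if predict(X[i]) == y[i]:
            streak[i] += 1
            skip_until[i] = epoch + base ** streak[i]   # exponential skipping
        else:
            streak[i] = 0                               # misclassified: retrain soon
            skip_until[i] = epoch + 1
    return trained                                      # samples actually consumed
```

The trade-off the abstract describes is visible here: a larger skipping factor (`base`) consumes fewer samples per epoch but risks accuracy, which is what the fuzzified z-FEAST variant aims to recover.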


2000
Author(s): Magdy Mohamed Abdelhameed, Sabri Cetinkunt

Abstract: The cerebellar model articulation controller (CMAC) is a useful neural network learning technique. It was developed two decades ago but still lacks an adequate learning algorithm, especially when used in a hybrid-type controller. This work introduces a simulation study examining the performance of a hybrid-type control system based on the conventional learning algorithm of the CMAC neural network. The study showed that this control system is unstable. A new adaptive learning algorithm for a CMAC-based hybrid-type controller is then proposed. The main features of the proposed learning algorithm, as well as the effects of its newly introduced parameters, are studied extensively via simulation case studies. The simulation results show that the proposed learning algorithm is robust in stabilizing the control system while preserving all the known advantages of the CMAC neural network. Part II of this work is dedicated to validating the effectiveness of the proposed CMAC learning algorithm experimentally.
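The conventional CMAC learning rule the study starts from is simple to state: an input activates one cell in each of several overlapping tilings, the output is the sum of the active weights, and the output error is spread evenly over the active cells. Below is a minimal one-dimensional sketch of that conventional update; table sizes and the quantization scheme are illustrative assumptions, and the paper's proposed adaptive algorithm modifies this baseline rather than reproducing it.

```python
# Minimal 1-D CMAC with the conventional LMS-style weight update.
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_cells=64, lo=-1.0, hi=1.0, alpha=0.5):
        self.n_tilings, self.n_cells, self.alpha = n_tilings, n_cells, alpha
        self.lo, self.hi = lo, hi
        self.w = np.zeros((n_tilings, n_cells))

    def _active_cells(self, x):
        # Each tiling is shifted by a fraction of a cell, so an input
        # activates one cell per tiling (classic tile-coding layout).
        span = self.hi - self.lo
        for t in range(self.n_tilings):
            idx = int((x - self.lo) / span * (self.n_cells - 1) + t / self.n_tilings)
            yield t, min(max(idx, 0), self.n_cells - 1)

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active_cells(x))

    def update(self, x, target):
        # Conventional CMAC learning: spread the error over the active cells.
        err = target - self.predict(x)
        for t, i in self._active_cells(x):
            self.w[t, i] += self.alpha * err / self.n_tilings
```

Because only the handful of active cells is touched per update, CMAC learns locally and fast, which is exactly why its stability inside a feedback loop (the subject of this paper) needs separate analysis.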


Algorithms, 2018, Vol 11 (9), pp. 139
Author(s): Ioannis Livieris, Andreas Kanavos, Vassilis Tampakas, Panagiotis Pintelas

Semi-supervised learning algorithms have become a topic of significant research as an alternative to traditional classification methods, which exhibit remarkable performance on labeled data but cannot exploit large amounts of unlabeled data. In this work, we propose a new semi-supervised learning algorithm that dynamically selects the most promising learner for a classification problem from a pool of classifiers, following a self-training philosophy. Our experimental results illustrate that the proposed algorithm outperforms its component semi-supervised learning algorithms in terms of accuracy, leading to more efficient, stable and robust predictive models.
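The two ingredients, dynamic learner selection and self-training, compose naturally. The sketch below is an assumed illustration of that combination using scikit-learn, not the authors' published algorithm: the pool composition, the cross-validated selection criterion and the confidence threshold are all illustrative choices.

```python
# Hedged sketch: pick the most promising classifier from a pool, then
# self-train it by pseudo-labeling its most confident predictions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def select_and_self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    X_lab, y_lab = np.asarray(X_lab), np.asarray(y_lab)
    X_unlab = np.asarray(X_unlab)
    pool = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier()]
    # Dynamic selection: keep the learner that scores best on the labeled data.
    best = max(pool, key=lambda c: cross_val_score(c, X_lab, y_lab, cv=3).mean())
    for _ in range(rounds):                            # self-training loop
        best.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = best.predict_proba(X_unlab)
        sure = proba.max(axis=1) >= threshold          # only confident samples
        if not sure.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[sure]])      # absorb pseudo-labeled data
        y_lab = np.concatenate([y_lab, best.classes_[proba[sure].argmax(axis=1)]])
        X_unlab = X_unlab[~sure]
    return best.fit(X_lab, y_lab)
```

Selecting the learner before self-training matters because pseudo-labeling amplifies whatever bias the chosen base classifier has; starting from the strongest candidate keeps the added labels as clean as possible.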


Author(s): T.K. Biryukova

Classic neural networks assume that the trainable parameters are just the weights of the neurons. This paper proposes parabolic integrodifferential splines (ID-splines), developed by the author, as a new kind of activation function (AF) for neural networks, in which the ID-spline coefficients are also trainable parameters. The parameters of the ID-spline AF vary during training together with the neuron weights in order to minimize the loss function, reducing the training time and increasing the operation speed of the neural network. The newly developed algorithm enables a software implementation of the ID-spline AF as a tool for neural network construction, training and operation. It is proposed to use the same ID-spline AF for all neurons in a layer, but different AFs for different layers; the parameters of the ID-spline AF for a particular layer then change during training independently of the activation functions (AFs) of the other layers. In order to satisfy the continuity condition for the derivative of the parabolic ID-spline on the interval (x_0, x_n), its parameters f_i (i = 0, ..., n) are calculated from a tridiagonal system of linear algebraic equations. To solve the system, two more equations arising from the boundary conditions of the specific problem are needed; for example, the values of the grid function at the endpoints (if they are known) may be used: f_0 = f(x_0), f_n = f(x_n). The parameters I_i^{i+1} (i = 0, ..., n-1) are used as trainable parameters of the neural network. The grid boundaries and the spacing of the ID-spline AF nodes are best chosen experimentally; an optimal selection of grid nodes improves the quality of the results produced by the neural network. The formula for a parabolic ID-spline is such that the computational complexity does not depend on whether the grid of nodes is uniform or non-uniform. An experimental comparison was carried out on image classification with the popular FashionMNIST dataset, using convolutional neural networks with ID-spline AFs and with the well-known ReLU AF, defined as ReLU(x) = 0 for x < 0 and ReLU(x) = x for x ≥ 0. The results reveal that the ID-spline AFs provide better accuracy of neural network operation than the ReLU AF. The training time for a network with two convolutional layers and two ID-spline AFs is only about 2 times longer than with two instances of the ReLU AF; this doubling of the training time, due to the complexity of the ID-spline formula, is an acceptable price for the significantly better accuracy of the network, while the difference in operation speed between networks with ID-spline and ReLU AFs is negligible. The use of trainable ID-spline AFs makes it possible to simplify the architecture of neural networks without losing their efficiency. Modifying well-known neural networks (ResNet etc.) by replacing traditional AFs with ID-spline AFs is a promising approach to increasing neural network accuracy. In most cases, such a substitution does not require training the network from scratch, because it allows the use of neuron weights pre-trained on large datasets and supplied by standard software libraries for neural network construction, thus substantially shortening training time.
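The central idea, spline node values trained jointly with the neuron weights and one AF instance shared per layer, can be sketched in PyTorch. The code below is a piecewise-linear stand-in, not the parabolic ID-spline itself (its tridiagonal construction is not reproduced here); the grid bounds, node count and ReLU-shaped initialization are illustrative assumptions.

```python
# Hedged sketch of a trainable spline-style activation function whose
# node values are ordinary parameters updated by the optimizer alongside
# the network weights, as the abstract proposes for ID-spline AFs.
import torch
import torch.nn as nn

class TrainableSplineAF(nn.Module):
    """One instance per layer: all neurons in the layer share this AF."""
    def __init__(self, n_nodes=16, lo=-3.0, hi=3.0):
        super().__init__()
        self.register_buffer("grid", torch.linspace(lo, hi, n_nodes))
        # Start near ReLU so training begins from familiar behavior.
        self.values = nn.Parameter(torch.relu(self.grid).clone())

    def forward(self, x):
        g = self.grid
        x = x.clamp(g[0].item(), g[-1].item())   # stay inside the node grid
        idx = torch.bucketize(x, g[1:-1])        # interval index for each input
        x0, x1 = g[idx], g[idx + 1]
        f0, f1 = self.values[idx], self.values[idx + 1]
        t = (x - x0) / (x1 - x0)
        return f0 + t * (f1 - f0)                # interpolate between node values
```

Dropped into a network in place of ReLU (e.g. `nn.Sequential(nn.Conv2d(1, 8, 3), TrainableSplineAF(), ...)`), the node values receive gradients from the loss exactly like weights do, which is the mechanism behind the paper's claim that the AF shape adapts per layer during training.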

