Evolutionary Spiking Neural Networks for Solving Supervised Classification Problems

2019, Vol 2019, pp. 1-13
Author(s): G. López-Vázquez, M. Ornelas-Rodriguez, A. Espinal, J. A. Soria-Alcaraz, A. Rojas-Domínguez, et al.

This paper presents a grammatical evolution (GE)-based methodology to automatically design third-generation artificial neural networks (ANNs), also known as spiking neural networks (SNNs), for solving supervised classification problems. The proposal performs the SNN design by exploring the search space of three-layered feedforward topologies with configured synaptic connections (weights and delays), so that no explicit training is carried out. In addition, the designed SNNs have partial connections between the input and hidden layers, which may help avoid redundancy and reduce the dimensionality of the input feature vectors. The proposal was tested on several well-known benchmark datasets from the UCI repository and statistically compared against a similar design methodology for second-generation ANNs and an adapted version of that methodology for SNNs; the results of all three methodologies were further improved by changing the fitness function used in the design process. The proposed methodology shows competitive and consistent results, and the statistical tests support the conclusion that the designs produced by the proposal perform better than those produced by the other methodologies.
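As a rough, hedged sketch of the kind of design loop described above (not the authors' actual grammar, encoding, or simulator), the snippet below decodes an integer genotype into a partially connected input-to-hidden wiring with per-synapse weights and delays, and scores each candidate purely by classification accuracy, so no explicit training step appears. The codon encoding, all evolutionary settings, and the `simulate_snn` placeholder are assumptions.

```python
import random

def decode(genotype, n_inputs, n_hidden, w_range=(-1.0, 1.0), d_range=(1, 16)):
    """Map an integer genotype to a partial input->hidden wiring with per-synapse
    weights and delays. Codons are read with wrap-around, loosely mirroring how
    grammatical evolution consumes a genotype (the encoding itself is illustrative)."""
    idx = 0
    def codon():
        nonlocal idx
        value = genotype[idx % len(genotype)]
        idx += 1
        return value
    synapses = []
    for i in range(n_inputs):
        for h in range(n_hidden):
            if codon() % 2:  # this codon decides whether the connection exists (partial connectivity)
                w = w_range[0] + (codon() % 256) / 255.0 * (w_range[1] - w_range[0])
                d = d_range[0] + codon() % (d_range[1] - d_range[0] + 1)
                synapses.append((i, h, w, d))
    return synapses

def fitness(genotype, dataset, simulate_snn, n_inputs, n_hidden):
    """Classification accuracy of the decoded design; `simulate_snn` stands in for an
    SNN forward pass (e.g., spike-response neurons) and is only a placeholder here."""
    synapses = decode(genotype, n_inputs, n_hidden)
    correct = sum(simulate_snn(synapses, x) == y for x, y in dataset)
    return correct / len(dataset)

def evolve(dataset, simulate_snn, n_inputs, n_hidden, pop_size=40, generations=100, length=300):
    """A minimal evolutionary loop: keep the better half, mutate it to refill the population."""
    population = [[random.randrange(256) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: fitness(g, dataset, simulate_snn, n_inputs, n_hidden), reverse=True)
        parents = population[: pop_size // 2]
        children = [[c if random.random() > 0.01 else random.randrange(256) for c in p] for p in parents]
        population = parents + children
    return population[0]
```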

2022, pp. 166-201
Author(s): Asha Gowda Karegowda, Devika G.

Artificial neural networks (ANNs) are often well suited to classification problems. Even so, training an ANN remains a challenging task for problems with large, high-dimensional search spaces. These difficulties are greater for applications that involve fine-tuning the ANN control parameters: the weights and biases. No single search and optimization method suits the weights and biases of an ANN for all problems. Traditional heuristic approaches fail because of their poor convergence speed and their tendency to end up in local optima. In this connection, meta-heuristic algorithms have proven to provide consistent solutions for optimizing ANN training parameters. This chapter provides a critique of the existing literature on both heuristic and meta-heuristic algorithms for training neural networks, covering their applicability and reliability for parameter optimization. In addition, real-time applications of ANNs are presented. Finally, future directions to be explored in the field of ANNs are outlined, which will be of potential interest to upcoming researchers.
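As a minimal illustration of the general idea surveyed here, and not any specific method the chapter endorses, the sketch below tunes the flattened weights and biases of a tiny one-hidden-layer network with a bare-bones particle swarm optimizer; the network dimensions, swarm coefficients, and the mean-squared-error objective are all illustrative assumptions.

```python
import numpy as np

def forward(params, X, n_in, n_hid, n_out):
    """Unpack a flat parameter vector into weights/biases and run a tiny MLP."""
    i = 0
    W1 = params[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = params[i:i + n_hid]; i += n_hid
    W2 = params[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = params[i:i + n_out]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse_loss(params, X, y, dims):
    return np.mean((forward(params, X, *dims) - y) ** 2)

def pso_train(X, y, dims, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Particle swarm optimization over the network's weights and biases."""
    n_in, n_hid, n_out = dims
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    pos = np.random.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([mse_loss(p, X, y, dims) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([mse_loss(p, X, y, dims) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```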


2009, Vol 19 (04), pp. 295-308
Author(s): Samanwoy Ghosh-Dastidar, Hojjat Adeli

Most current Artificial Neural Network (ANN) models are based on highly simplified brain dynamics. They have been used as powerful computational tools to solve complex pattern recognition, function estimation, and classification problems. ANNs have been evolving towards more powerful and more biologically realistic models. In the past decade, Spiking Neural Networks (SNNs), which are composed of spiking neurons, have been developed. Information transfer in these neurons mimics information transfer in biological neurons, i.e., via the precise timing of a spike or a sequence of spikes. To facilitate learning in such networks, new learning algorithms based on varying degrees of biological plausibility have also been developed recently. The addition of a temporal dimension for information encoding in SNNs yields new insight into the dynamics of the human brain and could result in compact representations of large neural networks. As such, SNNs have great potential for solving complicated time-dependent pattern recognition problems because of their inherent dynamic representation. This article presents a state-of-the-art review of the development of spiking neurons and SNNs, and provides insight into their evolution as the third generation of neural networks.
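To make the notion of information transfer via precise spike timing concrete, here is a minimal leaky integrate-and-fire simulation, a standard textbook neuron model rather than any specific model from this review; the time step, time constant, threshold, and input currents are illustrative.

```python
import numpy as np

def lif_spike_times(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron and return its spike times (ms).
    The membrane potential leaks toward rest while integrating the input; a spike is
    emitted, and the potential reset, whenever the threshold is crossed."""
    v = v_rest
    spike_times = []
    for step, I in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + I)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A stronger input drives earlier and denser spikes: the timing itself carries information.
weak_spikes = lif_spike_times(np.full(200, 1.2))
strong_spikes = lif_spike_times(np.full(200, 2.0))
```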


2012, Vol 22 (01), pp. 77-87
Author(s): M. A. H. Akhand, K. Murase

An ensemble performs well when the component classifiers are diverse yet accurate, so that the failure of one is compensated for by the others. A number of methods have been investigated for constructing ensembles, some of which train classifiers on generated patterns. This study investigates a new technique of training pattern generation. The method alters the input feature values of some patterns using the values of other patterns, generating different training patterns for different classifiers. The effectiveness of a neural network ensemble based on the proposed technique was evaluated on a suite of 25 benchmark classification problems and was found to achieve performance better than or competitive with related conventional methods. Experimental investigation of different input value alteration techniques shows that alteration using pattern values from the same class is better for generalization, although other alteration techniques may offer more diversity.
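The abstract does not give the exact alteration rule, so the sketch below is one hedged reading of the idea: for each ensemble member, some feature values of each training pattern are replaced with the corresponding values of a randomly chosen pattern from the same class; the alteration probability and donor selection are assumptions.

```python
import random

def alter_within_class(X, y, alter_prob=0.3, rng=None):
    """Create one classifier's training set by replacing some feature values of each
    pattern with the corresponding values of another pattern from the same class."""
    rng = rng or random.Random()
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    X_new = []
    for xi, yi in zip(X, y):
        donor = rng.choice(by_class[yi])  # donor pattern from the same class (possibly the pattern itself)
        X_new.append([d if rng.random() < alter_prob else v for v, d in zip(xi, donor)])
    return X_new, list(y)

def build_ensemble_data(X, y, n_members=5, seed=0):
    """Different altered copies of the training data for different ensemble members."""
    return [alter_within_class(X, y, rng=random.Random(seed + m)) for m in range(n_members)]
```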


2020
Author(s): Friedemann Zenke, Tim P. Vogels

Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. In comparison, the functional capabilities of models of spiking networks are still rudimentary. This shortcoming is mainly due to the lack of insight and practical algorithms to construct the necessary connectivity. Any such algorithm typically attempts to build networks by iteratively reducing the error compared to a desired output. But assigning credit to hidden units in multi-layered spiking networks has remained challenging due to the non-differentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity in spiking network models. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients impact learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative’s scale can substantially affect learning performance. When we combine surrogate gradients with a suitable activity regularization technique, robust information processing can be achieved in spiking networks even at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
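One common way to implement a surrogate gradient, shown here with PyTorch's custom autograd mechanism, is to keep the hard spike threshold in the forward pass and substitute a smooth derivative in the backward pass; the fast-sigmoid shape and the scale value below are illustrative choices, not necessarily those examined in the paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate derivative in the backward pass."""
    scale = 10.0  # scale of the surrogate derivative: the kind of design parameter the study varies

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # non-differentiable spike nonlinearity

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Fast-sigmoid-shaped surrogate derivative: one common choice of shape.
        surrogate = 1.0 / (SurrogateSpike.scale * u.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply  # used in place of a hard threshold inside a spiking layer
```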


Mathematics, 2021, Vol 9 (21), pp. 2705
Author(s): Nebojsa Bacanin, Ruxandra Stoean, Miodrag Zivkovic, Aleksandar Petrovic, Tarik A. Rashid, et al.

Swarm intelligence techniques have been created to respond to theoretical and practical global optimization problems. This paper puts forward an enhanced version of the firefly algorithm that corrects the acknowledged drawbacks of the original method through an explicit exploration mechanism and a chaotic local search strategy. The resulting augmented approach was theoretically tested on two sets of bound-constrained benchmark functions from the CEC suites and practically validated for automatically selecting the optimal dropout rate for the regularization of deep neural networks. Despite their successful application in a wide spectrum of fields, one important problem that deep learning algorithms face is overfitting. The traditional way of preventing overfitting is to apply regularization; the first option in this sense is the choice of an adequate value for the dropout parameter. In order to demonstrate its ability to find an optimal dropout rate, the boosted version of the firefly algorithm was validated for the deep learning subfield of convolutional neural networks with respect to five standard benchmark datasets for image processing: MNIST, Fashion-MNIST, Semeion, USPS and CIFAR-10. The performance of the proposed approach in both types of experiments was compared with other recent state-of-the-art methods. To prove that there are significant improvements in the results, statistical tests were conducted. Based on the experimental data, it can be concluded that the proposed algorithm clearly outperforms the other approaches.
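For orientation, here is a bare-bones sketch of the canonical firefly update, not the enhanced variant proposed in the paper (it includes neither the explicit exploration mechanism nor the chaotic local search); the coefficients, and the hypothetical `validation_error` objective in the usage comment, are assumptions.

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Canonical firefly algorithm: each firefly moves toward every brighter one,
    with attractiveness decaying with distance, plus a small random step."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], dtype=float), np.array(bounds[1], dtype=float)
    dim = lo.size
    pos = rng.uniform(lo, hi, (n_fireflies, dim))
    brightness = np.array([f(p) for p in pos])  # lower objective value = brighter firefly
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if brightness[j] < brightness[i]:  # j is brighter, so i moves toward j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
                    pos[i] = np.clip(pos[i], lo, hi)
                    brightness[i] = f(pos[i])
    best = brightness.argmin()
    return pos[best], brightness[best]

# E.g., searching a single dropout rate in [0, 0.8] against a (hypothetical) validation-error function:
# best_rate, err = firefly_minimize(lambda p: validation_error(p[0]), ([0.0], [0.8]))
```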


Author(s): Srijan Das, Arpita Dutta, Saurav Sharma, Sangharatna Godboley

Anomaly detection is an important research domain of pattern recognition due to its impact on classification and clustering problems. In this paper, an anomaly detection algorithm is proposed using different primitive cost functions such as the Normal Perceptron, the Relaxation Criterion, Mean Square Error (MSE), and Ho-Kashyap. These criterion functions are minimized to locate the decision boundary in the data space so as to classify the normal data objects and the anomalous data objects. The authors' proposed algorithm uses the concept of supervised classification, though it is very different from solving standard supervised classification problems. The proposed algorithm, under the different criterion functions, has been compared with Neural Networks (NN) in terms of accuracy in order to provide a comparative analysis between them and discuss some advantages.
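As a sketch of how one of the mentioned cost functions can be minimized to place a decision boundary, the snippet below runs batch gradient descent on the classical perceptron criterion; the {-1, +1} label convention, learning rate, and stopping rule are assumptions, and the other criteria (relaxation, MSE, Ho-Kashyap) would follow the same pattern with different objectives.

```python
import numpy as np

def perceptron_criterion_fit(X, y, lr=0.1, epochs=100):
    """Minimize the perceptron criterion J(w) = -sum_{misclassified} y_i * (w . x_i)
    by batch gradient descent; the resulting hyperplane separates the two classes
    (here read as 'normal' vs. 'anomalous')."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # augment with a bias term
    w = np.zeros(Xa.shape[1])
    for _ in range(epochs):
        margins = y * (Xa @ w)
        misclassified = margins <= 0
        if not misclassified.any():
            break  # all samples on the correct side of the boundary
        grad = -(y[misclassified, None] * Xa[misclassified]).sum(axis=0)
        w -= lr * grad
    return w

def predict(w, X):
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.where(Xa @ w >= 0, 1, -1)  # +1 = normal, -1 = anomalous (assumed convention)
```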


2015, Vol 2015, pp. 1-10
Author(s): Chenghao Cai, Yanyan Xu, Dengfeng Ke, Kaile Su

We propose multistate activation functions (MSAFs) for deep neural networks (DNNs). These MSAFs are new kinds of activation functions which are capable of representing more than two states, including the N-order MSAFs and the symmetrical MSAF. DNNs with these MSAFs can be trained via conventional Stochastic Gradient Descent (SGD) as well as mean-normalised SGD. We also discuss how these MSAFs perform when used to solve classification problems. Experimental results on the TIMIT corpus reveal that, on speech recognition tasks, DNNs with MSAFs perform better than conventional DNNs, achieving a relative improvement of 5.60% in phoneme error rate. Further experiments also reveal that mean-normalised SGD facilitates the training of DNNs with MSAFs, especially with large training sets. The models can also be trained directly, without pretraining, when the training set is sufficiently large, which results in a considerable relative improvement of 5.82% in word error rate.
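The abstract does not spell out the MSAF formula, so the following is only an assumed illustration of how an activation with more than two states can be built: a sum of shifted logistic sigmoids yields N+1 plateaus ("states"), and subtracting a constant centres a symmetrical variant on zero. The shift spacing is likewise an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def msaf_n_order(x, n=3, shift=4.0):
    """An assumed N-order multistate activation: a sum of N shifted logistic sigmoids.
    The output forms N+1 plateaus instead of the two states of a single sigmoid."""
    return sum(sigmoid(x - k * shift) for k in range(n))

def msaf_symmetrical(x, n=3, shift=4.0):
    """An assumed symmetrical variant, centred on zero for illustration."""
    offset = (n - 1) * shift / 2.0
    return sum(sigmoid(x - k * shift + offset) for k in range(n)) - n / 2.0
```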


Mathematics, 2020, Vol 8 (1), pp. 69
Author(s): Marco Baioletti, Gabriele Di Bari, Alfredo Milani, Valentina Poggioni

In this paper, a neural network optimizer based on self-adaptive Differential Evolution (DE) is presented. This optimizer applies the mutation and crossover operators in a new way, taking into account the structure of the network according to a per-layer strategy. Moreover, a new crossover called interm is proposed, and a new self-adaptive version of DE called MAB-ShaDE is suggested to reduce the number of parameters. The resulting framework, DENN, has been tested on some well-known classification problems, and a comparative study of the various combinations of self-adaptive methods, mutation operators, and crossover operators available in the literature is performed. Experimental results show that DENN achieves good performance in terms of accuracy, better than or at least comparable with that obtained by backpropagation.
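As a hedged sketch of per-layer differential evolution, not the paper's interm crossover or MAB-ShaDE self-adaptation, the snippet below applies DE/rand/1 mutation and binomial crossover layer by layer to candidate networks represented as lists of weight arrays; F, CR, and the greedy selection are standard illustrative choices.

```python
import numpy as np

def de_step_per_layer(population, fitness_fn, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation where mutation and crossover act layer by layer.
    `population` is a list of candidate networks, each a list of per-layer weight arrays;
    `fitness_fn` returns a value to minimize (e.g., validation loss). Needs >= 4 candidates."""
    rng = rng or np.random.default_rng()
    n = len(population)
    scores = [fitness_fn(ind) for ind in population]
    new_pop = []
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        trial = []
        for xl, al, bl, cl in zip(population[i], population[a], population[b], population[c]):
            mutant = al + F * (bl - cl)       # DE/rand/1 mutation on this layer's weights
            mask = rng.random(xl.shape) < CR  # binomial crossover, decided per weight
            trial.append(np.where(mask, mutant, xl))
        if fitness_fn(trial) <= scores[i]:    # greedy selection keeps the better candidate
            new_pop.append(trial)
        else:
            new_pop.append(population[i])
    return new_pop
```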

