Optimizing connection weights in neural networks using hybrid metaheuristics algorithm

2022 · Vol 12 (1) · pp. 0-0

Training artificial neural networks is an important and complex task in supervised learning. The main difficulty lies in finding the best set of control parameters, namely the connection weights and biases. This paper presents a new training method that hybridizes particle swarm optimization with Multi-Verse Optimization (PMVO) to train feedforward neural networks. The hybrid algorithm searches the solution space more effectively, which reduces the risk of becoming trapped in local minima. The performance of the proposed approach was compared with five evolutionary techniques as well as standard backpropagation with momentum and an adaptive learning rate. The comparison was benchmarked on six bio-medical datasets. The results of the comparative study show that PMVO outperformed the other training methods on most datasets and can serve as an alternative to them.
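As a rough illustration of this family of methods, the sketch below uses plain particle swarm optimization to evolve the weight and bias vector of a tiny feedforward network on toy data. The network size, hyperparameters, and data are illustrative assumptions; the paper's PMVO hybrid additionally incorporates Multi-Verse Optimization, which is not reproduced here.

```python
# Minimal sketch: training a tiny feedforward network with plain PSO.
# Illustrative only; the paper's PMVO hybrid also uses Multi-Verse Optimization.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (stand-in for a bio-medical dataset).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

n_in, n_hid, n_out = 4, 6, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out  # all weights + biases

def unpack(vec):
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def loss(vec):
    W1, b1, W2, b2 = unpack(vec)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2))).ravel()
    return np.mean((p - y) ** 2)  # mean squared error on the training set

# Standard PSO loop over candidate weight vectors ("particles").
n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best training MSE:", pbest_f.min())
```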

2021 · Vol 12 (3) · pp. 149-171
Author(s): Rabab Bousmaha · Reda Mohamed Hamou · Abdelmalek Amine

Training a feedforward neural network (FFNN) is a complex task in supervised learning: an FFNN trainer aims to find the set of weights that minimizes classification error. This paper presents a new training method, BAT-SDE, which hybridizes the bat algorithm with self-adaptive differential evolution to train feedforward neural networks. BAT-SDE searches the solution space more effectively, which proves valuable in large search spaces. The performance of the proposed approach was compared with eight evolutionary techniques as well as standard backpropagation with momentum and an adaptive learning rate. The comparison was benchmarked on seven bio-medical datasets and one large credit card fraud detection dataset. The results of the comparative study show that BAT-SDE outperformed the other training methods on most datasets and can serve as an alternative to them.
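For a sense of how evolutionary weight training looks in code, the sketch below uses SciPy's standard differential evolution to minimize the classification error of a small network. Plain DE stands in here for the BAT-SDE hybrid, and the toy data and network dimensions are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: evolving FFNN weights with SciPy's differential evolution.
# Plain DE stands in for the paper's BAT-SDE hybrid (bat algorithm +
# self-adaptive DE); data and network size are illustrative.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like toy labels

n_in, n_hid = 2, 3
dim = n_in * n_hid + n_hid + n_hid + 1      # weights + biases of a 2-3-1 net

def classification_error(vec):
    W1 = vec[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = vec[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = vec[n_in * n_hid + n_hid:-1].reshape(n_hid, 1)
    b2 = vec[-1]
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))
    return np.mean((p > 0.5) != y)          # fraction misclassified

result = differential_evolution(classification_error,
                                bounds=[(-3, 3)] * dim,
                                maxiter=200, popsize=20, seed=1)
print("training error:", result.fun)
```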


2020 · Vol 34 (04) · pp. 4452-4459
Author(s): Jaedeok Kim · Chiyoun Park · Hyun-Joo Jung · Yoonsuck Choe

Architecture optimization, a technique for finding an efficient neural network that meets given requirements, generally reduces to a set of multiple-choice selection problems among alternative sub-structures or parameters. The discrete nature of the selection problem, however, makes this optimization difficult. To tackle it, we introduce the novel concept of a trainable gate function. The trainable gate function, which confers a differentiable property on discrete-valued variables, allows loss functions that include non-differentiable discrete values, such as 0-1 selections, to be optimized directly. The proposed trainable gate can be applied to pruning: pruning is carried out simply by appending trainable gate functions to each intermediate output tensor and then fine-tuning the overall model with any gradient-based training method, as sketched below. The proposed method thus jointly optimizes the selection of pruned channels while fine-tuning the weights of the pruned model. Our experimental results demonstrate that the proposed method efficiently optimizes arbitrary neural networks in various tasks such as image classification, style transfer, optical flow estimation, and neural machine translation.
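One plausible way to realize such a gate is a hard 0/1 mask in the forward pass with gradients routed through a sigmoid relaxation (a straight-through estimator). The PyTorch sketch below follows that pattern; the paper's exact gate function and sparsity objective may differ, and all names here are illustrative.

```python
# Minimal PyTorch sketch of a trainable channel gate for pruning.
# A hard 0/1 gate is applied in the forward pass, while gradients flow
# through a sigmoid relaxation (straight-through estimator). The paper's
# exact gate function may differ; names here are illustrative.
import torch
import torch.nn as nn

class TrainableGate(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        # One learnable logit per channel; positive logit -> channel kept.
        self.logit = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):                        # x: (N, C, H, W)
        soft = torch.sigmoid(self.logit)         # differentiable relaxation
        hard = (soft > 0.5).float()              # discrete 0/1 selection
        gate = hard + soft - soft.detach()       # straight-through gradients
        return x * gate.view(1, -1, 1, 1)

    def sparsity(self):
        return torch.sigmoid(self.logit).mean()  # penalty on kept channels

# Usage: append a gate to an intermediate tensor and fine-tune end to end.
conv = nn.Conv2d(3, 16, 3, padding=1)
gate = TrainableGate(16)
x = torch.randn(8, 3, 32, 32)
out = gate(conv(x))
loss = out.pow(2).mean() + 0.1 * gate.sparsity()  # task loss + sparsity term
loss.backward()
```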


2006 · Vol 16 (07) · pp. 1929-1950
Author(s): George D. Magoulas · Michael N. Vrahatis

Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, with an emphasis on first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
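As one concrete member of the family of first-order adaptive learning-rate schemes surveyed, the sketch below applies a simple "bold driver" style rule to batch (deterministic) gradient descent on a toy least-squares problem: the step size grows after a successful epoch and shrinks when the loss increases. The rule, data, and hyperparameters are illustrative assumptions, not the paper's specific algorithms.

```python
# Minimal sketch of a first-order adaptive-learning-rate scheme for batch
# gradient descent (a "bold driver" style rule); illustrative only.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)                        # weights being trained
lr, grow, shrink = 0.01, 1.1, 0.5      # adaptive-rate hyperparameters
prev_loss = 0.5 * np.mean((X @ w - y) ** 2)

for epoch in range(200):
    grad = X.T @ (X @ w - y) / len(y)           # batch (deterministic) gradient
    w_trial = w - lr * grad
    trial_loss = 0.5 * np.mean((X @ w_trial - y) ** 2)
    if trial_loss < prev_loss:
        w, prev_loss = w_trial, trial_loss      # accept the step, speed up
        lr *= grow
    else:
        lr *= shrink                            # reject the step, slow down

print("learned weights:", np.round(w, 2), "final loss:", round(prev_loss, 4))
```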

