A new efficient training strategy for deep neural networks by hybridization of artificial bee colony and limited-memory BFGS optimization algorithms

2017
Vol 266
pp. 506-526
Author(s):
Hasan Badem
Alper Basturk
Abdullah Caliskan
Mehmet Emin Yuksel
2020
pp. 1-13
Author(s):
Gokul Chandrasekaran
P.R. Karthikeyan
Neelam Sanjeev Kumar
Vanchinathan Kumarasamy

Test scheduling of System-on-Chip (SoC) designs is a major problem addressed by various optimization techniques to minimize cost and testing time. In this paper, we propose the application of the Dragonfly and Ant Lion Optimization algorithms to minimize the test cost and test time of SoCs. The swarming behavior of the Dragonfly algorithm and the hunting behavior of the Ant Lion algorithm are used to optimize the scheduling time on the benchmark circuits. The proposed algorithms are tested on the p22810 and d695 ITC’02 SoC benchmark circuits. Their results are compared with those of other algorithms such as Ant Colony Optimization, Modified Ant Colony Optimization, Artificial Bee Colony, Modified Artificial Bee Colony, Firefly, Modified Firefly, and Bat algorithms to highlight the benefits in test time minimization. With a TAM width of 64, the test time obtained is 0.013188 s for d695 and 0.013515 s for p22810 with the Dragonfly algorithm, and 0.013432 s for d695 and 0.013711 s for p22810 with the Ant Lion algorithm, which is lower than that of the other well-known optimization algorithms.
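As a rough illustration of how such a scheduling problem can be driven by a swarm optimizer, the sketch below encodes a toy core-to-TAM-partition assignment with random keys and searches it with a basic PSO-style swarm. The core test times, the number of partitions, and the PSO stand-in for the actual Dragonfly and Ant Lion update rules are assumptions for illustration, not the paper's setup.

```python
# Hypothetical sketch: SoC test scheduling cast as a continuous optimization
# problem via a random-keys encoding, driven here by a basic PSO-style swarm
# as a stand-in for the Dragonfly / Ant Lion updates described in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy core test times (arbitrary units) and number of TAM partitions -- illustrative only.
core_test_times = np.array([1100, 900, 750, 1300, 600, 980, 870, 1200])
n_partitions = 3

def test_time(keys):
    """Decode random keys into a core->partition assignment and return the
    makespan (time of the busiest TAM partition), which we minimize."""
    assignment = (np.clip(keys, 0.0, 1.0 - 1e-9) * n_partitions).astype(int)
    loads = np.zeros(n_partitions)
    for core, part in enumerate(assignment):
        loads[part] += core_test_times[core]
    return loads.max()

# Basic swarm search (PSO-style) over the random-key vector.
n_particles, n_iters = 30, 200
pos = rng.random((n_particles, len(core_test_times)))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([test_time(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([test_time(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan (toy units):", pbest_val.min())
```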


2021
Vol 13 (1)
Author(s):
Tiago Pereira
Maryam Abbasi
Bernardete Ribeiro
Joel P. Arrais

In this work, we explore the potential of deep learning to streamline the process of identifying new potential drugs through the computational generation of molecules with interesting biological properties. Two deep neural networks compose our targeted generation framework: the Generator, which is trained to learn the building rules of valid molecules using the SMILES string notation, and the Predictor, which evaluates the newly generated compounds by predicting their affinity for the desired target. The Generator is then optimized through Reinforcement Learning to produce molecules with bespoke properties. The innovation of this approach is the exploratory strategy applied during the reinforcement training process, which seeks to add novelty to the generated compounds. This training strategy employs two Generators interchangeably to sample new SMILES: the initially trained model, which remains fixed, and a copy of it, which is updated during training to uncover the most promising molecules. The evolution of the reward assigned by the Predictor determines how often each one is employed to select the next token of the molecule. This strategy establishes a compromise between the need to acquire more information about the chemical space and the need to sample new molecules using the experience gained so far. To demonstrate the effectiveness of the method, the Generator is trained to design molecules with an optimized partition coefficient as well as high inhibitory power against the adenosine $A_{2A}$ and $\kappa$ opioid receptors. The results reveal that the model can effectively steer the newly generated molecules in the desired direction. More importantly, it was possible to find promising sets of unique and diverse molecules, which was the main purpose of the newly implemented strategy.
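A minimal sketch of the alternating-generator sampling idea follows, assuming dummy next-token distributions in place of the trained Generator networks and a fabricated reward history in place of the Predictor; the token set, the exploration-probability schedule, and the function names are illustrative assumptions only.

```python
# Hypothetical sketch of the exploratory sampling idea: two generators take
# turns proposing the next SMILES token, with the mixing probability driven
# by how the Predictor reward has been evolving. The token set, reward values,
# and dummy "generators" below are placeholders, not the authors' models.
import random

TOKENS = list("CNO()=12") + ["<eos>"]

def fixed_generator(prefix):
    """Stand-in for the frozen, pretrained Generator: uniform next-token dist."""
    return {t: 1.0 / len(TOKENS) for t in TOKENS}

def updated_generator(prefix):
    """Stand-in for the RL-updated copy: biased toward ring/branch tokens."""
    probs = {t: 1.0 for t in TOKENS}
    for t in "()12":
        probs[t] = 3.0
    z = sum(probs.values())
    return {t: p / z for t, p in probs.items()}

def sample_token(dist):
    r, acc = random.random(), 0.0
    for tok, p in dist.items():
        acc += p
        if r < acc:
            return tok
    return "<eos>"

def sample_smiles(explore_prob, max_len=40):
    """Build one sequence, choosing per token which generator to query."""
    seq = []
    for _ in range(max_len):
        gen = fixed_generator if random.random() < explore_prob else updated_generator
        tok = sample_token(gen(seq))
        if tok == "<eos>":
            break
        seq.append(tok)
    return "".join(seq)

# The exploration probability shrinks as the (here: fake) reward trend improves,
# trading off chemical-space exploration against exploiting the updated model.
reward_history = [0.2, 0.35, 0.5, 0.62, 0.7]
trend = reward_history[-1] - reward_history[0]
explore_prob = max(0.1, 0.5 - trend)  # illustrative schedule only

print(sample_smiles(explore_prob))
```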


2020
Vol 8 (4)
pp. 469
Author(s):
I Gusti Ngurah Alit Indrawan
I Made Widiartha

Artificial Neural Networks (ANNs) are a branch of artificial intelligence often used to solve problems in fields that involve grouping and pattern recognition. This research aims to classify the Letter Recognition dataset using an Artificial Neural Network whose weights are optimized with the Artificial Bee Colony algorithm. The best classification accuracy obtained in this study was 92.85%, using a combination of 4 hidden layers with 10 neurons in each hidden layer.
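The sketch below shows, under stated assumptions, how ABC can optimize the weight vector of a small feed-forward network: the synthetic two-class data, the single 10-neuron hidden layer, and the colony parameters are placeholders rather than the paper's Letter Recognition setup with four hidden layers.

```python
# Minimal sketch, assuming a small MLP whose flattened weight vector is tuned
# by a basic Artificial Bee Colony loop (employed / onlooker / scout phases)
# on toy data. All sizes and parameters are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class data; the paper uses the Letter Recognition dataset instead.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

n_in, n_hidden, n_out = 4, 10, 2
dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

def unpack(w):
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = w[i:i + n_out]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    return np.mean(logits.argmax(axis=1) != y)   # classification error

n_food, limit, n_cycles = 20, 30, 200
foods = rng.uniform(-1, 1, size=(n_food, dim))
costs = np.array([loss(f) for f in foods])
trials = np.zeros(n_food, dtype=int)

def try_neighbor(i):
    """Perturb one random dimension toward/away from a random partner (ABC move)."""
    k, j = rng.integers(n_food), rng.integers(dim)
    cand = foods[i].copy()
    cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    c = loss(cand)
    if c < costs[i]:
        foods[i], costs[i], trials[i] = cand, c, 0
    else:
        trials[i] += 1

for _ in range(n_cycles):
    for i in range(n_food):              # employed bee phase
        try_neighbor(i)
    fit = 1.0 / (1.0 + costs)            # onlooker phase: fitness-proportional choice
    for i in rng.choice(n_food, size=n_food, p=fit / fit.sum()):
        try_neighbor(i)
    worn = trials > limit                # scout phase: abandon exhausted sources
    foods[worn] = rng.uniform(-1, 1, size=(worn.sum(), dim))
    costs[worn] = [loss(f) for f in foods[worn]]
    trials[worn] = 0

print("best training error:", costs.min())
```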


Author(s):  
Derya Soydaner

In recent years, we have witnessed the rise of deep learning. Deep neural networks have proved their success in many areas. However, the optimization of these networks has become more difficult as neural networks grow deeper and datasets become larger. Therefore, more advanced optimization algorithms have been proposed over the past years. In this study, widely used optimization algorithms for deep learning are examined in detail. To this end, these algorithms, called adaptive gradient methods, are implemented for both supervised and unsupervised tasks. The behavior of the algorithms during training, and their results on four image datasets, namely MNIST, CIFAR-10, Kaggle Flowers and Labeled Faces in the Wild, are compared by pointing out their differences against basic optimization algorithms.
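As a small, self-contained example of the kind of adaptive gradient method examined here, the sketch below contrasts plain gradient descent with the standard Adam update on a toy least-squares problem; the quadratic objective and the hyperparameters are assumptions for illustration, not the study's neural-network experiments.

```python
# Minimal sketch contrasting plain gradient descent with Adam, one widely used
# adaptive gradient method, on a toy quadratic objective.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def grad(theta):
    # Gradient of f(theta) = 0.5 * ||A theta - b||^2
    return A.T @ (A @ theta - b)

# Plain (full-batch) gradient descent with a fixed step size.
theta = np.zeros(5)
for _ in range(500):
    theta -= 0.01 * grad(theta)

# Adam: per-parameter step sizes from running moment estimates.
theta_adam = np.zeros(5)
m, v = np.zeros(5), np.zeros(5)
lr, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8
for t in range(1, 501):
    g = grad(theta_adam)
    m = beta1 * m + (1 - beta1) * g          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g * g      # second-moment (uncentered) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("GD loss  :", 0.5 * np.sum((A @ theta - b) ** 2))
print("Adam loss:", 0.5 * np.sum((A @ theta_adam - b) ** 2))
```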


2018
Vol 17 (04)
pp. 1007-1046
Author(s):
Mohsen Moradi
Samad Nejatian
Hamid Parvin
Vahideh Rezaie

Swarm intelligence optimization algorithms are widely used for static purposes and applications, and they solve static optimization problems successfully. However, most recent real-world optimization problems have a dynamic nature, so an optimization algorithm is also required to solve problems in dynamic environments. Dynamic optimization problems are those whose solutions change over time. The Artificial Bee Colony algorithm is one of the swarm intelligence optimization algorithms. In this study, a clustering- and memory-based chaotic Artificial Bee Colony algorithm, denoted CMCABC, is proposed for solving dynamic optimization problems. A chaotic system provides a more accurate prediction of the future in real-world applications than a random system, because chaotic behaviors have emerged in the real world whereas random behaviors have not been observed. In the proposed CMCABC method, explicit memory is used to save previous good solutions that are not very old. Maintaining diversity in dynamic environments is one of the fundamental challenges in solving dynamic optimization problems, and the clustering technique used in the proposed method maintains the diversity of the problem environment well. The proposed CMCABC method has been tested on the moving peaks benchmark (MPB), a good simulator for evaluating the efficiency of optimization algorithms in dynamic environments. The experimental results on the MPB reveal the appropriate efficiency of the proposed CMCABC method compared to other state-of-the-art methods in solving dynamic optimization problems.
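The sketch below illustrates, under assumptions, two of the ingredients described here: a logistic chaotic map in place of uniform random numbers inside the ABC neighbour move, and a small explicit memory of recent good solutions that is re-seeded after an environment change. The toy moving-peak objective stands in for the MPB, and the clustering component is omitted for brevity.

```python
# Hypothetical sketch of two CMCABC ingredients: a logistic chaotic map driving
# the ABC neighbour move, and an explicit memory of recent good solutions that
# is re-injected after an environment change. Toy stand-in for the MPB.
import numpy as np

rng = np.random.default_rng(3)
dim, n_food, n_cycles, change_every = 2, 15, 120, 30

peak = rng.uniform(-5, 5, size=dim)          # current peak position (moves over time)

def cost(x):
    return np.linalg.norm(x - peak)          # minimise distance to the moving peak

# Logistic map: a chaotic, deterministic source of numbers in (0, 1).
chaos = 0.7
def chaotic():
    global chaos
    chaos = 4.0 * chaos * (1.0 - chaos)
    return chaos

foods = rng.uniform(-5, 5, size=(n_food, dim))
costs = np.array([cost(f) for f in foods])
memory = []                                   # explicit memory of recent good solutions

for cycle in range(n_cycles):
    if cycle > 0 and cycle % change_every == 0:
        peak = rng.uniform(-5, 5, size=dim)   # environment change
        costs = np.array([cost(f) for f in foods])
        for j, sol in enumerate(memory):      # re-seed a few food sources from memory
            foods[j] = sol
            costs[j] = cost(sol)

    for i in range(n_food):                   # employed-bee phase with chaotic step
        k, d = rng.integers(n_food), rng.integers(dim)
        cand = foods[i].copy()
        cand[d] += (2.0 * chaotic() - 1.0) * (foods[i, d] - foods[k, d])
        c = cost(cand)
        if c < costs[i]:
            foods[i], costs[i] = cand, c

    best = foods[costs.argmin()].copy()       # keep only the most recent good solutions
    memory = (memory + [best])[-3:]

print("final best cost:", costs.min())
```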

