A study of nature-inspired optimization algorithms applied to the training of artificial neural networks
Artificial Neural Networks (ANNs) are a popular machine learning and artificial intelligence technique, first proposed in the 1950s. Among their greatest challenges is the training of parameters such as weights, activation-function parameters, and constants, as well as their hyperparameters, such as network architecture and the number of neurons per layer. Among the best-known algorithms for the parametric optimization of networks are Adam and backpropagation (BP), applied mainly in popular architectures such as MLP, RNN, LSTM, Feed-forward Neural Networks (FNN), and RBFNN, among many others. Recently, the great success of deep neural networks, known as Deep Learning, as well as of fully connected networks, has run into problems with training time and the need for specialized hardware. These challenges gave new impetus to the use of optimization algorithms for the training of these networks, and more recently to algorithms inspired by nature, also called nature-inspired (NI) algorithms. This strategy, although not a recent technique, has not yet received much attention from researchers and still requires a greater number of experimental tests and evaluations, mainly due to the recent appearance of a much larger range of NI algorithms. Some of the elements that need attention, especially for the most recent NI algorithms, relate mainly to convergence time and to studies on the use of different cost functions. Thus, the present master's dissertation aims to perform tests, comparisons, and studies on NI algorithms applied to the training of neural networks. Both traditional and recent NI algorithms were tested from many perspectives, including convergence time and cost functions, elements that until now have received little attention from researchers in previous tests.
The results showed that the use of NI algorithms for the training of traditional ANNs achieved good classification performance, similar to popular algorithms such as Adam and BPMA, while surpassing these algorithms in convergence time by 20% up to 70%, depending on the network and the parameters involved. This indicates that the strategy of using NI algorithms, especially the most recent ones, for training neural networks is a promising method that can impact the time and quality of the results of current and future machine learning and artificial intelligence applications.
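To make the training strategy described above concrete, the sketch below shows how a classical NI algorithm, Particle Swarm Optimization (PSO), can optimize the weights of a small neural network directly from a cost function, with no gradient computation. This is an illustrative example only, not the dissertation's experimental setup: the 2-2-1 network, the XOR task, and the PSO hyperparameters (swarm size, inertia `w`, coefficients `c1`, `c2`) are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, solvable by a tiny 2-2-1 MLP (assumed setup, not from the dissertation)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def unpack(theta):
    # Flatten all 9 network parameters into one vector the optimizer can search
    W1 = theta[0:4].reshape(2, 2)  # hidden weights
    b1 = theta[4:6]                # hidden biases
    w2 = theta[6:8]                # output weights
    b2 = theta[8]                  # output bias
    return W1, b1, w2, b2

def mse_loss(theta):
    # Cost function seen by PSO: forward pass + mean squared error
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    out = h @ w2 + b2
    return np.mean((out - y) ** 2)

def pso(loss, dim=9, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Standard PSO: each particle is a full candidate weight vector
    pos = rng.uniform(-2, 2, (swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest = pos.copy()
    pbest_val = np.array([loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    gbest_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((swarm, dim))
        r2 = rng.random((swarm, dim))
        # Velocity update: inertia + pull toward personal and global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < gbest_val:
            gbest_val = pbest_val.min()
            gbest = pbest[pbest_val.argmin()].copy()
    return gbest, gbest_val

theta, final_loss = pso(mse_loss)
print(f"final MSE on XOR: {final_loss:.4f}")
```

Because the network is only queried as a black-box cost function, swapping in a different NI algorithm, or a different cost function as studied in the dissertation, only requires replacing `pso` or `mse_loss`.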