training algorithm
Recently Published Documents


TOTAL DOCUMENTS: 790 (FIVE YEARS: 172)

H-INDEX: 37 (FIVE YEARS: 5)

2022 · Vol 13 (1) · pp. 0-0

This paper develops a novel image compression model with four major phases: (i) segmentation, (ii) feature extraction, (iii) ROI classification, and (iv) compression. The image is segmented into two regions by an adaptive ACM. Since the ACM produces two regions, the model supports a separate ROI classification phase: GLCM features are extracted from the segmented parts and classified by an NN that uses a new training algorithm. As the main novelty, JA and WOA are merged into J-WOA, which tunes the ACM (weighting factor and maximum iteration) and the NN training algorithm, in which the weights are optimized. The resulting model is referred to as J-WOA-NN. The classifier identifies the ROI regions exactly; during compression, the ROI regions are handled by the JPEG-LS algorithm while the non-ROI regions are handled by a wavelet-based lossy compression algorithm. Finally, decompression is carried out by the same process in reverse.
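
The ROI/non-ROI routing in the compression phase can be illustrated with a short sketch. The following Python fragment is only an illustration of that idea, assuming a binary ROI mask is already available from the ACM and NN stages; the JPEG-LS step is a placeholder, and the lossy path is approximated by simple wavelet-coefficient thresholding rather than the paper's actual codec.

```python
# Minimal sketch of routing ROI and non-ROI regions to different codecs,
# assuming a binary ROI mask from the segmentation/classification stages.
# "roi_part.tobytes()" only stands in for a real JPEG-LS encoder.
import numpy as np
import pywt

def compress_non_roi_lossy(region: np.ndarray, wavelet: str = "db2", keep: float = 0.05):
    """Keep only the largest `keep` fraction of wavelet coefficients (lossy)."""
    coeffs = pywt.wavedec2(region, wavelet, level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0            # discard small coefficients
    return arr, slices                          # what would then be entropy-coded

def hybrid_compress(image: np.ndarray, roi_mask: np.ndarray):
    """Send ROI pixels to a lossless path and non-ROI pixels to a lossy path."""
    roi_part = np.where(roi_mask, image, 0).astype(np.uint8)
    non_roi_part = np.where(roi_mask, 0, image).astype(float)
    roi_stream = roi_part.tobytes()             # placeholder for JPEG-LS
    non_roi_stream = compress_non_roi_lossy(non_roi_part)
    return roi_stream, non_roi_stream
```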


2022 · pp. 227-241
Author(s): Kuruge Darshana Abeyrathna, Chawalit Jeenanunta

This research proposes a new training algorithm for artificial neural networks (ANNs) to improve short-term load forecasting (STLF) performance. The proposed algorithm overcomes the well-known training issue in ANNs, namely getting trapped in local minima, by applying genetic algorithm (GA) operations within particle swarm optimization (PSO) whenever it converges to a local minimum. The training ability of the hybridized algorithm is evaluated using load data gathered by the Electricity Generating Authority of Thailand. The ANN is trained with one year of data using the new algorithm to forecast all 48 periods of each day in 2013. During the testing phase, the mean absolute percentage error (MAPE) is used to evaluate the performance of the hybridized training algorithm and compare it with the MAPEs from backpropagation, GA, and PSO. The yearly average MAPE and the average MAPEs for weekdays, Mondays, weekends, holidays, and bridging holidays show that the PSO+GA algorithm outperforms the other training algorithms for STLF.
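
The MAPE criterion used in the comparison is straightforward to reproduce. The sketch below assumes `actual` and `forecast` are hypothetical arrays holding the 48 load periods of one day, with nonzero actual values.

```python
# A small sketch of the MAPE evaluation used in the testing phase.
import numpy as np

def mape(actual, forecast) -> float:
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

# The yearly average MAPE is then just the mean of the daily values:
# yearly_mape = np.mean([mape(a, f) for a, f in zip(actual_days, forecast_days)])
```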


2021 · Vol 12 (1) · pp. 292
Author(s): Yunyong Ko, Sang-Wook Kim

The recent unprecedented success of deep learning (DL) in various fields is underpinned by its use of large-scale data and models. Training a large-scale deep neural network (DNN) model with large-scale data, however, is time-consuming. To speed up the training of massive DNN models, data-parallel distributed training based on the parameter server (PS) has been widely applied. In general, synchronous PS-based training suffers from synchronization overhead, especially in heterogeneous environments. To reduce this overhead, asynchronous PS-based training employs asynchronous communication between the PS and the workers, so that the PS processes each worker's request independently without waiting. Despite its performance improvement, however, asynchronous training inevitably introduces differences among the workers' local models, and such differences may slow model convergence. To address this problem, this work proposes a novel asynchronous PS-based training algorithm, SHAT, which considers (1) the scale of distributed training and (2) the heterogeneity among workers in order to reduce the differences among the workers' local models. An extensive empirical evaluation demonstrates that (1) the model trained by SHAT converges to up to 5.22% higher accuracy than state-of-the-art algorithms, and (2) the model convergence of SHAT is robust under various heterogeneous environments.
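
The asynchronous PS interaction described above can be sketched as follows. This is a generic illustration, not SHAT itself: the paper's scale- and heterogeneity-aware update is not specified here, and the staleness-based damping factor `mix` is purely a hypothetical stand-in.

```python
# Generic asynchronous parameter-server sketch: the server applies each
# worker's update as it arrives, with no synchronization barrier.
import numpy as np

class ParameterServer:
    def __init__(self, dim: int, lr: float = 0.01):
        self.params = np.zeros(dim)
        self.lr = lr
        self.version = 0

    def pull(self):
        return self.params.copy(), self.version

    def push(self, grad: np.ndarray, worker_version: int):
        staleness = self.version - worker_version
        mix = 1.0 / (1.0 + staleness)          # damp stale updates (assumption, not SHAT)
        self.params -= self.lr * mix * grad    # apply immediately, no waiting
        self.version += 1

def worker_step(ps: ParameterServer, batch_x: np.ndarray, batch_y: np.ndarray):
    params, version = ps.pull()                # independent of other workers
    residual = batch_x @ params - batch_y      # e.g. a linear model's residual
    grad = 2.0 * batch_x.T @ residual / len(batch_y)
    ps.push(grad, version)
```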


2021 · Vol 6 (4 (114)) · pp. 21-27
Author(s): Vasyl Lytvyn, Roman Peleshchak, Ivan Peleshchak, Oksana Cherniak, Lyubomyr Demkiv

Large, structured neural networks are used in computer systems for recognizing distorted images. One such neural network that can completely restore a distorted image is a fully connected pseudospin (dipole) neural network, which possesses associative memory. When an image is submitted to its input, it automatically selects and outputs the stored image that is closest to the input one. The image is stored in the neural network's memory within the Hopfield paradigm, which makes it possible to memorize and reproduce arrays of information that have their own internal structure. In order to reduce learning time, the size of the neural network is minimized by simplifying its structure using one of two approaches: the first is based on «regularization», while the second removes synaptic connections from the neural network. In this work, the structure of a fully connected dipole neural network is simplified based on the dipole-dipole interaction between the nearest adjacent neurons of the network. It is proposed to minimize the size of the neural network by keeping only dipole-dipole synaptic connections between the nearest neurons, which reduces the computational time required to recognize distorted images. The relation for the weight coefficients of synaptic connections between neurons in the dipole approximation has been derived. A training algorithm has been built for a dipole neural network with sparse synaptic connections, based on the dipole-dipole interaction between the nearest neurons. A computer experiment showed that the neural network with sparse dipole connections recognizes distorted images (digits from 0 to 9, represented with 25 pixels) three times faster than a fully connected neural network.
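
The sparse-connection idea can be illustrated with a compact Hopfield-style sketch: patterns are stored with Hebbian weights, couplings are then restricted to nearest neighbors, and a distorted pattern is recalled by sign updates. The paper's derived dipole-dipole weight relation is not reproduced; the index-distance mask below is only an illustrative stand-in.

```python
# Hopfield-style storage and recall with synapses restricted to nearest neighbors.
import numpy as np

def hebbian_weights(patterns: np.ndarray) -> np.ndarray:
    """patterns: (num_patterns, num_neurons) array with +/-1 entries."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def sparsify_to_neighbors(w: np.ndarray, radius: int = 1) -> np.ndarray:
    """Keep only couplings between neurons whose indices differ by <= radius."""
    idx = np.arange(w.shape[0])
    mask = np.abs(idx[:, None] - idx[None, :]) <= radius
    return np.where(mask, w, 0.0)

def recall(w: np.ndarray, distorted: np.ndarray, sweeps: int = 20) -> np.ndarray:
    """Asynchronous sign updates until the distorted pattern settles."""
    s = distorted.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s
```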


2021 · Vol 233 · pp. 107509
Author(s): Cheng Tang, Yuki Todo, Junkai Ji, Qiuzhen Lin, Zheng Tang

Author(s): Morteza Jouyban, Mahdie Khorashadizade

In this paper we propose a novel procedure for training a feedforward neural network. Once a proper structure has been determined for a given problem, the accuracy of the network's outputs depends on choosing the appropriate method for determining the best weights, that is, the appropriate training algorithm. If the training algorithm starts from a good starting point, it is several steps closer to reaching the global optimum. We present an optimization strategy for selecting the initial population and determining the optimal weights with the aim of minimizing the neural network error. Teaching-learning-based optimization (TLBO) has fewer parameters than other evolutionary algorithms, so it is easier to implement. We improve this algorithm to increase its efficiency and to balance global and local search. The improved teaching-learning-based optimization (ITLBO) algorithm adds the concept of a neighborhood to the basic algorithm, which strengthens its global search ability. Using an initial population that includes the best cluster centers obtained by clustering with a modified k-means algorithm also helps the algorithm reach the global optimum. The results are promising, close to optimal, and better than those of the other approaches with which we compared the proposed algorithm.
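
A compact sketch of the TLBO teacher phase that drives this kind of weight search is given below; the neighborhood mechanism of ITLBO and the modified k-means initialization are not reproduced, and `fitness` is assumed to be the network error function (lower is better).

```python
# Basic TLBO teacher phase over a population of candidate weight vectors.
import numpy as np

def tlbo_teacher_phase(population: np.ndarray, fitness, rng=None) -> np.ndarray:
    """population: (pop_size, dim) candidate weight vectors."""
    if rng is None:
        rng = np.random.default_rng()
    scores = np.array([fitness(ind) for ind in population])
    teacher = population[np.argmin(scores)]        # best learner acts as teacher
    mean = population.mean(axis=0)
    tf = rng.integers(1, 3)                        # teaching factor in {1, 2}
    candidates = population + rng.random(population.shape) * (teacher - tf * mean)
    new_scores = np.array([fitness(ind) for ind in candidates])
    improved = new_scores < scores                 # greedy selection
    population = population.copy()
    population[improved] = candidates[improved]
    return population
```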


2021
Author(s): Matheus Dias Gama, Stéphane Julia, Rita Maria Silva Julia

The complexity of the distribution policy of distributed algorithms frequently makes them extremely resistant to well-established mathematical approaches to performance analysis, such as asymptotic analysis, recurrence techniques, and probabilistic analysis. This is because those methods do not provide adequate means to evaluate how a gradual increase in the number of processors impacts the algorithm's execution time. This article therefore proposes a visual and formal approach, based on automatic simulations of Hierarchical Colored Petri Net models in the Colored Petri Nets Tools (CPN Tools) graphical environment, to evaluate the speedup and the processor saturation point of distributed algorithms used in Artificial Intelligence. The training algorithm for Multilayer Perceptrons based on error backpropagation is used as a case study.
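
Assuming the execution times for increasing processor counts have already been collected from the CPN Tools simulations, the speedup and saturation-point analysis itself reduces to a few lines; the 5% gain threshold below is an arbitrary illustrative choice.

```python
# Speedup S(p) = T(1) / T(p) and a simple saturation-point heuristic over
# simulated execution times keyed by processor count.
def speedups(time_by_procs: dict) -> dict:
    t1 = time_by_procs[1]
    return {p: t1 / t for p, t in sorted(time_by_procs.items())}

def saturation_point(speedup_by_procs: dict, gain_threshold: float = 0.05) -> int:
    """First processor count after which adding processors gains < threshold."""
    items = sorted(speedup_by_procs.items())
    for (p_prev, s_prev), (_p, s) in zip(items, items[1:]):
        if (s - s_prev) / s_prev < gain_threshold:
            return p_prev
    return items[-1][0]
```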


2021
Author(s): Shuaijun Li, Jia Lu

A self-training algorithm can quickly train a supervised classifier from a few labeled samples and many unlabeled samples. However, self-training is often affected by mislabeled samples, and local noise filters have been proposed to detect them. Nevertheless, current local noise filters have two problems: (a) they ignore the spatial distribution of the nearest neighbors in different classes, and (b) they do not perform well when mislabeled samples are located in overlapping areas of different classes. To address these challenges, a new self-training algorithm based on density peaks combined with a globally adaptive multi-local noise filter (STDP-GAMNF) is proposed. First, the spatial structure of the data set is revealed by density peak clustering and used to help self-training label the unlabeled samples. Then, after each labeling epoch, GAMLNF comprehensively judges whether a sample from any class is mislabeled, which effectively reduces the influence of edge samples. Experimental results on eighteen real-world data sets demonstrate that GAMLNF is not sensitive to the value of the neighbor parameter k and can adaptively find the appropriate number of neighbors for each class.
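
A generic self-training loop helps make the setting concrete. The density-peak guidance and the GAMLNF filter from the paper are represented only by the hypothetical `noise_filter` hook, and the base classifier is assumed to expose `predict_proba`.

```python
# Generic self-training: pseudo-label high-confidence unlabeled samples,
# optionally filter suspected mislabels, and retrain on the enlarged set.
import numpy as np
from sklearn.base import clone

def self_training(base_clf, X_l, y_l, X_u, noise_filter=None,
                  confidence=0.9, max_epochs=10):
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    clf = clone(base_clf).fit(X_l, y_l)
    for _ in range(max_epochs):
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)
        pick = proba.max(axis=1) >= confidence      # high-confidence pseudo-labels
        if not pick.any():
            break
        new_X = X_u[pick]
        new_y = clf.classes_[proba[pick].argmax(axis=1)]
        if noise_filter is not None:                # e.g. drop suspected mislabels
            keep = noise_filter(X_l, y_l, new_X, new_y)
            new_X, new_y = new_X[keep], new_y[keep]
        X_l = np.vstack([X_l, new_X])
        y_l = np.concatenate([y_l, new_y])
        X_u = X_u[~pick]
        clf = clone(base_clf).fit(X_l, y_l)         # retrain on the enlarged set
    return clf
```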

