Backpropagation Algorithm
Recently Published Documents


TOTAL DOCUMENTS: 443 (last five years: 121)
H-INDEX: 30 (last five years: 5)

Author(s): Vladimir Milic, Srecko Arandia-Kresic, Mihael Lobrovic

This paper is concerned with the synthesis of a proportional–integral–derivative (PID) controller according to the [Formula: see text] optimality criterion for a seesaw-cart system. The equations of dynamics are obtained by modelling the seesaw-cart system, actuated by a direct-current motor via a rack-and-pinion mechanism, using the Euler–Lagrange approach. The obtained model is linearised, and a PID controller is synthesised for the linear model. An algorithm based on the sub-gradient method, the Newton method, the self-adapting backpropagation algorithm and the Adams method is proposed to calculate the PID controller gains. The proposed control strategy is tested and compared with a standard linear matrix inequality (LMI)-based method in computer simulations and experimentally on a laboratory model.
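The gain-tuning idea above can be illustrated with a minimal sketch: tune the PID gains by gradient descent on a simulated closed-loop cost. This is not the authors' algorithm (which combines the sub-gradient, Newton, backpropagation and Adams methods); the toy first-order plant, the integral-squared-error cost and all step sizes below are illustrative assumptions.

```python
import numpy as np

def simulate_cost(gains, steps=200, dt=0.01):
    """Closed-loop cost of a PID controller on a toy first-order plant
    dx/dt = -x + u, tracking a unit-step reference."""
    kp, ki, kd = gains
    x, integ, prev_e = 0.0, 0.0, 1.0
    cost = 0.0
    for _ in range(steps):
        e = 1.0 - x                    # tracking error
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        x += dt * (-x + u)             # Euler step of the plant
        cost += e * e * dt             # integral-squared-error criterion
        prev_e = e
    return cost

def tune_pid(gains, lr=0.2, iters=50, h=1e-4):
    """Tune (kp, ki, kd) by finite-difference gradient descent on the cost."""
    g = np.array(gains, dtype=float)
    for _ in range(iters):
        grad = np.zeros(3)
        for i in range(3):
            gp, gm = g.copy(), g.copy()
            gp[i] += h
            gm[i] -= h
            grad[i] = (simulate_cost(gp) - simulate_cost(gm)) / (2 * h)
        g -= lr * grad
    return g

tuned = tune_pid([1.0, 0.0, 0.0])
```

Any of the gradient-based methods the paper combines could replace the plain descent step; the structure (simulate, differentiate the cost with respect to the gains, update) stays the same.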


Athenea, 2021, Vol. 2 (5), pp. 29-34
Author(s): Alexander Caicedo, Anthony Caicedo

The era of the technological revolution increasingly encourages the development of technologies that facilitate people's daily activities, generating great advances in information processing. The purpose of this work is to implement a neural network that classifies a person's emotional states from human gestures. A database of students from the PUCE-E School of Computer Science and Engineering is used; it contains images of the students' gestures, against which the input data are compared. The project is implemented as a multilayer neural network. Multilayer feedforward neural networks possess a number of properties that make them particularly suitable for complex pattern-classification problems [8]. Back-propagation [4], the backpropagation algorithm applied to a feedforward neural network, was used to solve the emotion-classification task.

Keywords: image processing, neural networks, gestures, back-propagation, feedforward, classification, emotions.

References
[1] S. Gangwar, S. Shukla, D. Arora. "Human Emotion Recognition by Using Pattern Recognition Network", Journal of Engineering Research and Applications, Vol. 3, Issue 5, pp. 535-539, 2013.
[2] K. Rohit. "Back Propagation Neural Network based Emotion Recognition System", International Journal of Engineering Trends and Technology (IJETT), Vol. 22, No. 4, 2015.
[3] S. Eishu, K. Ranju, S. Malika. "Speech Emotion Recognition using BFO and BPNN", International Journal of Advances in Science and Technology (IJAST), ISSN 2348-5426, Vol. 2, Issue 3, 2014.
[4] A. Fiszelew, R. García-Martínez, T. de Buenos Aires. "Generación automática de redes neuronales con ajuste de parámetros basado en algoritmos genéticos" [Automatic generation of neural networks with parameter tuning based on genetic algorithms], Revista del Instituto Tecnológico de Buenos Aires, 26, pp. 76-101, 2002.
[5] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, L. Jackel. "Handwritten digit recognition with a back-propagation network", in Advances in Neural Information Processing Systems, pp. 396-404, 1990.
[6] G. Bebis, M. Georgiopoulos. "Feed-forward neural networks", IEEE Potentials, 13(4), pp. 27-31, 1994.
[7] G. Huang, Q. Zhu, C. Siew. "Extreme learning machine: a new learning scheme of feedforward neural networks", in Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, Vol. 2, pp. 985-990, 2004.
[8] D. Montana, L. Davis. "Training Feedforward Neural Networks Using Genetic Algorithms", in IJCAI, Vol. 89, pp. 762-767, 1989.
[9] I. Sutskever, O. Vinyals, Q. Le. "Sequence to sequence learning with neural networks", in Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
[10] J. Schmidhuber. "Deep learning in neural networks: An overview", Neural Networks, 61, pp. 85-117, 2015.
[11] R. Santos, M. Rupp, S. Bonzi, A. Fileti. "Comparación entre redes neuronales feedforward de múltiples capas y una red de función radial para detectar y localizar fugas en tuberías que transportan gas" [Comparison between multilayer feedforward neural networks and a radial-basis-function network for detecting and locating leaks in gas pipelines], Chemical Engineering Transactions, 32, pp. 1375-1380, 2013.
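The kind of multilayer feedforward network trained with back-propagation that this entry describes can be sketched minimally. The network below is not the authors' model: the XOR toy data stands in for the gesture-image features, and the layer size, learning rate and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for gesture features: XOR, a classic problem that a
# single-layer network cannot solve but a multilayer one can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, small random initial weights.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

A real gesture classifier would use image features as `X` and one output unit per emotion class, but the forward/backward structure is the same.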


2021, pp. 1-20
Author(s): Shao-Qun Zhang, Zhi-Hua Zhou

Abstract: Current neural networks are mostly built on the MP model, which formulates the neuron as applying an activation function to a real-valued weighted aggregation of the signals received from other neurons. This letter proposes the flexible transmitter (FT) model, a novel, biologically plausible neuron model with flexible synaptic plasticity. The FT model employs a pair of parameters to model the neurotransmitters between neurons and introduces a neuron-exclusive variable to record the regulated neurotrophin density. The FT model can thus be formulated as a two-variable, two-valued function, with the commonly used MP neuron model as a special case. This formulation makes the FT model biologically more realistic and capable of handling complicated data, including spatiotemporal data. To exhibit its power and potential, we present the flexible transmitter network (FTNet), built on the common fully connected feedforward architecture with the FT model as its basic building block. FTNet allows gradient calculation and can be trained by an improved backpropagation algorithm in the complex-valued domain. Experiments on a broad range of tasks show that FTNet has power and potential in processing spatiotemporal data. This study provides an alternative basic building block for neural networks and demonstrates the feasibility of developing artificial neural networks with neuronal plasticity.
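As a loose, illustrative reading of the FT idea (not the authors' exact formulation): a neuron that takes a stimulus and its own private memory, applies a complex-valued activation, and returns a pair (transmitted signal, new memory). The weights, initial memory and input sequence below are arbitrary.

```python
import numpy as np

def ft_neuron_step(x, m_prev, w, v):
    """Illustrative flexible-transmitter-style step: stimulus x enters
    through weight w, the neuron's private memory through v, and a
    complex-valued activation returns (output signal, new memory)."""
    z = np.tanh(w * x + 1j * v * m_prev)  # numpy's tanh accepts complex input
    return z.real, z.imag

# Run the neuron over a short sequence, carrying its memory forward.
m = 0.5                    # nonzero initial memory so the imaginary channel is active
signals = []
for x in [0.5, -1.0, 0.8, 0.1]:
    s, m = ft_neuron_step(x, m, w=1.2, v=0.7)
    signals.append(s)
```

The two-valued output is what distinguishes this from an MP neuron, which would return only the real-valued signal and keep no state between inputs.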


Author(s): Urszula Markowska-Kaczmar, Michał Kosturek

Abstract: Our research is devoted to answering whether randomisation-based learning can be fully competitive with classical feedforward neural networks trained with the backpropagation algorithm on classification and regression tasks. We chose extreme learning machines (ELM) as an example of randomisation-based networks. The models were evaluated with respect to training time and achieved performance. We conducted an extensive comparison of the two methods on various tasks in two scenarios: (i) using comparable network capacity and (ii) using network architectures tuned for each model. The comparison was conducted on multiple datasets from public repositories and on artificial datasets created for this research; overall, the experiments covered more than 50 datasets. Suitable statistical tests supported the results. They confirm that for relatively small datasets, ELM are better than networks trained by the backpropagation algorithm. But for demanding image datasets, like ImageNet, ELM is not competitive with modern networks trained by backpropagation; to fully address current practical needs in pattern recognition, ELM therefore needs further development. Based on our experience, we postulate developing smart algorithms for the inverse-matrix calculation, so that determining weights for challenging datasets becomes feasible and memory-efficient, and creating mechanisms that avoid keeping the whole dataset in memory while computing the weights. These are the most problematic elements of ELM processing and the main obstacle to widespread ELM application.
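The closed-form weight computation that makes ELM fast, and the memory cost highlighted above, can both be seen in a minimal sketch: the hidden layer is random and fixed, and only the output weights are solved, here by regularised least squares rather than a raw pseudoinverse. The toy regression task and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, Y, n_hidden=64, reg=1e-3):
    """Extreme learning machine: hidden weights are random and fixed;
    only the output weights beta are solved, in closed form."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # random feature map, one row per sample
    # Ridge-regularised least squares: beta = (H^T H + reg I)^-1 H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) from noisy samples.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X) + 0.05 * rng.normal(size=X.shape)
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta)
```

Note that the feature matrix `H` has one row per training sample, which is exactly why large datasets strain memory in ELM: the whole dataset's features must be materialised to solve for `beta`.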


Author(s): Cholid Fauzi, Aly Dzulfikar

Product sales forecasting lets companies estimate future sales levels from the previous year's sales data. An artificial neural network trained with the backpropagation algorithm can forecast next-period sales for each item in the company. The forecasting process begins by determining the variables needed in the network pattern; the established network pattern is then used in the network training process with the backpropagation algorithm. After training, the researchers compared several of the network patterns formed. This research analyses the forecasting of PT XYZ's spiral- and leaf-spring products. Forecasting was carried out on Toyota 48210-25290 R3 leaf springs using the artificial neural network backpropagation method with a learning rate of 0.1, four hidden layers, and an error threshold of 0.01. From the data-processing analysis carried out with the selected weight parameters, sales for April were predicted.
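The data-preparation step implied above, turning a sales history into supervised training pairs for a backpropagation network, can be sketched as follows. The monthly figures are hypothetical, not PT XYZ's data; min-max scaling and a three-month window are illustrative choices.

```python
import numpy as np

def make_windows(series, n_lags=3):
    """Turn a sales series into (lagged values -> next value) pairs,
    the supervised form a backpropagation network is trained on."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

# Hypothetical monthly sales figures (units), min-max scaled before training
# so they fall in the sigmoid-friendly range [0, 1].
sales = [120, 135, 128, 150, 162, 158, 171, 180, 176, 190, 205, 198]
s = np.array(sales, dtype=float)
s_norm = (s - s.min()) / (s.max() - s.min())
X, y = make_windows(s_norm)
```

Each row of `X` is three consecutive months and the matching entry of `y` is the month that follows; a network trained on these pairs, fed the last three observed months, yields the next month's forecast (which would then be rescaled back to units).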


2021
Author(s): Alpha Renner, Forrest Sheldon, Anatoly Zlotnik, Louis Tao, Andrew Sornborger

Abstract: The capabilities of natural neural systems have inspired new generations of machine learning algorithms as well as neuromorphic very large-scale integrated (VLSI) circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. In this study, we present a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing, implemented on Intel's Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset. To our knowledge, this is the first work to show a Spiking Neural Network (SNN) implementation of the backpropagation algorithm that is fully on-chip, without a computer in the loop. It is competitive in accuracy with off-chip trained SNNs and achieves an energy-delay product suitable for edge computing. This implementation shows a path for using in-memory, massively parallel neuromorphic processors for low-power, low-latency implementation of modern deep learning applications.
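A full on-chip spiking backpropagation circuit cannot be reproduced in a few lines, but the basic unit such an SNN is built from can: a leaky integrate-and-fire neuron. The time constant, threshold and input drive below are arbitrary illustrative values, not Loihi's parameters.

```python
import numpy as np

def lif_run(input_current, v_thresh=1.0, tau=20.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, integrates the input, and emits a spike (then resets to zero)
    whenever it crosses the threshold."""
    v, spikes, voltages = 0.0, [], []
    for i in input_current:
        v += dt * (-v / tau + i)       # leaky integration step
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0                    # spike and reset
        else:
            spikes.append(0)
        voltages.append(v)
    return np.array(spikes), np.array(voltages)

# A constant drive makes the neuron spike at a regular rate: information
# is carried by spike timing/counts rather than real-valued activations.
spikes, volts = lif_run(np.full(100, 0.1))
```

It is this all-or-nothing, discontinuous spiking that makes gradients, and hence backpropagation, hard to define on neuromorphic hardware, which is what makes the fully on-chip result above notable.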


2021
Author(s): Amir Valizadeh

Abstract: In this paper, an alternative to the conventional backpropagation algorithm is presented that yields faster convergence of the loss of neural network models. The new algorithm, called optimized propagation, was used to train a neural network on a dataset, and the results were compared with those obtained by training on the same data with conventional backpropagation. The results indicate that optimized propagation is superior to conventional backpropagation in terms of reducing the model's loss faster.

