Training Algorithms

The process of assigning a weight to each connection is called training. A network can undergo supervised or unsupervised training. In this chapter, supervised and unsupervised learning are explained, and then training algorithms such as the multilayer perceptron (MLP) with back-propagation (BP) are introduced as supervised training algorithms. The unsupervised training algorithm, namely Kohonen's self-organizing map (SOM), is introduced as one of the most popular neural network models. SOMs convert high-dimensional, non-linear statistical relationships into simple geometric relationships in an n-dimensional array.
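As a concrete illustration of the unsupervised SOM training just described, here is a minimal sketch; the grid size, learning-rate schedule, and neighbourhood schedule are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3                      # 5x5 map of 3-dimensional weights
weights = rng.random((grid_h, grid_w, dim))
data = rng.random((200, dim))                      # toy inputs to be mapped
ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                    # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.5           # decaying neighbourhood radius
    for x in data:
        # best-matching unit (BMU): grid cell whose weight vector is closest to x
        d = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood over grid coordinates pulls nearby cells along
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, :, None] * (x - weights)

def bmu(x):
    """Grid coordinates of the best-matching unit for input x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, inputs that are close in data space activate nearby grid cells, which is the simple geometric relationship referred to above.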

2010 ◽  
Vol 20-23 ◽  
pp. 630-635
Author(s):  
Qiang Liu ◽  
Ning Wang ◽  
Yi Hui Liu ◽  
Shao Qing Wang ◽  
Jin Yong Cheng ◽  
...  

31P MRS (phosphorus-31 magnetic resonance spectroscopy) is a non-invasive protocol for analyzing energy metabolism and biochemical changes at the cellular level. Evaluation of 31P MRS is important in the diagnosis and treatment of many hepatic diseases. In this paper, we apply a back-propagation (BP) neural network and a self-organizing map (SOM) neural network to 31P MRS data to distinguish three diagnostic classes: cancerous, normal, and cirrhotic tissue. 66 samples of 31P MRS data are selected, covering all three tissue classes, and four experiments are carried out. Good performance is achieved with the limited samples. The experimental results show that neural network models based on 31P MRS data offer an alternative and promising technique for the diagnostic prediction of liver cancer in vivo.


2012 ◽  
Vol 6-7 ◽  
pp. 1055-1060 ◽  
Author(s):  
Yang Bing ◽  
Jian Kun Hao ◽  
Si Chang Zhang

In this study we apply back-propagation neural network models to predict the daily Shanghai Stock Exchange Composite Index. A gradient-search learning algorithm is built into the models. We evaluate the prediction models and conclude that the Shanghai Stock Exchange Composite Index is predictable in the short term. The empirical study shows that the neural network models successfully predict the daily highest, lowest, and closing values of the index, but cannot predict its return rate in the short term.
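The gradient-search training referred to above can be sketched with a toy one-hidden-layer network predicting the next value of a series from its last three values; the series, network sizes, and learning rate are illustrative assumptions, and no real index data are used:

```python
import numpy as np

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 12, 400))           # stand-in for a price series
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]                                     # next value to predict

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)  # hidden layer: 3 -> 8
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)  # output layer: 8 -> 1
lr = 0.05

for _ in range(500):                               # back-propagation / gradient descent
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2).ravel() - y                # prediction error
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)      # gradient through tanh
    W2 -= lr * h.T @ err[:, None] / len(y); b2 -= lr * err.mean()
    W1 -= lr * X.T @ dh / len(y);           b1 -= lr * dh.mean(axis=0)

mse = float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2))
```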


2005 ◽  
Vol 128 (3) ◽  
pp. 444-454 ◽  
Author(s):  
M. Venturini

In the paper, self-adapting models capable of reproducing time-dependent data with high computational speed are investigated. The considered models are recurrent feed-forward neural networks (RNNs) with one feedback loop in a recursive computational structure, trained using a back-propagation learning algorithm. The data used for both training and testing the RNNs were generated by means of a nonlinear physics-based model for compressor dynamic simulation, calibrated on a multistage axial-centrifugal small-size compressor. The first step of the analysis is the selection of the compressor maneuver to be used for optimizing RNN training. The subsequent step consists of evaluating the most appropriate RNN structure (the optimal number of neurons in the hidden layer and the number of outputs) and the proper RNN delay time. Then, the robustness of the model response to measurement uncertainty is ascertained by comparing the performance of RNNs trained on data either uncorrupted or corrupted with measurement errors when simulating data corrupted with measurement errors. Finally, the best RNN model is tested on field data taken from the axial-centrifugal compressor on which the physics-based model was calibrated, comparing physics-based model and RNN predictions against measured data. The comparison between RNN predictions and measured data shows that the agreement can be considered acceptable for inlet pressure, outlet pressure, and outlet temperature, while errors are significant for inlet mass flow rate.
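The one-feedback-loop structure can be sketched as follows; the toy first-order plant, the training scheme (the delayed output comes from the data during training, and the network's own output is fed back during simulation), and all sizes are illustrative assumptions, far simpler than the paper's compressor model:

```python
import numpy as np

rng = np.random.default_rng(2)
u = np.sin(np.linspace(0, 20, 500))                # toy exogenous input signal
y = np.zeros_like(u)
for t in range(1, len(u)):                         # toy first-order plant to identify
    y[t] = 0.5 * y[t - 1] + 0.5 * u[t - 1]

# feed-forward training pass: delayed output taken from the data
X = np.stack([u[:-1], y[:-1]], axis=1)             # inputs (u_t, y_t)
target = y[1:]                                     # desired y_{t+1}

W1 = rng.normal(0, 0.3, (2, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.3, (6, 1)); b2 = np.zeros(1)
for _ in range(800):                               # back-propagation learning
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2).ravel() - target
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    W2 -= 0.1 * h.T @ err[:, None] / len(err); b2 -= 0.1 * err.mean()
    W1 -= 0.1 * X.T @ dh / len(err);           b1 -= 0.1 * dh.mean(axis=0)

# recursive simulation: the single feedback loop feeds the model's own
# previous output back as an input
sim = np.zeros_like(y)
for t in range(1, len(u)):
    h = np.tanh(np.array([u[t - 1], sim[t - 1]]) @ W1 + b1)
    sim[t] = float(h @ W2 + b2)
closed_loop_mse = float(np.mean((sim - y) ** 2))
```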


2022 ◽  
pp. 913-932
Author(s):  
G. Vimala Kumari ◽  
G. Sasibhushana Rao ◽  
B. Prabhakara Rao

This article presents an image compression method using feed-forward back-propagation neural networks (NNs). Marked progress has been made in the area of image compression in the last decade. Image compression, which removes redundant information from image data, addresses the storage and transmission problems posed by huge amounts of data. NNs offer the potential for a novel solution to the image compression problem through their ability to generate an internal data representation. A comparison among various feed-forward back-propagation training algorithms is presented for different compression ratios and different block sizes. Two learning methods, the Levenberg-Marquardt (LM) algorithm and gradient descent (GD), have been used to train the network architecture, and the performance is evaluated in terms of MSE and PSNR using medical images. The decompressed results obtained with these two algorithms are compared in terms of PSNR and MSE, along with performance plots and regression plots, from which it can be observed that the LM algorithm gives more accurate results than the GD algorithm.
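The bottleneck idea behind NN image compression can be sketched as follows; the block size, code size, and plain gradient-descent training are illustrative assumptions (the article additionally uses Levenberg-Marquardt, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64))                         # toy stand-in for an image
# split the image into 256 blocks of 4x4 = 16 pixels each
blocks = img.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)

We = rng.normal(0, 0.2, (16, 4))                   # encoder: 16 -> 4 (4:1 compression)
Wd = rng.normal(0, 0.2, (4, 16))                   # decoder: 4 -> 16

def recon_mse():
    """Reconstruction error of the current encoder/decoder pair."""
    code = np.tanh(blocks @ We)
    return float(np.mean((code @ Wd - blocks) ** 2))

before = recon_mse()
for _ in range(500):                               # plain gradient-descent training
    code = np.tanh(blocks @ We)
    err = code @ Wd - blocks
    dcode = (err @ Wd.T) * (1 - code ** 2)         # gradient through tanh bottleneck
    Wd -= 0.1 * code.T @ err / len(blocks)
    We -= 0.1 * blocks.T @ dcode / len(blocks)
after = recon_mse()
```

The 4-unit bottleneck is the compressed internal representation: storing the codes instead of the pixel blocks is what yields the compression ratio.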


1998 ◽  
Vol 16 (2) ◽  
pp. 223-241 ◽  
Author(s):  
Petri Toiviainen ◽  
Mari Tervaniemi ◽  
Jukka Louhivuori ◽  
Marieke Saher ◽  
Minna Huotilainen ◽  
...  

The present study compared the degree of similarity of timbre representations as observed with brain recordings, behavioral studies, and computer simulations. To this end, the electrical brain activity of subjects was recorded while they were repetitively presented with five sounds differing in timbre. Subjects read simultaneously so that their attention was not focused on the sounds. The brain activity was quantified in terms of a change-specific mismatch negativity component. Thereafter, the subjects were asked to judge the similarity of all pairs along a five-step scale. A computer simulation was made by first training a Kohonen self-organizing map with a large set of instrumental sounds. The map was then tested with the experimental stimuli, and the distance between the most active artificial neurons was measured. The results of these methods were highly similar, suggesting that timbre representations reflected in behavioral measures correspond to neural activity, both as measured directly and as simulated in self-organizing neural network models.


Materials ◽  
2019 ◽  
Vol 12 (22) ◽  
pp. 3708 ◽  
Author(s):  
In-Ji Han ◽  
Tian-Feng Yuan ◽  
Jin-Young Lee ◽  
Young-Soo Yoon ◽  
Joong-Hoon Kim

A new hybrid intelligent model was developed for estimating the compressive strength (CS) of ground granulated blast furnace slag (GGBFS) concrete, and the synergistic benefit of the hybrid algorithm over a single algorithm was verified. Using 269 data points collected from previous experimental studies, artificial neural network (ANN) models with three different learning algorithms, namely back-propagation (BP), particle swarm optimization (PSO), and a new hybrid PSO-BP algorithm, were constructed, and the performance of the models was evaluated with regard to prediction accuracy, efficiency, and stability through a threefold procedure. It was found that the PSO-BP neural network model was superior to the simple ANNs trained by a single algorithm, and it is suitable for predicting the CS of GGBFS concrete.
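The PSO half of such a hybrid can be sketched as a swarm searching the weight space of a tiny network; the network shape, swarm parameters, and toy regression target are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((80, 2))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1]                  # toy regression target

def loss(w):
    """MSE of a tiny 2-4-1 network whose weights are packed into vector w."""
    W1 = w[:8].reshape(2, 4); W2 = w[8:12]; b = w[12]
    return float(np.mean((np.tanh(X @ W1) @ W2 + b - y) ** 2))

n, dim = 20, 13                                    # swarm size, weight count
pos = rng.normal(0, 0.5, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy(); gbest_f = float(pbest_f.min())
init_f = gbest_f

for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    # inertia + pull toward each particle's best + pull toward the swarm's best
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better] = pos[better]; pbest_f[better] = f[better]
    if f.min() < gbest_f:
        gbest, gbest_f = pos[f.argmin()].copy(), float(f.min())
# in the hybrid PSO-BP scheme, gbest would now seed back-propagation fine-tuning
```

Because PSO needs only loss evaluations, not gradients, it explores globally; handing its best solution to BP for local gradient refinement is the synergy the abstract describes.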


Nanophotonics ◽  
2017 ◽  
Vol 6 (3) ◽  
pp. 561-576 ◽  
Author(s):  
Guy Van der Sande ◽  
Daniel Brunner ◽  
Miguel C. Soriano

We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
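In software, the reservoir principle is commonly illustrated with an echo state network, where the reservoir is fixed and random and only a linear readout is trained; the sizes and scalings below are assumptions, and the optics-specific encoding is of course not captured:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100                                            # reservoir size
Win = rng.uniform(-0.5, 0.5, N)                    # fixed random input weights
W = rng.normal(0, 1.0, (N, N))                     # fixed random reservoir coupling
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius below 1 (echo state)

u = np.sin(np.linspace(0, 30, 600))                # toy input signal
target = np.roll(u, -1)                            # task: predict the next value
states = np.zeros((len(u), N))
x = np.zeros(N)
for t in range(len(u)):
    x = np.tanh(W @ x + Win * u[t])                # high-dimensional transient response
    states[t] = x

# only the linear readout is trained, here by ridge regression
A = states[100:-1]; b = target[100:-1]             # drop washout and wrapped last step
Wout = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)
readout_mse = float(np.mean((A @ Wout - b) ** 2))
```

The fixed, untrained reservoir is exactly what makes hardware implementations attractive: only the readout, which can live in post-processing, needs optimization.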

