Learning Dynamic Generator Model by Alternating Back-Propagation through Time

Author(s):  
Jianwen Xie ◽  
Ruiqi Gao ◽  
Zilong Zheng ◽  
Song-Chun Zhu ◽  
Ying Nian Wu

This paper studies the dynamic generator model for spatio-temporal processes such as dynamic textures and action sequences in video data. In this model, each time frame of the video sequence is generated by a generator model: a non-linear transformation of a latent state vector, parametrized by a top-down neural network. The sequence of latent state vectors follows a non-linear auto-regressive model, in which the state vector of the next frame is a non-linear transformation of the state vector of the current frame together with an independent noise vector that provides the randomness in the transition. The non-linear transformation of this transition model can be parametrized by a feedforward neural network. We show that this model can be learned by an alternating back-propagation through time algorithm that iteratively samples the noise vectors and updates the parameters of the transition model and the generator model. We show that our training method can learn realistic models for dynamic textures and action patterns.
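The two coupled models described above can be sketched in a few lines. The single-layer tanh stand-ins for the transition and generator networks, the dimensions, and the random weights below are illustrative assumptions, not the architecture used in the paper; the sketch only shows how the latent auto-regressive model drives frame generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: latent state s_t in R^d, noise z_t in R^k,
# generated frame x_t in R^D (e.g. a flattened image).
d, k, D, T = 8, 4, 64, 10

# Transition model s_{t+1} = tanh(W_s s_t + W_z z_t + b): a one-layer
# stand-in for the feedforward transition network in the paper.
W_s = rng.normal(scale=0.5, size=(d, d))
W_z = rng.normal(scale=0.5, size=(d, k))
b = np.zeros(d)

# Generator model x_t = tanh(G s_t + c): a one-layer stand-in for the
# top-down generator network that emits each frame.
G = rng.normal(scale=0.5, size=(D, d))
c = np.zeros(D)

def generate_sequence(T):
    """Roll the latent auto-regressive model forward and emit frames."""
    s = rng.normal(size=d)       # initial latent state
    frames = []
    for _ in range(T):
        z = rng.normal(size=k)   # independent noise driving the transition
        s = np.tanh(W_s @ s + W_z @ z + b)
        frames.append(np.tanh(G @ s + c))
    return np.stack(frames)

video = generate_sequence(T)
print(video.shape)  # (10, 64)
```

Learning would then alternate between sampling the noise vectors given the observed frames and updating the two parameter sets by gradient steps, which is the "alternating back-propagation through time" of the title.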

2017 ◽  
Vol 29 (2) ◽  
pp. 301-337 ◽  
Author(s):  
K. GOULIANAS ◽  
A. MARGARIS ◽  
I. REFANIDIS ◽  
K. DIAMANTARAS

This paper proposes a neural network architecture for solving systems of non-linear equations. A back-propagation algorithm is applied to solve the problem, using an adaptive learning-rate procedure based on minimization of the mean squared error function defined by the system, together with a network activation function that can be linear or non-linear. The results obtained are compared with some of the standard global optimization techniques used for solving non-linear equation systems. The method was tested on several well-known and difficult applications (the Gauss–Legendre two-point formula for numerical integration, a chemical equilibrium application, a kinematics application, a neuropsychology application, a combustion application and an interval arithmetic benchmark) in order to evaluate the performance of the new approach. Empirical results reveal that the proposed method converges quickly and can deal with high-dimensional equation systems.
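The core idea, treating the residuals of the non-linear system as errors and descending the mean squared error, can be sketched on a toy system. The system, starting point, and fixed learning rate below are illustrative choices (the paper uses an adaptive learning-rate procedure and a network formulation rather than this bare gradient descent).

```python
# Toy system: x + y - 3 = 0 and x*y - 2 = 0, with roots (1, 2) and (2, 1).
# Minimize E = (f1^2 + f2^2) / 2 by gradient descent on the residuals.
x, y = 0.5, 2.5   # illustrative starting point
lr = 0.05         # fixed rate; the paper adapts this during training

for _ in range(20000):
    f1 = x + y - 3.0      # residual of equation 1
    f2 = x * y - 2.0      # residual of equation 2
    # Gradient of E with respect to x and y (chain rule on the residuals).
    gx = f1 + f2 * y
    gy = f1 + f2 * x
    x -= lr * gx
    y -= lr * gy

print(round(x, 3), round(y, 3))  # settles on one of the two roots
```

At a root both residuals vanish, so E = 0 is a global minimum; the descent direction is exactly what back-propagation computes when the system is expressed as a network output layer.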


Author(s):  
Sergejs Jakovlevs

Perceptron Architecture Ensuring Pattern Description Compactness

This paper examines the conditions a neural network has to meet in order to ensure the formation of a feature space satisfying the compactness hypothesis. The formulation of the compactness hypothesis is refined as it applies to neural networks. It is shown that, even though the first layer of connections is formed randomly, the presence of more than 30 elements in the middle network layer guarantees with essentially 100% probability that the G-matrix of the perceptron will not be singular. This means that, under the additional mathematical conditions derived by Rosenblatt, the perceptron is guaranteed to form a feature space that can then be linearly separated. Cover's theorem only states that the probability of separation increases when the initial space is non-linearly transformed into a higher-dimensional space; it does not say when this probability reaches 100%. In Rosenblatt's perceptron, the non-linear transformation is carried out in the first, randomly generated layer. The paper provides practical conditions under which the probability is very close to 100%. By comparison, this kind of analysis has not been performed for Rumelhart's multilayer perceptron.
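The intuition behind Cover's result can be demonstrated in a few lines: XOR is not linearly separable in its input space, but after a fixed random non-linear first layer with comfortably more units than the ~30-element threshold discussed above, a purely linear readout separates it. The layer width, tanh non-linearity, and least-squares readout are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic pattern that is NOT linearly separable in input space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# Random first layer of connections, as in Rosenblatt's perceptron:
# project the inputs through a fixed, untrained non-linear hidden layer.
n_hidden = 40                        # comfortably above the ~30-unit mark
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)               # hidden-layer feature space

# In the 40-dimensional feature space the four points are (almost surely)
# in general position, so a linear readout separates them exactly.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = (H @ w > 0.5).astype(float)
print(pred)  # [0. 1. 1. 0.]
```

The "almost surely" is exactly the gap the paper addresses: for random Gaussian weights a degenerate projection has measure zero, and the paper's conditions pin down when the probability is effectively 100%.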


2007 ◽  
Vol 24-25 ◽  
pp. 361-370
Author(s):  
Bin Tao ◽  
Xu Yue Wang ◽  
H.Z. Zhen ◽  
Wen Ji Xu

Electrochemical abrasive belt grinding (ECABG) technology, which has advantages over conventional stone super-finishing, has been applied to bearing-raceway super-finishing. However, the finishing effect of ECABG is governed by many factors whose relationships are so complicated that the process exhibits non-linear behavior. It is therefore difficult to predict the finishing results and to select the processing parameters in ECABG. In this paper, a back-propagation (BP) neural network is proposed to solve this problem. The non-linear relationship among the machining parameters was established from experimental data using a one-hidden-layer BP neural network. The calculated results of the BP neural network were compared with experimental results under the corresponding conditions, and the comparison indicates that it is feasible to apply a BP neural network to determining the processing parameters and forecasting the surface-quality effects in ECABG.
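A one-hidden-layer BP regression of the kind described can be sketched as follows. The experimental ECABG data are not available, so the inputs (imagine two normalized machining parameters) and the non-linear target relationship below are entirely synthetic stand-ins, used only to show the fit improving under back-propagation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in data: two normalized machining parameters mapped
# to a surface-quality score through an assumed non-linear relationship.
X = rng.uniform(-1, 1, size=(200, 2))
y = 0.5 * np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2

# One-hidden-layer network trained by plain back-propagation on MSE.
n_h, lr = 10, 0.05
W1 = rng.normal(scale=0.5, size=(2, n_h)); b1 = np.zeros(n_h)
W2 = rng.normal(scale=0.5, size=n_h);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
mse0 = np.mean((out0 - y) ** 2)

for _ in range(3000):
    h, out = forward(X)
    err = out - y                          # output-layer error
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # back-propagate through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out1 = forward(X)
mse1 = np.mean((out1 - y) ** 2)
print(mse0, "->", mse1)  # training error drops as the mapping is learned
```

Once trained on real process data, the same forward pass serves as the predictor of surface quality for candidate parameter settings.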


Author(s):  
Wanzhong Zhao ◽  
Xiangchuang Kong ◽  
Chunyan Wang

The precise estimation of the battery's state of charge is one of the most significant and difficult tasks for battery management systems. In order to improve the accuracy of state-of-charge estimation, the forgetting-factor recursive least-squares method is used to identify the model parameters online, based on a first-order RC battery model, and a back-propagation-neural-network-assisted adaptive Kalman filter algorithm is proposed. A back-propagation neural network is established using the MATLAB neural network toolbox and trained offline on battery test data; the trained network is then used online to optimize the results of the adaptive Kalman filter algorithm for state-of-charge estimation. The proposed methodology is demonstrated using experimental lithium-ion battery module data from dynamic stress tests. The results indicate that, in comparison with the common adaptive Kalman filter algorithm, the back-propagation adaptive Kalman filter algorithm significantly improves the precision of state-of-charge estimation.
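The online identification step, forgetting-factor recursive least squares (FFRLS), follows a standard recursion that can be shown on a generic linear regression. For a verifiable sketch we identify y = phi^T theta with made-up parameters; in the paper, phi would hold past voltage and current samples of the first-order RC model.

```python
import numpy as np

rng = np.random.default_rng(3)

theta_true = np.array([0.95, 0.20])   # hypothetical model parameters
lam = 0.98                            # forgetting factor (< 1 discounts old data)
theta = np.zeros(2)                   # running parameter estimate
P = np.eye(2) * 1000.0                # covariance, large initial uncertainty

for _ in range(500):
    phi = rng.normal(size=2)                      # regressor vector
    yk = phi @ theta_true + 0.01 * rng.normal()   # noisy measured output
    K = P @ phi / (lam + phi @ P @ phi)           # gain vector
    theta = theta + K * (yk - phi @ theta)        # correct by prediction error
    P = (P - np.outer(K, phi) @ P) / lam          # covariance update

print(theta)  # converges near [0.95, 0.20]
```

The forgetting factor lets the estimate track parameters that drift as the battery ages or heats, which is why FFRLS is preferred over plain least squares for online identification.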


2019 ◽  
Vol 15 (12) ◽  
pp. 155014771989452
Author(s):  
Shuo Li ◽  
Song Li ◽  
Haifeng Zhao ◽  
Yuan An

In this article, a method for estimating the state of charge of a lithium battery based on a back-propagation neural network is proposed and implemented for an uninterruptible power system. First, a back-propagation neural network model is established with voltage, temperature and charge/discharge current as input parameters and the state of charge of the lithium battery as the output parameter. Then, the back-propagation neural network is trained by the Levenberg–Marquardt algorithm and the gradient descent method, and the state of charge of the batteries in the uninterruptible power system is estimated by the trained network. Finally, we build a state-of-charge estimation test platform connected to a host computer by Ethernet. The performance of state-of-charge estimation based on the back-propagation neural network is tested by connecting to an uninterruptible power system and compared with the ampere-hour counting method and the actual test data. The results show that state-of-charge estimation based on a back-propagation neural network achieves high accuracy and reduces the error accumulation caused by long-term operation.
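The ampere-hour counting baseline the network is compared against is simply coulomb counting: integrate current over time and subtract it from the initial charge. The function name and sign convention below are illustrative.

```python
# Ampere-hour (coulomb) counting baseline for state-of-charge estimation.
def ah_counting(soc0, currents_a, dt_s, capacity_ah):
    """Return the SoC trace under coulomb counting.
    Positive current = discharge. Any bias in soc0 or in the current
    sensor accumulates over time, which is exactly the long-term error
    the abstract says the BP-network estimator reduces."""
    soc = soc0
    trace = [soc]
    for i in currents_a:
        soc -= i * dt_s / (capacity_ah * 3600.0)
        trace.append(soc)
    return trace

# A 2 Ah cell discharged at a constant 1 A for one hour (1 s samples)
# drops from SoC 1.0 to 0.5.
trace = ah_counting(1.0, [1.0] * 3600, 1.0, 2.0)
print(round(trace[-1], 3))  # 0.5
```

Because the method is a pure integrator with no feedback from measured voltage, it cannot self-correct; the BP network, by learning the voltage/temperature/current-to-SoC mapping, does not accumulate error in the same way.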


2013 ◽  
Vol 2 (1) ◽  
pp. 89-98
Author(s):  
R.F. Wichman ◽  
J. Alexander

Many, if not most, control processes exhibit non-linear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared with pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that improves the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors yielded a more accurate model than scalar inputs, and retraining with a small number of the outlier x,y pairs improved generalization.


2005 ◽  
Vol 51 (173) ◽  
pp. 313-323 ◽  
Author(s):  
Daniel Steiner ◽  
A. Walter ◽  
H.J. Zumbühl

Glacier mass changes are considered key variables related to climate variability. We have reconstructed a proxy for annual mass-balance changes of Grosse Aletschgletscher, Swiss Alps, back to AD 1500 using a non-linear back-propagation neural network (BPN). The model skill of the BPN is better than that of reconstructions using conventional stepwise multiple linear regression. The BPN, driven by monthly instrumental series of local temperature and precipitation, provides a proxy for 20th-century mass balance. The long-term mass-balance reconstruction back to 1500 is based on a multi-proxy approach with seasonally resolved temperature and precipitation reconstructions (means over a specific area) as input variables. The relation between the driving factors used (temperature, precipitation) and the reconstructed mass-balance series is discussed. Mass changes of Grosse Aletschgletscher are shown to be influenced mainly by summer (June–August) temperatures, but winter (December–February) precipitation also seems to contribute. Furthermore, we found a significant non-linear component in the climate–mass-balance relation of Grosse Aletschgletscher.


2019 ◽  
Vol 8 (4) ◽  
pp. 216
Author(s):  
Renas Rajab Asaad ◽  
Rasan I. Ali

Back-propagation neural networks are known for computing problems that cannot easily be computed otherwise in artificial neural networks, such as analyzing or training on huge datasets. The main idea of this paper is to implement the XOR logic gate with an artificial neural network, using back-propagation of errors and the sigmoid activation function; the network realizes a non-linear threshold gate. The non-linearity is used to classify binary inputs (x1, x2) by passing them through a hidden layer and computing coefficient errors and gradient errors (Cerrors, Gerrors). After computing the errors as ei = Output_desired − Output_actual, the weights and thetas are updated according to the errors: ΔWji = (α)(Xj)(gi) and Δϴj = (α)(−1)(gi). The sigmoid activation function is sig(x) = 1/(1 + e^(−x)) and its derivative is dsig(x) = sig(x)(1 − sig(x)); both sig(x) and dsig(x) take values between 0 and 1.
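A runnable sketch of the XOR network described above, using the sigmoid and its derivative from the abstract. The hidden-layer width, learning rate, and random initialization are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(4)

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))   # sig(x) = 1 / (1 + e^-x)
# dsig(x) = sig(x) * (1 - sig(x)); applied below to stored activations.

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])   # XOR truth table

n_h, lr = 8, 2.0                          # illustrative width and rate
W1 = rng.normal(size=(2, n_h)); b1 = np.zeros(n_h)
W2 = rng.normal(size=(n_h, 1)); b2 = np.zeros(1)

def forward(X):
    h = sig(X @ W1 + b1)
    return h, sig(h @ W2 + b2)

_, out = forward(X)
mse0 = np.mean((out - y) ** 2)

for _ in range(20000):
    h, out = forward(X)
    # Output-layer error term: (actual - desired) * dsig at the output.
    g_out = (out - y) * out * (1 - out)
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    # Hidden-layer error term, back-propagated through W2.
    g_hid = (g_out @ W2.T) * h * (1 - h)
    gW1 = X.T @ g_hid
    gb1 = g_hid.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(X)
mse1 = np.mean((out - y) ** 2)
print(np.round(out.ravel(), 2))   # typically close to [0, 1, 1, 0]
```

A single linear threshold unit cannot represent XOR; the hidden sigmoid layer supplies the non-linear features that make the mapping learnable, which is the point of the paper.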

