Function Space BFGS Quasi-Newton Learning Algorithm for Time-Varying Recurrent Neural Networks
This paper describes a new learning algorithm for time-varying recurrent neural networks, whose weights are functions of time rather than scalars. First, an objective functional of the weight functions is formulated that quantifies the discrepancy between the desired outputs and the network's outputs. Dynamical optimization is then used to derive the necessary conditions for an extremum of this functional; these conditions take the form of a two-point boundary-value problem. The boundary-value problem is solved by a Hilbert function space BFGS quasi-Newton algorithm, obtained by using the dyadic operator to extend the Euclidean-space BFGS method to an infinite-dimensional real Hilbert space. Finally, the capability of the network and the learning algorithm is demonstrated by identifying three simulated nonlinear systems and a resistance spot welding process.
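To ground the extension described above, the following is a minimal sketch of the standard Euclidean-space BFGS method that serves as the starting point; the paper's contribution replaces the finite-dimensional outer products (dyads) with their Hilbert-space dyadic-operator analogues, which this sketch does not attempt. The objective, line-search constants, and test problem here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, iters=100, tol=1e-8):
    """Euclidean-space BFGS with a backtracking Armijo line search.
    The dyads np.outer(s, y) etc. are the finite-dimensional
    counterparts of the dyadic operator used in the paper's
    Hilbert-space extension."""
    n = x0.size
    H = np.eye(n)                  # inverse-Hessian approximation
    x = x0.astype(float)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                 # quasi-Newton search direction
        # backtracking line search (Armijo constant is an assumption)
        t, fx = 1.0, f(x)
        while f(x + t * p) > fx + 1e-4 * t * (g @ p):
            t *= 0.5
        x_new = x + t * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:             # curvature condition holds: update H
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# illustrative quadratic test problem: min 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bfgs_minimize(lambda x: 0.5 * x @ A @ x - b @ x,
                       lambda x: A @ x - b,
                       np.zeros(2))
```

In the function-space setting, the iterates are weight functions of time, the inner products become integrals over the time horizon, and the gradient is supplied by the costate of the two-point boundary-value problem rather than a closed-form expression.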