Neural Dynamics and Newton–Raphson Iteration for Nonlinear Optimization

Author(s): Dongsheng Guo, Yunong Zhang

In this paper, a special type of neural dynamics (ND) is generalized and investigated for time-varying and static scalar-valued nonlinear optimization. For comparison, the gradient-based neural dynamics (also termed gradient dynamics (GD)) is studied for nonlinear optimization. Moreover, with a view to digital hardware realization, discrete-time ND (DTND) models are developed. When the linear activation function is used and the step size is set to 1, the DTND model reduces to the Newton–Raphson iteration (NRI) for static nonlinear optimization; that is, the well-known NRI method can be viewed as a special case of the DTND model. In addition, a geometric representation of the ND models is given for time-varying nonlinear optimization. Numerical results demonstrate the efficacy and advantages of the proposed ND models for both time-varying and static nonlinear optimization.
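To make the NRI connection concrete, here is a minimal Python sketch of the DTND update x_{k+1} = x_k - h f'(x_k)/f''(x_k) with linear activation; with step size h = 1 this is exactly Newton-Raphson iteration applied to f'(x) = 0. The test function and all names below are illustrative assumptions, not taken from the paper.

```python
import math

def dtnd_step(x, fprime, fsecond, h=1.0):
    """One DTND update for static scalar minimization of f(x).
    With linear activation and step size h = 1, this is exactly
    Newton-Raphson iteration applied to f'(x) = 0."""
    return x - h * fprime(x) / fsecond(x)

# Hypothetical convex test problem: f(x) = (x - 2)^2 + exp(-x), so
# f'(x) = 2*(x - 2) - exp(-x) and f''(x) = 2 + exp(-x) > 0.
fprime = lambda x: 2.0 * (x - 2.0) - math.exp(-x)
fsecond = lambda x: 2.0 + math.exp(-x)

x = 0.0  # initial guess
for k in range(10):
    x = dtnd_step(x, fprime, fsecond, h=1.0)  # h = 1: plain NRI
print(x, fprime(x))  # x converges to the stationary point; f'(x) ~ 0
```

With h between 0 and 1 the same routine realizes the more general DTND update; the NRI special case falls out of setting h = 1.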

2021, Vol. 550, pp. 239-251
Author(s): Guancheng Wang, Haoen Huang, Limei Shi, Chuhong Wang, Dongyang Fu, ...

2019, Vol. 2019, pp. 1-14
Author(s): Min Sun, Maoying Tian, Yiju Wang

As a special kind of recurrent neural network, the Zhang neural network (ZNN) has been successfully applied to solving various time-variant problems. In this paper, we present three Zhang et al. discretization (ZeaD) formulas, namely a special two-step ZeaD formula, a general two-step ZeaD formula, and a general five-step ZeaD formula, and prove that the special and general two-step ZeaD formulas are convergent, whereas the general five-step ZeaD formula is not zero-stable and is therefore divergent. Then, to solve time-varying nonlinear optimization (TVNO) in real time, based on the Taylor series expansion and the two convergent two-step ZeaD formulas above, we discretize the continuous-time ZNN (CTZNN) model of TVNO and thereby obtain a special two-step discrete-time ZNN (DTZNN) model and a general two-step DTZNN model. Theoretical analyses indicate that the sequence generated by the first DTZNN model is divergent, while that generated by the second DTZNN model is convergent. Furthermore, a tight upper bound on the step size of the second DTZNN model and the optimal step size are also discussed. Finally, numerical results and comparisons are provided and analyzed to substantiate the efficacy of the proposed DTZNN models.
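As a rough illustration of how a DTZNN model operates (using the simplest Euler-type one-step discretization, not the paper's two-step ZeaD-based models), the sketch below discretizes the CTZNN design f_xx * dx/dt = -lambda * f_x - f_xt on a hypothetical time-varying cost f(x, t) = (x - sin t)^2; the sampling gap tau, step size h = tau * lambda, and the test problem are all assumptions for illustration.

```python
import math

def dtznn_step(x, t, f_x, f_xx, f_xt, tau, h):
    """One Euler-type DTZNN update for time-varying minimization of
    f(x, t): x_{k+1} = x_k - (tau * f_xt + h * f_x) / f_xx,
    where h = tau * lambda is the step size."""
    return x - (tau * f_xt(x, t) + h * f_x(x, t)) / f_xx(x, t)

# Hypothetical TVNO example: f(x, t) = (x - sin t)^2, whose
# time-varying minimizer is x*(t) = sin t.
f_x  = lambda x, t: 2.0 * (x - math.sin(t))   # df/dx
f_xx = lambda x, t: 2.0                        # d2f/dx2
f_xt = lambda x, t: -2.0 * math.cos(t)         # d2f/(dx dt)

tau, h = 0.01, 0.1   # sampling gap and step size (assumed values)
x, t = 0.0, 0.0
for k in range(1000):
    x = dtznn_step(x, t, f_x, f_xx, f_xt, tau, h)
    t += tau
print(x, math.sin(t))  # x tracks the moving minimizer sin(t) closely
```

The two-step ZeaD formulas studied in the paper replace this one-step forward difference with higher-accuracy multi-step approximations of dx/dt, which is where the convergence and zero-stability questions analyzed above arise.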


2018, Vol. 140 (10)
Author(s): Souransu Nandi, Tarunraj Singh

An adjoint sensitivity-based approach to determine the gradient and Hessian of cost functions for system identification of dynamical systems is presented. The motivation is to develop an approach that is computationally more efficient than the direct differentiation (DD) technique and that avoids the step-size selection challenges of finite difference (FD) approaches. An optimization framework is used to determine the parameters of a dynamical system that minimize the sum of a scalar cost function evaluated at the discrete measurement instants. The discrete-time measurements introduce discontinuities in the Lagrange multipliers. Two approaches, labeled the Adjoint and the Hybrid, are developed for calculating the gradient and Hessian for use in gradient-based optimization algorithms. The proposed approach is illustrated on the Lorenz 63 model, where part of the initial conditions and model parameters are estimated using synthetic data. Examples of identifying model parameters of light curves of Type Ia supernovae and of a two-tank dynamic model using publicly available data are also included.
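A minimal sketch of the adjoint idea for a discrete-time scalar system (not the paper's Adjoint/Hybrid formulation for continuous dynamics): one backward sweep of the adjoint variable yields dJ/dtheta for a least-squares cost over discrete measurement instants, with each measurement mismatch entering as a jump in the adjoint, mirroring the multiplier discontinuities noted above. The logistic model, names, and data here are hypothetical.

```python
import numpy as np

def simulate(theta, x0, dt, N):
    """Forward Euler for the logistic model x_{k+1} = x_k + dt*theta*x_k*(1 - x_k),
    a hypothetical stand-in for the dynamical system being identified."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * theta * x[k] * (1.0 - x[k])
    return x

def adjoint_gradient(theta, x0, dt, y):
    """dJ/dtheta for J = 0.5 * sum_k (x_k - y_{k-1})^2 over N measurement
    instants, via a single backward (adjoint) sweep rather than one
    forward sensitivity run per parameter."""
    N = len(y)
    x = simulate(theta, x0, dt, N)
    A = lambda xk: 1.0 + dt * theta * (1.0 - 2.0 * xk)   # df/dx
    B = lambda xk: dt * xk * (1.0 - xk)                  # df/dtheta
    lam = x[N] - y[N - 1]         # terminal adjoint: lambda_N = g'(x_N)
    grad = B(x[N - 1]) * lam      # (df/dtheta at x_{N-1}) * lambda_N
    for k in range(N - 1, 0, -1):
        # measurement mismatch at step k "jumps" the adjoint backward in time
        lam = (x[k] - y[k - 1]) + A(x[k]) * lam
        grad += B(x[k - 1]) * lam
    return grad

# Synthetic data from theta_true; the gradient vanishes at the true parameter
theta_true, x0, dt, N = 1.5, 0.1, 0.05, 200
y = simulate(theta_true, x0, dt, N)[1:]
print(adjoint_gradient(theta_true, x0, dt, y))   # ~ 0 at theta_true
print(adjoint_gradient(1.0, x0, dt, y))          # nonzero away from it
```

The cost of the backward sweep is independent of the number of parameters, which is the efficiency advantage over DD-style forward sensitivities that motivates the paper.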

