Gradient descent observer for on-line battery parameter and state coestimation

Author(s):  
Eiko Kruger ◽  
Franck Al Shakarchi ◽  
Quoc Tuan Tran

2009 ◽ Vol 77 (2-3) ◽ pp. 195-224
Author(s):  
Chun-Nan Hsu ◽  
Han-Shen Huang ◽  
Yu-Ming Chang ◽  
Yuh-Jye Lee

1998 ◽ Vol 81 (24) ◽ pp. 5461-5464
Author(s):  
Magnus Rattray ◽  
David Saad ◽  
Shun-ichi Amari

2000 ◽ Vol 12 (4) ◽ pp. 881-901
Author(s):  
Tom Heskes

Several studies have shown that natural gradient descent for on-line learning is much more efficient than standard gradient descent. In this article, we derive natural gradients in a slightly different manner and discuss implications for batch-mode learning and pruning, linking them to existing algorithms such as Levenberg-Marquardt optimization and optimal brain surgeon. The Fisher matrix plays an important role in all these algorithms. The second half of the article discusses a layered approximation of the Fisher matrix specific to multilayered perceptrons. Using this approximation rather than the exact Fisher matrix, we arrive at much faster “natural” learning algorithms and more robust pruning procedures.
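The role the Fisher matrix plays here can be illustrated with a short sketch. This is not Heskes's layered approximation, just a plain natural-gradient step built from an empirical Fisher matrix; the function name, damping constant, and step size are illustrative assumptions.

```python
import numpy as np

def natural_gradient_step(per_sample_grads, lr=0.1, damping=1e-4):
    """One natural-gradient step from per-sample log-likelihood gradients.

    per_sample_grads: (N, d) array, one gradient row per training example.
    The empirical Fisher matrix is the second moment of these gradients;
    a small damping term (Levenberg-Marquardt style, per the link drawn
    in the abstract) keeps the inverse well-conditioned.
    """
    G = np.asarray(per_sample_grads, dtype=float)
    g_mean = G.mean(axis=0)                 # ordinary gradient
    F = G.T @ G / G.shape[0]                # empirical Fisher matrix
    F += damping * np.eye(F.shape[0])       # damping toward gradient descent
    return lr * np.linalg.solve(F, g_mean)  # F^{-1} g, the "natural" direction
```

As `damping` grows, `F` approaches a scaled identity and the step degenerates to ordinary gradient descent, which is the Levenberg-Marquardt interpolation the abstract alludes to.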


1995 ◽ Vol 28 (3) ◽ pp. 643-656
Author(s):  
M Biehl ◽  
H Schwarze
Author(s):  
Mark A. McEver ◽  
Daniel G. Cole ◽  
Robert L. Clark

An algorithm is presented which uses adaptive Q-parameterized compensators for control of sound. All stabilizing feedback compensators can be described in terms of plant coprime factors and a free parameter, Q, which can be any stable function. By generating a feedback signal containing only disturbance information, the parameterized compensator allows Q to be designed in an open-loop fashion. The problem of designing Q to yield desired noise reduction is formulated as an on-line gradient descent-based adaptation process. Coefficient update equations are derived for different forms of Q, including digital finite impulse response (FIR) and lattice infinite impulse response (IIR) filters. Simulations predict good performance for both tonal and broadband disturbances, and a duct feedback noise control experiment results in a 37 dB tonal reduction.
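The on-line gradient-descent coefficient update described above can be sketched for the FIR case. The snippet below is a generic LMS-style adaptation of FIR weights, not the paper's exact Q-parameterized update equations; all names, tap counts, and step sizes are illustrative assumptions.

```python
import numpy as np

def lms_fir(reference, desired, n_taps=16, mu=0.01):
    """Adapt FIR filter weights by stochastic gradient descent (LMS).

    reference: input to the adaptive filter (disturbance-correlated signal).
    desired:   signal to be cancelled; the error desired - w @ buf is the
               quantity the gradient steps drive toward zero.
    Returns the final weights and the error history.
    """
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)               # FIR delay line
    errors = np.empty(len(reference))
    for k in range(len(reference)):
        buf = np.roll(buf, 1)
        buf[0] = reference[k]
        y = w @ buf                      # filter output
        e = desired[k] - y               # residual (uncancelled) signal
        w += mu * e * buf                # gradient step on the squared error
        errors[k] = e
    return w, errors
```

For a tonal disturbance, as in the duct experiment, a short FIR filter can match the required amplitude and phase exactly, so the residual converges toward zero.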


2002 ◽ Vol 14 (7) ◽ pp. 1723-1738
Author(s):  
Nicol N. Schraudolph

We propose a generic method for iteratively approximating various second-order gradient steps (Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient) in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for on-line learning, matrix momentum and stochastic meta-descent (SMD), implement this approach. Since both were originally derived by very different routes, this offers fresh insight into their operation, resulting in further improvements to SMD.
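The O(n) curvature matrix-vector product idea can be illustrated numerically. Schraudolph's method computes such products exactly via algorithmic differentiation; the finite-difference Hessian-vector product below is a simpler stand-in, shown only to make the linear per-iteration cost concrete, and all names are assumptions.

```python
import numpy as np

def hessian_vector_product(grad_fn, w, v, eps=1e-6):
    """Approximate H v without forming H: finite difference of the gradient.

    grad_fn: callable returning the loss gradient at a parameter vector.
    The full n x n curvature matrix is never materialized; two gradient
    evaluations suffice, so the cost scales linearly with len(w).
    """
    v = np.asarray(v, dtype=float)
    r = eps / np.linalg.norm(v)          # scale the probe step to v
    return (grad_fn(w + r * v) - grad_fn(w - r * v)) / (2.0 * r)
```

For a quadratic loss 0.5 w^T A w the gradient is A w, so the central difference recovers A v exactly; for general losses it is accurate to second order in the step size.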

