Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning

2009 ◽  
Vol 77 (2-3) ◽  
pp. 195-224 ◽  
Author(s):  
Chun-Nan Hsu ◽  
Han-Shen Huang ◽  
Yu-Ming Chang ◽  
Yuh-Jye Lee


Author(s):  
Andrew Jacobsen ◽  
Matthew Schlegel ◽  
Cameron Linke ◽  
Thomas Degris ◽  
Adam White ◽  
...  

This paper investigates different vector step-size adaptation approaches for non-stationary online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update—a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems, but have not been as extensively explored as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even those with accelerations, such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems, and is competitive with the state-of-the-art method on a large-scale, time-series prediction problem on real data from a mobile robot.
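The AdaGain derivation itself is not reproduced in this abstract, but the meta-descent idea it builds on can be illustrated with Sutton's earlier IDBD rule, which performs gradient descent on per-weight log step-sizes to minimize prediction error. The sketch below is a minimal, illustrative Python version for a linear predictor; the names (`idbd_update`, `meta_rate`) and the drifting-target demo are assumptions for illustration, not details from the paper.

```python
import numpy as np

def idbd_update(w, h, beta, x, y, meta_rate=0.01):
    """One IDBD-style meta-gradient step for a linear predictor.

    w: weights, h: memory trace, beta: log step-sizes (length-n arrays).
    Illustrative sketch of per-weight step-size adaptation, not AdaGain.
    """
    err = y - w @ x                        # prediction error
    beta = beta + meta_rate * err * x * h  # meta-gradient on log step-sizes
    alpha = np.exp(beta)                   # per-weight step-sizes
    w = w + alpha * err * x                # scaled SGD update
    # decay the trace where the step "overwrote" the weight, then accumulate
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * err * x
    return w, h, beta

# usage: learn a fixed linear target online
rng = np.random.default_rng(0)
n = 5
w, h = np.zeros(n), np.zeros(n)
beta = np.full(n, np.log(0.05))            # initial step-size 0.05 per weight
w_true = rng.normal(size=n)
for t in range(2000):
    x = rng.normal(size=n)
    w, h, beta = idbd_update(w, h, beta, x, w_true @ x)
```

The per-weight step-sizes let inputs with different scales or relevance receive different effective learning rates, which is the same motivation the paper gives for vector step-size adaptation.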


2002 ◽  
Vol 14 (7) ◽  
pp. 1723-1738 ◽  
Author(s):  
Nicol N. Schraudolph

We propose a generic method for iteratively approximating various second-order gradient steps (Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient) in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for on-line learning, matrix momentum and stochastic meta-descent (SMD), implement this approach. Since both were originally derived by very different routes, this offers fresh insight into their operation, resulting in further improvements to SMD.
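Schraudolph's curvature matrix-vector products are computed exactly via algorithmic differentiation; as a rough illustration of why such products cost only O(n), a Hessian-vector product can also be approximated with just two gradient evaluations by central differencing, never forming the n×n matrix. This is a generic sketch under that finite-difference assumption, not the paper's method.

```python
import numpy as np

def hessian_vector_product(grad_fn, w, v, eps=1e-5):
    """Approximate H v by central-differencing the gradient:
    Hv ~ (grad(w + eps*v) - grad(w - eps*v)) / (2*eps).
    Two gradient calls, O(n) memory: the Hessian is never materialized.
    """
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2.0 * eps)

# check on a quadratic f(w) = 0.5 w^T A w, where grad f = A w and H = A
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
grad = lambda w: A @ w
v = np.array([1.0, -1.0])
hv = hessian_vector_product(grad, np.zeros(2), v)   # should equal A @ v
```

For a quadratic objective the finite difference is exact up to rounding, which makes it a convenient sanity check before substituting a real model's gradient.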


2016 ◽  
Vol 291 ◽  
pp. 39-51 ◽  
Author(s):  
Higinio Ramos ◽  
Gurjinder Singh ◽  
V. Kanwar ◽  
Saurabh Bhatia

2014 ◽  
Vol 12 (7) ◽  
pp. 3689-3696 ◽  
Author(s):  
Khosrow Amirizadeh ◽  
Rajeswari Mandava

An accelerated multi-armed bandit (MAB) model for on-line sequential selection problems in reinforcement learning is presented. This iterative model uses an automatic step-size calculation that improves the performance of the MAB algorithm under conditions such as variable reward variance and larger sets of usable actions. As a result of these modifications, the number of optimal selections is maximized and the stability of the algorithm under these conditions is improved. This adaptive model with automatic step-size computation is attractive for on-line applications in which the variance of observations varies with time and the step size must be re-tuned, which is not a simple task. The proposed model is governed by the upper confidence bound (UCB) approach in iterative form with automatic step-size computation. Called adaptive UCB (AUCB), it may be used in industrial robotics, autonomous control, and intelligent selection or prediction tasks in economic-engineering applications where information is scarce.
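The abstract does not specify the AUCB update itself, so the sketch below only illustrates the general idea it describes: UCB-style arm selection combined with a step size that is floored at a constant `recency` rate, so value estimates keep tracking when reward means drift (a plain 1/n step would freeze them). All names and constants here are illustrative assumptions, not the authors' algorithm.

```python
import math
import random

def ucb_select(counts, values, t, c=2.0):
    """UCB-style selection: try each arm once, then pick the arm
    maximizing value + exploration bonus."""
    for a, n in enumerate(counts):
        if n == 0:
            return a
    return max(range(len(counts)),
               key=lambda a: values[a] + math.sqrt(c * math.log(t) / counts[a]))

def adaptive_step(n, recency=0.1):
    """Step size floored at `recency` so old rewards decay away
    on non-stationary problems."""
    return max(1.0 / n, recency)

# usage: two arms whose reward means swap halfway through
random.seed(1)
counts, values = [0, 0], [0.0, 0.0]
for t in range(1, 1001):
    means = (0.2, 0.8) if t <= 500 else (0.8, 0.2)  # non-stationary rewards
    a = ucb_select(counts, values, t)
    r = means[a] + random.gauss(0.0, 0.1)
    counts[a] += 1
    values[a] += adaptive_step(counts[a]) * (r - values[a])  # incremental mean
```

After the swap the floored step size lets the estimate of the newly-best arm recover, which is the behavior the abstract attributes to automatic step-size computation.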


Author(s):  
Lei Zhang ◽  
Chaofeng Zhang ◽  
Mengya Liu

Based on the relationship between truncation error and step size for two implicit second-order-derivative multistep formulas constructed from the Hermite interpolation polynomial, a variable-order, variable-step-size numerical method for solving differential equations is designed. The stability properties of the formulas are discussed and their stability regions are analyzed. The derived methods are applied to a simulation problem. The results show that the numerical method satisfies the required accuracy while reducing the number of calculation steps and accelerating the computation.
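The Hermite-based implicit formulas themselves are not given in the abstract; the generic mechanism of driving step size from a local truncation-error estimate can be sketched with a standard embedded pair instead (Heun with Euler embedded, and the usual `h * (tol/err)^(1/(p+1))` controller). This is a stand-in under those assumptions, not the authors' method.

```python
import math

def adapt_step(h, err, tol, order, fac=0.9, fac_min=0.2, fac_max=5.0):
    """Classic controller: h_new = h * fac * (tol/err)^(1/(order+1)),
    clamped to avoid wild step-size jumps."""
    if err == 0.0:
        return h * fac_max
    return h * min(fac_max, max(fac_min, fac * (tol / err) ** (1.0 / (order + 1))))

def heun_adaptive(f, t, y, t_end, h=0.1, tol=1e-6):
    """Heun (order 2) with embedded Euler (order 1) to estimate the
    local truncation error and control the step size."""
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1
        y_heun = y + 0.5 * h * (k1 + k2)
        err = abs(y_heun - y_euler)       # local truncation-error estimate
        if err <= tol:
            t, y = t + h, y_heun          # accept the step
        h = adapt_step(h, err, tol, order=1)  # grow or shrink either way
    return y

# y' = y, y(0) = 1  =>  y(1) = e
approx = heun_adaptive(lambda t, y: y, 0.0, 1.0, 1.0)
```

Rejected steps are simply retried with the shrunken step, so accuracy is maintained without a fixed grid; this is the variable-step-size behavior the abstract credits with reducing the number of calculation steps.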

