Online Prediction Problems with Variation

Author(s):  
Chia-Jung Lee ◽  
Shi-Chun Tsai ◽  
Ming-Chuan Yang
2014 ◽  
Vol 7 (1) ◽  
pp. 107

1970 ◽  
Vol 1 (3) ◽  
pp. 181-205 ◽  
Author(s):  
ERIK ERIKSSON

The term “stochastic hydrology” implies a statistical approach to hydrologic problems, as opposed to classical hydrology, which can be considered deterministic in its approach. During the International Hydrology Symposium, held 6-8 September 1967 at Fort Collins, a number of hydrology papers were presented, consisting to a large extent of studies on long records of hydrological elements such as river run-off, these being treated as time series in the statistical sense. This approach is, no doubt, important for future work, especially in relation to prediction problems, and there seems to be no fundamental difficulty in introducing stochastic concepts into various hydrologic models. Some developmental work is required, however (not to mention educational work among hydrologists), before the full benefit of the technique is obtained. The present paper is partly an exercise, far from complete, in the statistical study of hydrological time series, and partly an effort to interpret certain features of such time series from a physical point of view. The material used is 30 years of groundwater level observations in an esker south of Uppsala, observations recently discussed by Hallgren & Sandsborg (1968).


Author(s):  
Andrew Jacobsen ◽  
Matthew Schlegel ◽  
Cameron Linke ◽  
Thomas Degris ◽  
Adam White ◽  
...  

This paper investigates different vector step-size adaptation approaches for non-stationary, online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update: a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems, but have not been as extensively explored as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even those with accelerations, such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems, and is competitive with the state-of-the-art method on a large-scale, time-series prediction problem on real data from a mobile robot.
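As a concrete illustration of the two families the abstract compares, the sketch below implements an online linear predictor with (a) RMSProp-style vector step-sizes, where each weight's update is scaled by a running statistic of its squared gradient, and (b) a classic meta-descent rule in the style of Sutton's IDBD, where each weight's log step-size is itself adapted by gradient descent on the prediction error. This is a minimal sketch for the linear case only; AdaGain itself is more general and is not reproduced here, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def rmsprop_linear(X, y, alpha=0.1, beta=0.9, eps=1e-8):
    """Quasi-second-order family: scale each weight's update by a
    running average of its squared gradient (RMSProp-style)."""
    n = X.shape[1]
    w, v = np.zeros(n), np.zeros(n)
    errors = []
    for x, target in zip(X, y):
        delta = target - w @ x             # prediction error
        g = -delta * x                     # gradient of 0.5 * delta**2
        v = beta * v + (1 - beta) * g * g  # per-weight gradient statistic
        w -= alpha / (np.sqrt(v) + eps) * g
        errors.append(abs(delta))
    return w, errors

def idbd_linear(X, y, theta=0.01, init_alpha=0.05):
    """Meta-descent family: adapt a per-weight log step-size by
    gradient descent on the prediction error (IDBD-style)."""
    n = X.shape[1]
    w = np.zeros(n)
    log_alpha = np.full(n, np.log(init_alpha))
    h = np.zeros(n)                        # trace of recent weight updates
    errors = []
    for x, target in zip(X, y):
        delta = target - w @ x
        log_alpha += theta * delta * x * h     # meta-gradient step
        step = np.exp(log_alpha)               # per-weight step-sizes
        w += step * delta * x
        h = h * np.clip(1.0 - step * x * x, 0.0, None) + step * delta * x
        errors.append(abs(delta))
    return w, errors
```

On a stationary linear target both learners drive the error down; the practical difference the abstract points to appears under non-stationarity, where a meta-descent rule can grow a weight's step-size again after the target drifts, while accumulated gradient statistics adapt more slowly.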

