Gradient Descent Converges to Minimizers: Optimal and Adaptive Step-Size Rules

Author(s): Bin Shi ◽ S. S. Iyengar


2016 ◽ Vol 26 (04) ◽ pp. 1650056
Author(s): Auni Aslah Mat Daud

In this paper, we present the application of the gradient descent of indeterminism (GDI) shadowing filter to a chaotic system, namely the ski-slope model. The paper focuses on the quality of the estimated states and their usability for forecasting. One main problem is that the existing GDI shadowing filter fails to stabilize the convergence of the root mean square error and the last-point error for the ski-slope model. Furthermore, there are unexpected cases in which better state estimates give worse forecasts than worse state estimates. We investigate these unexpected cases in particular and show how the presence of the humps contributes to them. Nevertheless, the results show that the GDI shadowing filter can be applied successfully to the ski-slope model with only a slight modification, namely the introduction of an adaptive step size to ensure the convergence of the indeterminism. We investigate its advantages over a fixed step size and how it improves the performance of the shadowing filter.
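As a rough illustration of the idea, the sketch below implements a generic gradient-descent shadowing filter with an accept/reject adaptive step-size rule: the whole pseudo-trajectory is descended on a mismatch ("indeterminism") cost, and the step is shrunk whenever the cost rises. The placeholder system (a logistic map standing in for the ski-slope model), the cost definition, and the step-size constants are assumptions for illustration, not the filter or tuning used in the paper.

```python
import numpy as np

def f(x, r=3.9):
    """Placeholder chaotic map (logistic map), used instead of the ski-slope model."""
    return r * x * (1.0 - x)

def df(x, r=3.9):
    """Derivative of the placeholder map."""
    return r * (1.0 - 2.0 * x)

def indeterminism(traj):
    """Mismatch cost I(x) = sum_k (x_{k+1} - f(x_k))^2 over the pseudo-trajectory."""
    mismatch = traj[1:] - f(traj[:-1])
    return np.sum(mismatch ** 2)

def gdi_filter(obs, n_iter=500, step=0.1, shrink=0.5, grow=1.1):
    """Descend the indeterminism cost starting from the noisy observations,
    shrinking the step when the cost rises and growing it slightly otherwise."""
    traj = obs.copy()
    cost = indeterminism(traj)
    for _ in range(n_iter):
        mismatch = traj[1:] - f(traj[:-1])            # e_k = x_{k+1} - f(x_k)
        grad = np.zeros_like(traj)
        grad[1:] += 2.0 * mismatch                    # d/dx_{k+1}
        grad[:-1] += -2.0 * mismatch * df(traj[:-1])  # d/dx_k
        candidate = traj - step * grad
        new_cost = indeterminism(candidate)
        if new_cost < cost:                           # accept and enlarge the step
            traj, cost = candidate, new_cost
            step *= grow
        else:                                         # reject and shrink the step (adaptive rule)
            step *= shrink
    return traj

# Usage: filter noisy observations of a short chaotic trajectory.
rng = np.random.default_rng(0)
true = np.empty(50)
true[0] = 0.3
for k in range(49):
    true[k + 1] = f(true[k])
noisy = true + 0.01 * rng.standard_normal(50)
filtered = gdi_filter(noisy)
print("RMSE before:", np.sqrt(np.mean((noisy - true) ** 2)),
      "after:", np.sqrt(np.mean((filtered - true) ** 2)))
```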


2012 ◽ Vol 16 (S3) ◽ pp. 355-375
Author(s): Olena Kostyshyna

An adaptive step-size algorithm [Kushner and Yin, Stochastic Approximation and Recursive Algorithms and Applications, 2nd ed., New York: Springer-Verlag (2003)] is used to model time-varying learning, and its performance is illustrated in the environment of Marcet and Nicolini [American Economic Review 93 (2003), 1476–1498]. The resulting model gives qualitatively similar results to those of Marcet and Nicolini and performs somewhat better quantitatively, based on the criterion of mean squared error. The model generates increasing gain during hyperinflations and decreasing gain after hyperinflations end, which matches findings in the data. An agent using this model behaves cautiously when faced with sudden changes in policy and is able to recognize a regime change after acquiring sufficient information.
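For readers unfamiliar with adaptive-gain recursions, the sketch below illustrates the general idea behind a Kushner-and-Yin-style adaptive step size: the gain that updates the beliefs is itself nudged by a slower gradient step on the squared forecast error, so it rises when forecast errors are persistently one-sided and falls back in stable periods. The data-generating process, the gain bounds, and the meta-learning rate mu are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np

def adaptive_gain_learning(y, gain0=0.05, mu=1e-4, gain_min=1e-3, gain_max=0.5):
    belief = y[0]          # current forecast of the observed series
    gain = gain0           # time-varying gain (step size)
    psi = 0.0              # sensitivity of the forecast with respect to the gain
    beliefs, gains = [], []
    for t in range(1, len(y)):
        err = y[t] - belief                      # forecast error
        # Gradient step on the squared forecast error with respect to the gain.
        gain = np.clip(gain + mu * err * psi, gain_min, gain_max)
        # Recursion for the sensitivity of the belief to the gain.
        psi = (1.0 - gain) * psi + err
        # Belief update with the current gain (constant-gain form).
        belief = belief + gain * err
        beliefs.append(belief)
        gains.append(gain)
    return np.array(beliefs), np.array(gains)

# Usage: the gain should rise when the series drifts rapidly (a stylized
# "hyperinflation" episode) and fall back once the environment stabilizes.
rng = np.random.default_rng(1)
calm = 2.0 + 0.1 * rng.standard_normal(300)
surge = 2.0 + 0.05 * np.arange(200) + 0.1 * rng.standard_normal(200)
series = np.concatenate([calm, surge, calm])
_, gains = adaptive_gain_learning(series)
print("mean gain, calm vs. surge:", gains[:300].mean(), gains[300:500].mean())
```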


Author(s): Shuo Peng ◽ A.-J. Ouyang ◽ Jeff Jun Zhang

To address the low search accuracy of the basic invasive weed optimization algorithm and its tendency to become trapped in local extrema, this paper proposes an adaptive invasive weed optimization (AIWO) algorithm. The algorithm uses the initial step size and the final step size to define an adaptive step size that guides the global search. It is tested on 20 well-known benchmark functions, and the results show that the AIWO algorithm achieves better global search capability, faster convergence, and higher computational accuracy than other advanced algorithms.
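By way of illustration, the sketch below shows an invasive weed optimization loop whose seed-dispersal step size is adapted between an initial and a final value over the iterations. The schedule used here is the standard nonlinearly decreasing rule; the AIWO paper's specific adaptation is not reproduced, and the benchmark (sphere function), population sizes, and exponent are assumptions for the example.

```python
import numpy as np

def adaptive_sigma(it, it_max, sigma_init=1.0, sigma_final=0.01, n=3):
    """Step size shrinking from sigma_init to sigma_final as iterations proceed."""
    return ((it_max - it) / it_max) ** n * (sigma_init - sigma_final) + sigma_final

def sphere(x):
    """Illustrative benchmark: sum of squares."""
    return np.sum(x ** 2, axis=-1)

def iwo(dim=10, pop0=10, pop_max=25, seeds_min=1, seeds_max=5,
        it_max=200, bounds=(-5.0, 5.0), rng=np.random.default_rng(2)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop0, dim))
    for it in range(it_max):
        fit = sphere(pop)
        best, worst = fit.min(), fit.max()
        sigma = adaptive_sigma(it, it_max)
        offspring = []
        for weed, f in zip(pop, fit):
            # Fitter weeds produce more seeds (linear ranking).
            ratio = 1.0 if worst == best else (worst - f) / (worst - best)
            n_seeds = int(round(seeds_min + ratio * (seeds_max - seeds_min)))
            # Seeds are scattered around the parent with the adaptive step size.
            seeds = weed + sigma * rng.standard_normal((n_seeds, dim))
            offspring.append(np.clip(seeds, lo, hi))
        pop = np.vstack([pop] + offspring)
        # Competitive exclusion: keep only the best pop_max individuals.
        pop = pop[np.argsort(sphere(pop))[:pop_max]]
    return pop[0], sphere(pop[0])

best_x, best_f = iwo()
print("best objective value:", best_f)
```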

