A Transformation of Accelerated Double Step Size Method for Unconstrained Optimization

2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Predrag S. Stanimirović ◽  
Gradimir V. Milovanović ◽  
Milena J. Petrović ◽  
Nataša Z. Kontrec

A reduction of the originally double step size iteration to a single step length scheme is derived under a proposed condition that relates the two step lengths in the accelerated double step size gradient descent scheme. The proposed transformation is tested numerically. The results confirm substantial progress over the classically defined single step size accelerated gradient descent method with respect to all analyzed characteristics: number of iterations, CPU time, and number of function evaluations. Linear convergence of the derived method is proved.
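The reduced scheme above is a gradient descent iteration governed by a single backtracking step length. As a rough illustration of that building block (not the authors' exact accelerated scheme), the following sketch minimizes a small quadratic with Armijo backtracking; the objective, tolerances, and parameter names are illustrative assumptions:

```python
import math

def grad_descent_backtracking(f, grad, x0, beta=0.5, sigma=1e-4,
                              tol=1e-8, max_iter=1000):
    """Gradient descent with a single backtracking (Armijo) step length.

    A double step-size variant would carry two parameters per iteration;
    the condition in the paper collapses them into one length t_k.
    """
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if math.sqrt(gnorm2) < tol:
            break
        fx, t = f(x), 1.0
        # Shrink t until the sufficient-decrease (Armijo) condition holds.
        while f([xi - t * gi for xi, gi in zip(x, g)]) > fx - sigma * t * gnorm2:
            t *= beta
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Toy problem: minimum of f at (1, -3).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 3.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)]
xmin = grad_descent_backtracking(f, grad, [0.0, 0.0])
```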

2018 ◽  
Vol 98 (2) ◽  
pp. 331-338 ◽  
Author(s):  
STEFAN PANIĆ ◽  
MILENA J. PETROVIĆ ◽  
MIROSLAVA MIHAJLOV CAREVIĆ

We improve the convergence properties of the iterative scheme for solving unconstrained optimisation problems introduced in Petrovic et al. [‘Hybridization of accelerated gradient descent method’, Numer. Algorithms (2017), doi:10.1007/s11075-017-0460-4] by optimising the value of the initial step length parameter in the backtracking line search procedure. We prove the validity of the algorithm and illustrate its advantages by numerical experiments and comparisons.
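The quantity being tuned here is the initial trial step handed to the backtracking procedure. The sketch below contrasts a fixed initial step with a simple warm-start heuristic (seeding each search from the previously accepted step) to show why this choice drives the function-evaluation count; the heuristic and the stiff test problem are illustrative assumptions, not the optimised value derived in the paper:

```python
def backtracking(f, fx, x, g, gnorm2, t0, beta=0.5, sigma=1e-4):
    """Armijo backtracking from the trial step t0; also counts
    how many function evaluations the search needed."""
    t, evals = t0, 0
    while True:
        evals += 1
        trial = [xi - t * gi for xi, gi in zip(x, g)]
        if f(trial) <= fx - sigma * t * gnorm2:
            return t, evals
        t *= beta

def minimize(f, grad, x0, warm_start, tol=1e-8, max_iter=500):
    """Gradient descent; optionally seed each line search with the
    previously accepted step instead of a fixed initial length."""
    x, t_prev, total_evals = list(x0), 1.0, 0
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < tol * tol:
            break
        t0 = min(2.0 * t_prev, 1.0) if warm_start else 1.0
        t, evals = backtracking(f, f(x), x, g, gnorm2, t0)
        total_evals += evals
        t_prev = t
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x, total_evals

# Stiff 1D quadratic: a poor initial trial step forces many rejections.
f = lambda x: 50.0 * x[0] ** 2
grad = lambda x: [100.0 * x[0]]
x_cold, evals_cold = minimize(f, grad, [1.0], warm_start=False)
x_warm, evals_warm = minimize(f, grad, [1.0], warm_start=True)
```

On this problem the warm-started search rejects far fewer trial steps, which is exactly the cost the paper attacks by choosing the initial step length optimally.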


Symmetry ◽  
2019 ◽  
Vol 11 (12) ◽  
pp. 1512
Author(s):  
Kai Xu ◽  
Zhi Xiong

Existing tensor completion methods all require some hyperparameters; these hyperparameters determine each method's performance and are difficult to tune. In this paper, we propose a novel nonparametric tensor completion method, which formulates tensor completion as an unconstrained optimization problem and designs an efficient iterative method to solve it. In each iteration, we not only calculate the missing entries with the aid of data correlation, but also consider the low rank of the tensor and the convergence speed of the iteration. Our iteration is based on the gradient descent method and approximates the gradient descent direction with tensor matricization and singular value decomposition. Because every dimension of a tensor plays a symmetric role, the optimal unfolding direction may differ from iteration to iteration, so we select it in each iteration by the scaled latent nuclear norm. Moreover, we design a formula for the iteration step size based on a nonconvex penalty. During the iterative process, we store the tensor in sparse format and adopt the power method to compute the maximum singular value quickly. Experiments on image inpainting and link prediction show that our method is competitive with six state-of-the-art methods.
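The core loop described above (matricize, take a low-rank SVD approximation, fold back, restore the observed entries, and choose an unfolding mode per iteration) can be sketched as follows. The rank, iteration count, and mode-selection rule are simplifying assumptions; in particular, picking the mode with the largest leading singular value is only a crude stand-in for the paper's scaled latent nuclear norm criterion:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move the given axis to the front and flatten."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for the given mode and original tensor shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def complete(T_obs, mask, rank=1, n_iter=300):
    """Iterative completion: unfold, replace by the best rank-r SVD
    approximation, fold back, then restore the observed entries."""
    X = T_obs.copy()
    for _ in range(n_iter):
        best = max(range(X.ndim),
                   key=lambda m: np.linalg.svd(unfold(X, m), compute_uv=False)[0])
        U, s, Vt = np.linalg.svd(unfold(X, best), full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r approximation
        X = fold(low, best, X.shape)
        X[mask] = T_obs[mask]          # project back onto the observed data
    return X

# Rank-1 test tensor with one hidden entry.
a, b, c = np.array([1.0, 2.0]), np.array([1.0, 3.0]), np.array([1.0, 2.0, 4.0])
T = np.einsum('i,j,k->ijk', a, b, c)
mask = np.ones(T.shape, dtype=bool)
mask[1, 1, 2] = False                  # hide T[1, 1, 2] = 24
T_obs = np.where(mask, T, 0.0)
X = complete(T_obs, mask)
```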


2018 ◽  
Vol 10 (03) ◽  
pp. 1850004
Author(s):  
Grant Sheen

Wireless recording and real-time classification of brain waves are essential steps towards future wearable devices that assist Alzheimer's patients in conveying their thoughts. This work is concerned with efficient computation of a dimension-reduced neural network (NN) model on Alzheimer's patient data recorded by a wireless headset. Because wireless recording uses far fewer sensors than the electrodes of a traditional wired cap, and an Alzheimer's patient has a shorter attention span than a normal person, the data is much more restrictive than is typical in neural robotics and mind-controlled games. To overcome this challenge, an alternating minimization (AM) method is developed for network training. AM minimizes a nonsmooth and nonconvex objective function one variable at a time while fixing the rest. The subproblem for each variable is piecewise convex with a finite number of minima. The overall iterative AM method is descending and free of the step size (learning rate) required by the standard gradient descent method. The proposed model, trained by the AM method, significantly outperforms the standard NN model trained by stochastic gradient descent in classifying four daily thoughts, reaching accuracies around 90% for Alzheimer's patients. Curved decision boundaries of the proposed model with multiple hidden neurons are found analytically, establishing the nonlinear nature of the classification.
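The alternating minimization pattern described above (minimize over one variable at a time, descend monotonically, no step size) can be sketched generically. The grid-search subproblem solver and the toy nonsmooth objective below are illustrative assumptions, not the paper's exact piecewise-convex solves or its NN training objective:

```python
def coordinate_descent(f, x0, lo=-10.0, hi=10.0, sweeps=20, grid=201):
    """Alternating minimization: fix all variables but one and minimize
    the one-dimensional subproblem (here by grid search, standing in for
    an exact piecewise-convex solve). Each accepted update can only
    decrease f, so the scheme is descending and needs no step size."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            cand = [lo + (hi - lo) * k / (grid - 1) for k in range(grid)]
            best = min(cand, key=lambda v: f(x[:i] + [v] + x[i + 1:]))
            if f(x[:i] + [best] + x[i + 1:]) <= f(x):   # accept only descents
                x[i] = best
    return x

# Nonsmooth objective with minimum at (2, -1).
f = lambda x: abs(x[0] - 2.0) + (x[1] + 1.0) ** 2
x = coordinate_descent(f, [0.0, 0.0])
```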


2017 ◽  
Vol 79 (3) ◽  
pp. 769-786 ◽  
Author(s):  
Milena Petrović ◽  
Vladimir Rakočević ◽  
Nataša Kontrec ◽  
Stefan Panić ◽  
Dejan Ilić

Filomat ◽  
2009 ◽  
Vol 23 (3) ◽  
pp. 23-36 ◽  
Author(s):  
Predrag Stanimirovic ◽  
Marko Miladinovic ◽  
Snezana Djordjevic

We introduce an algorithm for unconstrained optimization based on reducing the modified Newton method with line search to a gradient descent method. The main idea in the construction of the algorithm is the approximation of the Hessian by a diagonal matrix. The step length calculation is based on the Taylor expansion at two successive iterative points and on the backtracking line search procedure.
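One common way to realize this idea is to fit a scalar gamma (so the Hessian approximation is gamma times the identity, a special diagonal matrix) from the second-order Taylor expansion through two successive iterates, combined with backtracking. The sketch below follows that recipe as an illustration in the spirit of the method; the exact update and parameter values are assumptions, not the authors' precise formulas:

```python
def scalar_hessian_gd(f, grad, x0, beta=0.5, sigma=1e-4,
                      tol=1e-8, max_iter=500):
    """Gradient descent with the Hessian approximated by gamma * I.

    gamma is recovered from the Taylor model at two successive iterates:
        f(x + d) ~ f(x) + g.d + (gamma / 2) ||d||^2.
    """
    x, gamma = list(x0), 1.0
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < tol * tol:
            break
        fx, t = f(x), 1.0
        while True:                         # backtrack along -(t/gamma) g
            x_new = [xi - (t / gamma) * gi for xi, gi in zip(x, g)]
            if f(x_new) <= fx - sigma * (t / gamma) * gnorm2:
                break
            t *= beta
        d = [xn - xi for xn, xi in zip(x_new, x)]
        dd = sum(di * di for di in d)
        gd = sum(gi * di for gi, di in zip(g, d))
        gamma_new = 2.0 * (f(x_new) - fx - gd) / dd   # solve the Taylor model
        if gamma_new > 0.0:                 # keep the model convex
            gamma = gamma_new
        x = x_new
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 3.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)]
xmin = scalar_hessian_gd(f, grad, [0.0, 0.0])
```

On a quadratic, the fitted gamma lands between the smallest and largest Hessian eigenvalues, so the scaled step behaves like a cheap Newton step.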

