Unconstrained minimization: Recently Published Documents

Total documents: 255 (last five years: 25)
H-index: 27 (last five years: 2)

Mathematics, 2022, Vol. 10 (2), pp. 259
Author(s): Milena J. Petrović, Dragana Valjarević, Dejan Ilić, Aleksandar Valjarević, Julija Mladenović

We propose an improved variant of the accelerated gradient optimization models for solving unconstrained minimization problems. Merging the positive features of both the double-direction and the double-step-size accelerated gradient models, we define an iterative method of simpler form that is generally more effective. Convergence analysis shows that the defined iterative method is at least linearly convergent for uniformly convex and strictly convex functions. Numerical test results confirm the efficiency of the developed model with respect to CPU time, number of iterations, and number of function evaluations.
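As a rough illustration of this class of methods (not the authors' exact iteration), the sketch below runs a gradient iteration x_{k+1} = x_k - (t_k / gamma_k) * grad f(x_k), where gamma_k is a scalar acceleration (curvature) parameter and t_k comes from a backtracking line search. The gamma_k update rule and all names here are illustrative assumptions:

```python
import numpy as np

def accel_gradient(f, grad, x0, tol=1e-8, max_iter=1000):
    """Sketch of a scalar-accelerated gradient iteration (hypothetical
    simplification of the double-step-size models): the direction is the
    antigradient scaled by a curvature estimate gamma, and the step t is
    chosen by backtracking."""
    x = x0.astype(float)
    gamma = 1.0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / gamma
        t = 1.0
        # backtracking (Armijo) line search on the accelerated direction
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        # update the scalar curvature estimate from consecutive iterates
        s, y = x_new - x, grad(x_new) - g
        gamma = max((y @ s) / (s @ s), 1e-8)
        x = x_new
    return x

# quadratic test: f(x) = 0.5 x^T A x - b^T x, whose minimizer solves A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = accel_gradient(f, grad, np.zeros(2))
```

On a convex quadratic the curvature estimate plays the role of a cheap Hessian surrogate, which is the intuition behind scalar acceleration parameters in this family of methods.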


Author(s): Alberto De Marchi

This paper introduces QPDO, a primal-dual method for convex quadratic programs that builds upon and weaves together the proximal point algorithm and a damped semismooth Newton method. The outer proximal regularization yields a numerically stable method, and we interpret the proximal operator as the unconstrained minimization of the primal-dual proximal augmented Lagrangian function. This allows the inner Newton scheme to exploit sparse symmetric linear solvers and multi-rank factorization updates. Moreover, the linear systems are always solvable independently of the problem data, and an exact line search can be performed. The proposed method can handle degenerate problems, provides a mechanism for infeasibility detection, and can exploit warm starting, while requiring only convexity. We present details of our open-source C implementation and report on numerical results against state-of-the-art solvers. QPDO proves to be a simple, robust, and efficient numerical method for convex quadratic programming.
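The effect of the outer proximal regularization can be illustrated on the simplest case, an equality-constrained QP. QPDO itself handles general convex QPs and solves the inner problem with a semismooth Newton method, so the fixed regularization parameters and the dense direct solve below are simplifying assumptions:

```python
import numpy as np

def prox_qp(Q, q, A, b, sigma=1e-6, delta=1e-6, iters=50):
    """Sketch of a primal-dual proximal-point outer loop for
    min 0.5 x'Qx + q'x  s.t.  Ax = b.  Each iteration solves the
    regularized KKT system; the (1,1)/(2,2) regularization makes the
    matrix quasi-definite, hence solvable regardless of problem data."""
    n, m = Q.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    K = np.block([[Q + sigma * np.eye(n), A.T],
                  [A, -delta * np.eye(m)]])
    for _ in range(iters):
        # stationarity: Qx + q + A'y + sigma(x - x_k) = 0
        # feasibility:  Ax - b - delta(y - y_k) = 0
        rhs = np.concatenate([sigma * x - q, b - delta * y])
        z = np.linalg.solve(K, rhs)
        x, y = z[:n], z[n:]
    return x, y

# tiny example: minimize 0.5*(x0^2 + x1^2) subject to x0 + x1 = 1
Q = np.eye(2); q = np.zeros(2)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, y = prox_qp(Q, q, A, b)
```

The fixed points of this loop are exactly the KKT pairs of the original QP, which is why the regularization does not bias the solution.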


Author(s): Mohammed Yusuf Waziri, Kabiru Ahmed, Abubakar Sani Halilu, Jamilu Sabiu

Notwithstanding its efficiency and attractive properties, most research on the iterative scheme of Hager and Zhang [Pac. J. Optim. 2(1) (2006) 35-58] has focused on unconstrained minimization problems. Inspired by this and by recent works of Waziri et al. [Appl. Math. Comput. 361 (2019) 645-660], Sabi’u et al. [Appl. Numer. Math. 153 (2020) 217-233], and Sabi’u et al. [Int. J. Comput. Meth., doi:10.1142/S0219876220500437], this paper extends the Hager-Zhang (HZ) approach to nonlinear monotone systems with convex constraints. Two new HZ-type iterative methods are developed by combining the projection method of Solodov and Svaiter [Springer, pp. 355-369, 1998] with HZ-type search directions obtained from two new parameter choices for the Hager-Zhang scheme. The first choice is obtained by minimizing the condition number of a modified HZ direction matrix, while the second is realized through singular value analysis, minimizing the spectral condition number of the nonsingular HZ search direction matrix. Notable properties of the schemes include applicability to non-smooth functions and the generation of descent directions. Under standard assumptions, global convergence of the methods is established, and numerical experiments against recent methods from the literature indicate that the proposed methods are promising. Their effectiveness is further demonstrated in applications to sparse signal and image reconstruction problems, where they outperform some recent schemes from the literature.
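The Solodov-Svaiter projection framework that the paper builds on can be sketched as follows; for brevity the sketch uses the plain direction d = -F(x) rather than the paper's HZ-type directions, and the constraint set is taken to be the nonnegative orthant:

```python
import numpy as np

def solodov_svaiter(F, x0, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=500):
    """Sketch of the hyperplane-projection method for a monotone system
    F(x) = 0 over x >= 0.  A derivative-free line search finds a trial
    point z; the next iterate projects x onto the constraint set after
    stepping along F(z), which separates x from the solution set."""
    x = np.maximum(x0, 0.0)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx
        t = 1.0
        # line search: accept t when -F(x + t d)^T d >= sigma * t * ||d||^2
        while -(F(x + t * d) @ d) < sigma * t * (d @ d):
            t *= beta
        z = x + t * d
        Fz = F(z)
        lam = (Fz @ (x - z)) / (Fz @ Fz)   # hyperplane projection step
        x = np.maximum(x - lam * Fz, 0.0)  # projection onto x >= 0
    return x

# monotone test system: F(x) = x + sin(x), unique root x = 0
F = lambda x: x + np.sin(x)
x = solodov_svaiter(F, np.full(3, 2.0))
```

The line search is derivative-free, which is what makes this framework attractive for non-smooth monotone systems.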


2021
Author(s): Dimosthenis Pasadakis, Christie Louis Alappat, Olaf Schenk, Gerhard Wellein

Nonlinear reformulations of the spectral clustering method have gained considerable recent attention due to their numerical benefits and their solid mathematical background. We present a novel direct multiway spectral clustering algorithm in the p-norm, for $$p\in (1,2]$$. The problem of computing multiple eigenvectors of the graph p-Laplacian, a nonlinear generalization of the standard graph Laplacian, is recast as an unconstrained minimization problem on a Grassmann manifold. The value of p is reduced in a pseudocontinuous manner, promoting sparser solution vectors that correspond to optimal graph cuts as p approaches one. Monitoring the monotonic decrease of the balanced graph cuts guarantees that we obtain the best available solution from the p-levels considered. We demonstrate the effectiveness and accuracy of our algorithm on various artificial test cases. Our numerical examples and comparative results with various state-of-the-art clustering methods indicate that the proposed method obtains high-quality clusters both in terms of balanced graph cut metrics and in terms of labelling accuracy. Furthermore, we conduct studies on the classification of facial images and handwritten characters to demonstrate applicability to real-world datasets.
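As a point of reference for the continuation in p: the p = 2 case reduces to classical spectral clustering with the standard graph Laplacian. The sketch below computes only that starting point (the Grassmann-manifold solver for p < 2 is not reproduced here):

```python
import numpy as np

def fiedler_vector(W):
    """p = 2 starting point of the continuation: the eigenvector of the
    second-smallest eigenvalue of the standard graph Laplacian L = D - W.
    Its sign pattern gives a two-way cut of the graph."""
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)     # eigh returns ascending eigenvalues
    return vecs[:, 1]

# two strongly coupled pairs joined by one weak edge (weight 0.1):
# the sign pattern of the vector cuts the weak edge
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
v = fiedler_vector(W)
labels = v > 0
```

Reducing p below 2 sharpens this relaxation: the entries of the p-eigenvectors approach a piecewise-constant indicator structure as p approaches one.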


Author(s): Yurii Nesterov

In this paper, we present a new framework of bi-level unconstrained minimization for the development of accelerated methods in convex programming. These methods use approximations of high-order proximal points, which are solutions of auxiliary parametric optimization problems. For computing these points, we can use different methods, in particular lower-order schemes. This opens the possibility for the latter methods to surpass the traditional limits of complexity theory. As an example, we obtain a new second-order method with convergence rate $$O\left( k^{-4}\right)$$, where k is the iteration counter. This rate is better than the maximal possible rate of convergence for methods of this type, as applied to functions with Lipschitz continuous Hessian. We also present new methods with an exact auxiliary search procedure, which have the rate of convergence $$O\left( k^{-(3p+1)/2}\right)$$, where $$p \ge 1$$ is the order of the proximal operator. The auxiliary problem at each iteration of these schemes is convex.
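For orientation, the pth-order proximal-point step referred to here has (up to the exact scaling convention used in the paper) the form

$$\mathrm{prox}^{\,p}_{f,H}(\bar{x}) \;=\; \arg\min_{x}\Big\{ f(x) + \frac{H}{p+1}\,\|x-\bar{x}\|^{p+1} \Big\},$$

which for $$p = 1$$ recovers the classical proximal operator. For convex f the regularization term is convex for any $$p \ge 1$$, so the auxiliary problem remains convex, as the abstract states.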


2021, Vol. 2021, pp. 1-8
Author(s): Lili Wang, Hexiang Lv, Deyun Chen, Hailu Yang, Mingyu Li

In image reconstruction for the electrical capacitance tomography (ECT) system, applying total least squares theory transforms the ill-posed problem into a nonlinear unconstrained minimization problem, which avoids computing a matrix inverse. However, the iterative updates of the coefficient matrix themselves produce an ill-posed problem. To limit the effect of this problem on the final reconstruction accuracy, and consistent with the operating principle of the ECT system, the coefficient matrix is updated in a targeted way during the total least squares iteration: the new coefficient matrix is computed, and the regularization matrix is then corrected according to an adaptively targeted singular value, which reduces the ill-posedness. In this study, the total least squares iterative method is improved by introducing the errors-in-variables (EIV) model to handle errors in both the measured capacitance data and the coefficient matrix. The effect of noise on the measured capacitance data is reduced, and high-quality reconstructed images are finally computed iteratively.
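The errors-in-variables idea behind total least squares can be illustrated with the classical SVD-based TLS solution; this is textbook TLS, not the paper's targeted ECT iteration:

```python
import numpy as np

def tls(A, b):
    """Classical total least squares for Ax ~ b (errors-in-variables:
    perturbations allowed in both A and b).  The solution comes from the
    right singular vector of [A | b] with the smallest singular value."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                 # singular vector of the smallest singular value
    return -v[:n] / v[n]

# consistent system: TLS recovers the exact solution
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true
x = tls(A, b)
```

For noisy data, TLS perturbs A and b jointly by the minimum Frobenius-norm amount that makes the system consistent, which is the EIV assumption the abstract builds on.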


Author(s): Mina Yavari, Alireza Nazemi

In this paper, stabilization of nonlinear fractional-order systems with unknown control coefficients is considered, where the dynamic control system depends on the Caputo fractional derivative. For the nonlinear fractional control (NFC) system, an infinite-horizon optimal control (OC) problem is first proposed, and it is shown that the resulting OC problem yields an asymptotically stabilizing control for the NFC system. With the help of an approximation, the Caputo derivative is replaced with an integer-order derivative. The resulting infinite-horizon OC problem is then converted into an equivalent finite-horizon one. Based on the Pontryagin minimum principle for OC problems and by constructing an error function, an unconstrained minimization problem is defined. In this optimization problem, trial solutions are used for the state, costate, and control functions, where the trial solutions are constructed using a two-layered perceptron neural network. A learning algorithm with convergence properties is also provided. Two numerical examples are presented to illustrate the main results.
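The trial-solution idea can be sketched on a toy integer-order problem: the trial function x(t) = x0 + t * N(t) satisfies the initial condition by construction, and the residual of the differential equation is minimized at collocation points. To keep the sketch linear, the hidden layer below is fixed and random, so only the output weights are trained; the paper trains a full two-layered perceptron on a nonlinear error function for a fractional-order OC problem, so everything here is a simplified assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Test ODE (integer order): x'(t) = -x(t), x(0) = 1, exact solution e^{-t}.
# Trial solution: x(t) = 1 + t * N(t), with N a one-hidden-layer tanh net
# whose hidden weights are FIXED at random; training the output weights w
# on the collocation residual is then a linear least-squares problem.
m = 30                                  # hidden units
a = rng.uniform(-2.0, 2.0, m)           # fixed hidden weights
c = rng.uniform(-2.0, 2.0, m)           # fixed hidden biases
t = np.linspace(0.0, 1.0, 50)           # collocation points

phi = np.tanh(np.outer(t, a) + c)       # hidden activations, shape (50, m)
dphi = (1.0 - phi**2) * a               # time derivative of each activation
# with x(t) = 1 + t * (phi @ w), the residual x' + x is linear in w:
# r(t) = 1 + [ (1 + t) * phi + t * dphi ] @ w
M = (1.0 + t)[:, None] * phi + t[:, None] * dphi
w, *_ = np.linalg.lstsq(M, -np.ones_like(t), rcond=None)

x_trial = lambda s: 1.0 + s * (np.tanh(np.outer(np.atleast_1d(s), a) + c) @ w)
```

The same construction extends to state, costate, and control trial functions; the paper's contribution is coupling them through the Pontryagin conditions of the fractional problem.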


Author(s): P. Stetsyuk, M. Stetsyuk, D. Bragin, N. Molodyk

The paper describes a new approach to constructing algorithms for solving linear programming problems (LP problems) in which the number of constraints is much greater than the number of variables. It is based on a modification of the r-algorithm applied to the minimization of a nonsmooth function that is equivalent to the LP problem. The advantages of the approach are demonstrated on a linear robust optimization problem and on a robust parameter estimation problem using the least moduli method. The developed Octave programs are designed for LP problems with a very large number of constraints, for which standard linear programming software is either inapplicable or impractical because it requires significant computing resources.

The material is presented in three sections. The first section describes, for the problem of minimizing a convex function, a modification of the r-algorithm with a constant coefficient of space dilation in the direction of the difference of two successive subgradients, and an adaptive step-size adjustment along the antisubgradient in the transformed space of variables. The software implementation of this modification is presented as the Octave function ralgb5a, which finds an approximation of the minimum point of a convex function or of the maximum point of a concave function. The code of the ralgb5a function is given with a brief description of its input and output parameters.

The second section presents a method for solving the LP problem using a nonsmooth penalty function in the form of a maximum function, constructing an auxiliary problem of unconstrained minimization of a convex piecewise linear function. Choosing a finite penalty coefficient ensures equivalence between the LP problem and the auxiliary problem, and the latter is solved using the ralgb5a program. Results of computational experiments in GNU Octave are presented for test LP problems with between two hundred thousand and fifty million constraints and between ten and fifty variables.

The third section presents the least moduli method, which is robust to abnormal observations ("outliers"). The method reduces to the unconstrained minimization of a convex piecewise linear function, again solved with the ralgb5a program. Results of computational experiments in GNU Octave are presented for test problems with a large number of observations (from two hundred thousand to five million) and a small number of unknown parameters (from ten to one hundred). They demonstrate the superiority of the developed programs over well-known linear programming software such as the GLPK package.

Keywords: robust optimization, linear programming problem, nonsmooth penalty function, r-algorithm, least moduli method, GNU Octave.
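The least moduli (least absolute deviations) criterion of the third section can be illustrated with a plain subgradient method on the convex piecewise linear objective; this is a simple stand-in for the ralgb5a r-algorithm, not its implementation:

```python
import numpy as np

def least_moduli(X, y, iters=5000, step=1.0):
    """Least-moduli fit: minimize sum_i |y_i - x_i^T beta| by a
    subgradient method with diminishing steps step/k.  The objective is
    convex piecewise linear, so a subgradient is -X^T sign(y - X beta)."""
    beta = np.zeros(X.shape[1])
    for k in range(1, iters + 1):
        r = y - X @ beta
        g = -X.T @ np.sign(r)          # subgradient of the objective
        beta = beta - (step / k) * g
    return beta

# constant model: the least-moduli fit is the sample median, so a single
# gross outlier barely moves it (the least-squares fit, the mean, would
# be pulled to about 20)
X = np.ones((5, 1))
y = np.array([0.1, -0.1, 0.0, 0.05, 100.0])
beta = least_moduli(X, y)
```

This robustness to outliers is exactly why the paper pairs the least moduli method with a fast nonsmooth minimizer instead of a smooth least-squares solver.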


2021, Vol. 0 (0), pp. 0
Author(s): Fatemeh Bazikar, Saeed Ketabchi, Hossein Moosaei

In this paper, we propose a method for solving the twin bounded support vector machine (TBSVM) for binary classification. To do so, we use the augmented Lagrangian (AL) optimization method and a smoothing technique to obtain new unconstrained smooth minimization problems for TBSVM classifiers. First, the augmented Lagrangian method is used to convert TBSVM into unconstrained minimization programming problems, called AL-TBSVM. We solve the primal programming problems of AL-TBSVM by converting them into smooth unconstrained minimization problems. The smooth reformulations of AL-TBSVM, which we call AL-STBSVM, are then solved by the well-known Newton's algorithm. Finally, experimental results on artificial data and several University of California Irvine (UCI) benchmark data sets are provided, along with statistical analysis, to show the superior performance of our method in terms of classification accuracy and learning speed.
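A common way to make plus-function terms like max(x, 0) twice differentiable, so that Newton's method applies, is an entropy-type smoothing. The paper's exact smoothing function is not given in the abstract, so the formula below is a representative example of the technique, not necessarily the authors' choice:

```python
import numpy as np

def smooth_plus(x, alpha=10.0):
    """Entropy-type smoothing of the plus function max(x, 0):
    p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha * x)),
    computed in the numerically stable form logaddexp(0, alpha*x)/alpha.
    It is smooth, convex, and overestimates max(x, 0) by at most
    log(2)/alpha (the worst case is at x = 0)."""
    return np.logaddexp(0.0, alpha * x) / alpha

# check the uniform approximation bound on a grid
x = np.linspace(-3, 3, 601)
err = np.max(np.abs(smooth_plus(x, alpha=20.0) - np.maximum(x, 0.0)))
```

Increasing alpha tightens the approximation at the cost of larger curvature, which is the usual trade-off when pairing smoothing with Newton's algorithm.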

