New inertial proximal gradient methods for unconstrained convex optimization problems

2020
Vol 2020 (1)
Author(s):  
Peichao Duan ◽  
Yiqun Zhang ◽  
Qinxiong Bu

Abstract: The proximal gradient method is a powerful tool for solving composite convex optimization problems. In this paper, we first propose inexact inertial acceleration methods, based on viscosity approximation and the proximal scaled gradient algorithm, to accelerate convergence. Under reasonable parameter conditions, we prove that our algorithms converge strongly to a solution of the problem, which is also the unique solution of a variational inequality problem. Second, we propose an inexact alternated inertial proximal point algorithm and prove a weak convergence theorem under suitable conditions. Finally, numerical results illustrate the performance of our algorithms and provide a comparison with related algorithms. Our results improve and extend corresponding results recently reported by many authors.
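To make the iteration concrete, below is a minimal sketch of a generic inertial proximal gradient step with a FISTA-style extrapolation, applied to a LASSO instance. The fixed inertia parameter theta, the step size, and the example problem are illustrative assumptions, not the authors' viscosity/scaled-gradient scheme.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def inertial_prox_grad(grad_f, x0, step, lam, theta=0.9, n_iter=500):
    # Generic inertial proximal gradient iteration:
    #   y_k     = x_k + theta * (x_k - x_{k-1})            (inertial extrapolation)
    #   x_{k+1} = prox_{step*g}(y_k - step * grad_f(y_k))  (forward-backward step)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)
        x_prev, x = x, soft_threshold(y - step * grad_f(y), step * lam)
    return x

# Illustrative use on LASSO: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of grad f
x_hat = inertial_prox_grad(lambda x: A.T @ (A @ x - b), np.zeros(100), step, lam=0.1)
```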

2015
Vol 2015
pp. 1-7
Author(s):  
Yaping Hu

We propose an extended multivariate spectral gradient algorithm for nonsmooth convex optimization problems. First, using Moreau-Yosida regularization, we convert the original objective function into a continuously differentiable one; we then use approximate function and gradient values of the Moreau-Yosida regularization in place of the corresponding exact values in the algorithm. Global convergence is proved under suitable assumptions. Numerical experiments are presented to show the effectiveness of the algorithm.
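To illustrate the regularization device used here: for a prox-friendly function such as the l1-norm, the Moreau-Yosida value and gradient are available in closed form, which is what algorithms of this kind approximate for harder functions. The choice f(y) = ||y||_1 is an assumption for illustration, not the paper's test problem.

```python
import numpy as np

def moreau_yosida_l1(x, lam):
    # Moreau-Yosida regularization of f(y) = ||y||_1:
    #   F_lam(x) = min_y ||y||_1 + ||y - x||^2 / (2*lam),
    # attained at p = prox_{lam*f}(x) (soft thresholding). F_lam is
    # continuously differentiable with grad F_lam(x) = (x - p) / lam.
    p = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    value = np.abs(p).sum() + ((p - x) ** 2).sum() / (2.0 * lam)
    grad = (x - p) / lam
    return value, grad
```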


2021
Vol 78 (3)
pp. 705-740
Author(s):  
Caroline Geiersbach ◽  
Teresa Scarinci

Abstract: For finite-dimensional problems, stochastic approximation methods have long been used to solve stochastic optimization problems. Their application to infinite-dimensional problems is less understood, particularly for nonconvex objectives. This paper presents convergence results for the stochastic proximal gradient method in Hilbert spaces, motivated by optimization problems with partial differential equation (PDE) constraints with random inputs and coefficients. We study stochastic algorithms for nonconvex and nonsmooth problems, where the nonsmooth part is convex and the nonconvex part is an expectation, which is assumed to have a Lipschitz continuous gradient. The optimization variable is an element of a Hilbert space. We show almost sure convergence of strong limit points of the random sequence generated by the algorithm to stationary points. We demonstrate the stochastic proximal gradient algorithm on a tracking-type functional with an $L^1$-penalty term, constrained by a semilinear PDE and box constraints, where input terms and coefficients are subject to uncertainty. We verify conditions for ensuring convergence of the algorithm and present a simulation.
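As a finite-dimensional caricature of the method studied here, the sketch below runs a stochastic proximal gradient iteration with diminishing steps, replacing the expectation's gradient with a single-sample estimate. The step rule c/(k+1), the Gaussian sampling model, and the l1 prox are illustrative assumptions; the paper's setting is a Hilbert space with PDE constraints.

```python
import numpy as np

def stochastic_prox_grad(grad_sample, prox_g, x0, n_iter=2000, c=1.0, seed=0):
    # x_{k+1} = prox_{t_k g}(x_k - t_k * G(x_k, xi_k)), steps t_k = c / (k+1),
    # where G(x, xi) is an unbiased estimate of the expectation's gradient.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(n_iter):
        t = c / (k + 1)
        xi = rng.standard_normal(x.shape)     # random inputs/coefficients
        x = prox_g(x - t * grad_sample(x, xi), t)
    return x

# Illustrative smooth part E[0.5*||x - xi||^2] (gradient sample: x - xi)
# with an l1 nonsmooth part of weight 0.1:
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0.0)
x_hat = stochastic_prox_grad(lambda x, xi: x - xi, prox_l1, np.ones(5))
```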


2020
Vol 8 (2)
pp. 403-413
Author(s):  
Yaping Hu ◽  
Liying Liu ◽  
Yujie Wang

This paper presents a Wei-Yao-Liu conjugate gradient algorithm for nonsmooth convex optimization problems. The proposed algorithm uses approximate function and gradient values of the Moreau-Yosida regularization function instead of the corresponding exact values. Under suitable conditions, global convergence is established for the proposed conjugate gradient method. Finally, numerical results are reported to show the efficiency of our algorithm.
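For reference, a sketch of the Wei-Yao-Liu update that distinguishes this family from the classical Fletcher-Reeves and Polak-Ribiere choices is given below; in the nonsmooth setting above, the gradients would be the approximate Moreau-Yosida gradients rather than gradients of the original function.

```python
import numpy as np

def wyl_direction(g_new, g_old, d_old):
    # Wei-Yao-Liu conjugate gradient parameter:
    #   beta = (||g_k||^2 - (||g_k||/||g_{k-1}||) * g_k^T g_{k-1}) / ||g_{k-1}||^2
    # Here g_new, g_old stand in for the (approximate) gradients of the
    # Moreau-Yosida regularization at the current and previous iterates.
    r = np.linalg.norm(g_new) / np.linalg.norm(g_old)
    beta = (g_new @ g_new - r * (g_new @ g_old)) / (g_old @ g_old)
    return -g_new + beta * d_old   # new search direction
```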


Author(s):  
Myriam Verschuure ◽  
Bram Demeulenaere ◽  
Jan Swevers ◽  
Joris De Schutter

This paper focuses on reducing, through counterweight addition, the vibration of an elastically mounted, rigid machine frame that supports a linkage. To determine the counterweights that yield a maximal reduction in frame vibration, a nonlinear optimization problem is formulated with the frame kinetic energy as the objective function, structured so that a convex optimization problem is obtained. Convex optimization problems are nonlinear optimization problems in which every local optimum is a global optimum and which can be solved with great efficiency. The proposed methodology is successfully applied to improve the results of the benchmark four-bar problem first considered by Kochev and Gurdev. For this example, the balancing is shown to be very robust to drive-speed variations and to benefit only marginally from a coupler counterweight.


MATEMATIKA
2018
Vol 34 (2)
pp. 381-392
Author(s):  
Lee Chang Kerk ◽  
Rohanin Ahmad

Optimization is central to any problem involving decision making. The area of optimization has received enormous attention for over 30 years and remains an active research field to this day. In this paper, a global optimization method called Kerk and Rohanin's Trusted Interval is introduced. The method is able to identify all local solutions by converting non-convex optimization problems into piecewise convex optimization problems. A mechanism is applied that considers only the convex parts of a function, where minimizers exist; it allows the method to filter out concave parts and some unrelated parts automatically. The identified convex parts are called trusted intervals. The descent property and global convergence of the method are shown in this paper. Fifteen test problems are used to demonstrate the proposed algorithm's ability to locate global minimizers.
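As a toy illustration of the trusted-interval idea (our own sketch; the actual Kerk-Rohanin mechanism may differ), one can partition an interval by the sign of a finite-difference estimate of the second derivative, keeping only the locally convex pieces:

```python
import numpy as np

def convex_subintervals(f, a, b, n=1000):
    # Partition [a, b] into subintervals where a central finite-difference
    # estimate of f'' is nonnegative, i.e. where f is locally convex.
    # These pieces play the role of the "trusted intervals" described above.
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    fx = f(x)
    curv = (fx[2:] - 2 * fx[1:-1] + fx[:-2]) / h**2   # f'' estimate at x[1:-1]
    intervals, start = [], None
    for i, c in enumerate(curv):
        if c >= 0 and start is None:
            start = x[i + 1]
        elif c < 0 and start is not None:
            intervals.append((start, x[i]))
            start = None
    if start is not None:
        intervals.append((start, x[-2]))
    return intervals

# Example: sin is convex exactly where sin(x) <= 0, e.g. on [pi, 2*pi].
print(convex_subintervals(np.sin, 0.0, 4 * np.pi))
```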


2009
Vol 19 (2)
pp. 239-248
Author(s):  
Goran Lesaja ◽  
Verlynda Slaughter

In this paper we consider interior-point methods (IPMs) for the nonlinear, convex optimization problem in which the objective function is a weighted sum of reciprocals of variables subject to linear constraints (SOR). This problem appears often in various applications, such as statistical stratified sampling and entropy problems, to mention just a few examples. The SOR is solved using two IPMs. First, a homogeneous IPM is used to solve the Karush-Kuhn-Tucker conditions of the problem, which is a standard approach. Second, a homogeneous conic quadratic IPM is used to solve the SOR reformulated as a conic quadratic problem. As far as we are aware, this is a novel approach not yet considered in the literature. The two approaches are then numerically tested on a set of randomly generated problems using the optimization software MOSEK. They are compared by CPU time and number of iterations, showing that the second approach works better for problems of higher dimension. The main reason is that, although the reformulation increases the number of variables, the IPM exploits the structure of the conic quadratic reformulation much better than the structure of the original problem.
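The conic quadratic reformulation mentioned above can be sketched with a standard rotated-cone device (the authors' exact formulation may differ). For weights w_i > 0, introduce auxiliary variables t_i >= 1/x_i:

```latex
\min_{x,\,t}\ \sum_{i=1}^{n} w_i t_i
\quad \text{s.t.} \quad Ax = b, \qquad
\left\lVert \begin{pmatrix} 2 \\ x_i - t_i \end{pmatrix} \right\rVert_2 \le x_i + t_i,
\quad i = 1, \dots, n.
```

Each second-order cone constraint is equivalent to x_i t_i >= 1 with x_i, t_i >= 0, so at an optimum t_i = 1/x_i and the objective equals the original weighted sum of reciprocals; the auxiliary variables t_i account for the increase in problem size noted above.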

