Adaptive thresholding technique for solving optimization problems on attainable sets of (max, min)-linear systems

Kybernetika ◽  
2018 ◽  
pp. 400-412
Author(s):  
Mahmoud Gad


2012 ◽  
Vol 24 (4) ◽  
pp. 1047-1084 ◽  
Author(s):  
Xiao-Tong Yuan ◽  
Shuicheng Yan

We investigate Newton-type optimization methods for solving piecewise linear systems (PLSs) with a nondegenerate coefficient matrix. Such systems arise, for example, from the numerical solution of linear complementarity problems, which are useful for modeling several learning and optimization problems. In this letter, we propose an effective damped Newton method, PLS-DN, to find the exact (up to machine precision) solution of nondegenerate PLSs. PLS-DN exhibits a provable semi-iterative property, that is, the algorithm converges globally to the exact solution in a finite number of iterations. The rate of convergence is shown to be at least linear before termination. We emphasize the applications of our method in modeling, from a novel perspective of PLSs, some statistical learning problems such as box-constrained least squares, elitist Lasso (Kowalski & Torresani, 2008), and support vector machines (Cortes & Vapnik, 1995). Numerical results on synthetic and benchmark data sets are presented to demonstrate the effectiveness and efficiency of PLS-DN on these problems.
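The abstract does not include pseudocode, so the following is only a rough sketch of the underlying idea: a damped (semismooth) Newton iteration on the piecewise linear residual F(x) = min(x, Mx + q) of a linear complementarity problem. The function name, damping rule, and test problem here are illustrative assumptions, not the authors' PLS-DN implementation.

```python
import numpy as np

def pls_newton(M, q, x0=None, tol=1e-12, max_iter=100):
    """Damped semismooth Newton for the piecewise linear system
    F(x) = min(x, M x + q) = 0, the residual form of the LCP
    0 <= x  perp  M x + q >= 0.  (Illustrative sketch only.)"""
    n = len(q)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        w = M @ x + q
        F = np.minimum(x, w)
        if np.linalg.norm(F, np.inf) < tol:
            break
        # Generalized Jacobian: row i is e_i where x_i <= w_i, else row i of M.
        J = np.where((x <= w)[:, None], np.eye(n), M)
        d = np.linalg.solve(J, -F)
        # Damping: halve the step until the residual norm decreases.
        t = 1.0
        while t > 1e-10 and np.linalg.norm(
                np.minimum(x + t * d, M @ (x + t * d) + q)) >= np.linalg.norm(F):
            t *= 0.5
        x = x + t * d
    return x

# Example LCP: M = [[2, 1], [1, 2]], q = [-1, -1]; the solution is x = [1/3, 1/3].
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x = pls_newton(M, q)
```

On this small example the iteration terminates after a single Newton step, which is consistent with the finite-termination (semi-iterative) behavior the abstract describes for nondegenerate systems.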


Computing ◽  
2005 ◽  
Vol 75 (1) ◽  
pp. 99-107 ◽  
Author(s):  
A. I. Ovseevich

2007 ◽  
Vol 12 (3) ◽  
pp. 293-306
Author(s):  
E. Akyar

Quasi-linear systems governed by p-integrable controls, for 1 < p < ∞, with the constraint ‖u(·)‖_p ≤ µ_0, are considered. The dependence of attainable sets on initial conditions is studied.


2020 ◽  
Vol 87 (5) ◽  
Author(s):  
Xiaojia Shelly Zhang ◽  
Eric de Sturler ◽  
Alexander Shapiro

Abstract
Practical engineering designs typically involve many load cases. For topology optimization with many deterministic load cases, a large number of linear systems of equations must be solved at each optimization step, leading to an enormous computational cost. To address this challenge, we propose a mirror descent stochastic approximation (MD-SA) framework with various step size strategies to solve topology optimization problems with many load cases. We reformulate the deterministic objective function and gradient into stochastic ones through randomization, derive the MD-SA update, and develop algorithmic strategies. The proposed MD-SA algorithm requires only low accuracy in the stochastic gradient and thus uses only a single sample per optimization step (i.e., the sample size is always one). As a result, we reduce the number of linear systems to solve per step from hundreds to one, which drastically reduces the total computational cost while maintaining a similar design quality. For example, for one of the design problems, the total number of linear systems to solve and the wall-clock time are reduced by factors of 223 and 22, respectively.
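The single-sample idea in the abstract can be sketched with a toy problem: replace an average over many "load cases" by an unbiased one-sample gradient estimate, and take mirror descent steps (here with the entropy mirror map on the probability simplex, i.e., an exponentiated-gradient update). Everything below — the function names, the step-size schedule, and the quadratic test objective — is an illustrative assumption, not the paper's topology-optimization implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def md_sa(grad_sample, x0, steps=2000, eta0=0.5):
    """Mirror descent stochastic approximation on the probability simplex.
    The entropy mirror map gives a multiplicative (exponentiated-gradient)
    update; grad_sample(x) must return an unbiased gradient estimate."""
    x = x0.copy()
    avg = x0.copy()
    for k in range(1, steps + 1):
        g = grad_sample(x)
        eta = eta0 / np.sqrt(k)      # diminishing step size
        x = x * np.exp(-eta * g)     # entropy-map mirror step
        x /= x.sum()                 # renormalize onto the simplex
        avg += (x - avg) / k         # running average of the iterates
    return avg

# Toy many-case objective f(x) = (1/m) sum_i ||A_i x - b_i||^2;
# each step samples ONE case i, mimicking one linear solve per step.
m, n = 200, 5
A = rng.standard_normal((m, n, n))
b = rng.standard_normal((m, n))

def single_case_grad(x):
    i = rng.integers(m)              # sample size is always one
    r = A[i] @ x - b[i]
    return 2.0 * A[i].T @ r

x = md_sa(single_case_grad, np.full(n, 1.0 / n))
```

The key design point mirrored here is that each iteration touches a single sampled case rather than all m of them, trading per-step gradient accuracy for a drastically lower per-step cost; averaging the iterates recovers stability despite the noisy gradients.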

