On Converse Duality for Nonsmooth Optimization Problem

1993 ◽  
Vol 14 (2) ◽  
pp. 149-153
Author(s):  
Gue Myung Lee ◽  
Do Sang Kim


Complexity ◽
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Jia-Tong Li ◽  
Jie Shen ◽  
Na Xu

For the CVaR (conditional value-at-risk) portfolio nonsmooth optimization problem, we propose an infeasible incremental bundle method based on the improvement function and the main idea of the incremental method for solving convex finite min-max problems. The presented algorithm employs only the objective function and one component of the constraint functions to form the approximate model of the improvement function. By introducing an aggregation technique, we retain the information of previous iterate points that may be deleted from the bundle, which overcomes the difficulties of numerical computation and storage. Our algorithm enforces neither the feasibility of the iterate points nor the monotonicity of the objective function, and its global convergence is established under mild conditions. Compared with available results, our method loosens the requirement of evaluating the whole constraint function, which makes the algorithm easier to implement.
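A minimal sketch of the core modeling step, assuming hypothetical callables f, grad_f, g_i, and grad_g_i (not the authors' code): at the current point x_k, the improvement function H(x; x_k) = max{f(x) - f(x_k), g(x)} is approximated using the objective and a single constraint component, which is the incremental idea described above.

```python
import numpy as np

def improvement_cut(x_k, f, grad_f, g_i, grad_g_i):
    """Return (value, subgradient) of the piece of the improvement function
    H(x; x_k) = max{ f(x) - f(x_k), g(x) } that is active at x = x_k,
    evaluating only ONE constraint component g_i (the incremental idea)."""
    g_val = g_i(x_k)
    # At x = x_k the objective piece equals zero, so the constraint piece
    # is active whenever g_i(x_k) >= 0; infeasible iterates are allowed.
    if g_val >= 0.0:
        return g_val, np.asarray(grad_g_i(x_k))
    return 0.0, np.asarray(grad_f(x_k))
```

Each such pair defines one cutting plane of the bundle model; the aggregation mentioned above would keep a convex combination of deleted cuts so that storage stays bounded.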


2018 ◽  
Vol 2018 ◽  
pp. 1-9
Author(s):  
Miao Chen ◽  
Shou-qiang Du

We study a method for solving a class of nonsmooth optimization problems with the l1-norm, which arise widely in compressed sensing, image processing, and related engineering optimization problems. Transformed by means of absolute value equations, this kind of nonsmooth optimization problem is rewritten as a general unconstrained optimization problem, and the transformed problem is solved by a smoothing FR (Fletcher-Reeves) conjugate gradient method. Finally, numerical experiments show the effectiveness of the given smoothing FR conjugate gradient method.
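The paper's reformulation goes through absolute value equations; as a rough illustration of the two ingredients only (a smoothing of |·| and a Fletcher-Reeves conjugate gradient loop), the sketch below applies them directly to the l1-regularized least-squares model. All names and parameters are illustrative.

```python
import numpy as np

def smooth_abs(x, mu):
    """Smooth surrogate of |x|: sqrt(x^2 + mu^2) -> |x| as mu -> 0."""
    return np.sqrt(x * x + mu * mu)

def fr_cg(A, b, lam, mu=1e-3, tol=1e-6, max_iter=500):
    """Fletcher-Reeves CG on F(x) = 0.5||Ax - b||^2 + lam * sum_j smooth_abs(x_j)."""
    obj  = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(smooth_abs(x, mu))
    grad = lambda x: A.T @ (A @ x - b) + lam * x / smooth_abs(x, mu)
    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        slope = g @ d
        if slope >= 0.0:                 # safeguard: restart with steepest descent
            d, slope = -g, -(g @ g)
        t, f0 = 1.0, obj(x)              # Armijo backtracking line search
        while obj(x + t * d) > f0 + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        x, g, d = x_new, g_new, -g_new + beta * d
    return x
```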


Symmetry ◽  
2019 ◽  
Vol 11 (11) ◽  
pp. 1348 ◽  
Author(s):  
Ramu Dubey ◽  
Lakshmi Narayan Mishra ◽  
Luis Manuel Sánchez Ruiz

In this article, a pair of nondifferentiable second-order symmetric fractional primal-dual models (G-Mond–Weir type) for vector optimization problems is formulated over arbitrary cones. In addition, we construct a nontrivial numerical example, which helps to illustrate the existence of such functions. Finally, we prove weak, strong, and converse duality theorems under the aforesaid assumptions.
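Schematically, and suppressing the G-function, the arbitrary cones, and the second-order terms p and q of the actual model, the weak duality theorem for such a fractional symmetric pair asserts that every primal-feasible value dominates every dual-feasible value:

```latex
% Schematic weak duality for a fractional symmetric primal-dual pair;
% the paper's cone constraints and second-order terms are suppressed.
\[
  \frac{f(x, y)}{g(x, y)} \;\ge\; \frac{f(u, v)}{g(u, v)}
  \qquad \text{for all primal-feasible } (x, y, p)
  \text{ and dual-feasible } (u, v, q).
\]
```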


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Longquan Yong

The method of least absolute deviation (LAD) finds applications in many areas due to its robustness compared to the least squares regression (LSR) method: LAD is resistant to outliers in the data, which may be helpful in studies where outliers can be ignored. Since LAD is a nonsmooth optimization problem, this paper proposes a metaheuristic algorithm, the novel global harmony search (NGHS), for solving it. Numerical results show that the NGHS method has good convergence properties and is effective in solving LAD.
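A simplified, self-contained sketch of the approach (the exact NGHS update and parameter settings are in the paper; the version below uses the NGHS-style position update, a new harmony drawn between the current worst member and its mirror through the best, with illustrative names and a conservative replace-if-better rule):

```python
import numpy as np

def lad_objective(beta, X, y):
    """Least absolute deviation loss: sum_i |y_i - x_i^T beta|."""
    return np.sum(np.abs(y - X @ beta))

def nghs_lad(X, y, low=-10.0, up=10.0, hms=20, p_m=0.05, iters=5000, seed=0):
    """Harmony-search-style minimization of the (nonsmooth) LAD loss."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    hm = rng.uniform(low, up, size=(hms, dim))             # harmony memory
    fit = np.array([lad_objective(b, X, y) for b in hm])
    for _ in range(iters):
        best = hm[np.argmin(fit)]
        w = np.argmax(fit)                                 # index of worst harmony
        mirror = np.clip(2.0 * best - hm[w], low, up)      # worst mirrored through best
        new = hm[w] + rng.random(dim) * (mirror - hm[w])   # position update
        mutate = rng.random(dim) < p_m                     # occasional random mutation
        new[mutate] = rng.uniform(low, up, size=mutate.sum())
        f_new = lad_objective(new, X, y)
        if f_new < fit[w]:                                 # keep only if it improves
            hm[w], fit[w] = new, f_new
    return hm[np.argmin(fit)]
```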


2022 ◽  
Vol 40 ◽  
pp. 1-16
Author(s):  
Fakhrodin Hashemi ◽  
Saeed Ketabchi

Optimal correction of an infeasible system of equations of the form Ax + B|x| = b leads to a non-convex fractional problem. In this paper, a regularization method (ℓp-norm, 0 < p < 1) is presented to solve the mentioned fractional problem. Under this method, the problem can be formulated as a non-convex, nonsmooth optimization problem whose objective is not Lipschitz but can be decomposed as a difference of convex functions (DC). For this reason, we use a special smoothing technique based on DC programming. The numerical results obtained for generated problems show the high performance and effectiveness of the proposed method.
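The concrete decomposition is problem-specific; the sketch below only illustrates the generic DCA iteration that such DC-programming smoothing rests on (hypothetical callables g and grad_h, with BFGS standing in for a convex subproblem solver):

```python
import numpy as np
from scipy.optimize import minimize

def dca(g, grad_h, x0, iters=50, tol=1e-8):
    """Generic DC algorithm for F(x) = g(x) - h(x) with g, h convex:
    linearize the concave part -h at x_k and minimize the convex surrogate
    g(x) - <grad_h(x_k), x> to obtain x_{k+1}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        s = grad_h(x)                           # (sub)gradient of h at x_k
        surrogate = lambda z, s=s: g(z) - s @ z
        x_new = minimize(surrogate, x, method="BFGS").x
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```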


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Jie Shen ◽  
Li-Ping Pang ◽  
Dan Li

An implementable algorithm for solving a nonsmooth convex optimization problem is proposed by combining Moreau-Yosida regularization with bundle and quasi-Newton ideas. In contrast with the quasi-Newton bundle methods of Mifflin et al. (1998), we assume only that the values of the objective function and its subgradients are evaluated approximately, which makes the method easier to implement. Under reasonable assumptions, the proposed method is shown to have a Q-superlinear rate of convergence.
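For intuition about the first ingredient (a textbook example, not the paper's algorithm): for f(x) = |x| the Moreau-Yosida regularization is available in closed form, since the proximal point is soft-thresholding and the envelope is the Huber function, so both the smoothed value and its Lipschitz gradient can be computed exactly.

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal point of f = |.|: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_envelope_abs(x, lam):
    """Moreau-Yosida envelope F_lam(x) = min_z { |z| + (z - x)^2 / (2 lam) };
    for f = |.| this equals the Huber function."""
    p = prox_abs(x, lam)
    return np.abs(p) + (p - x) ** 2 / (2.0 * lam)

def grad_envelope_abs(x, lam):
    """Gradient of the envelope, (x - prox(x)) / lam; Lipschitz with modulus 1/lam."""
    return (x - prox_abs(x, lam)) / lam
```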

