difference of convex
Recently Published Documents

TOTAL DOCUMENTS: 118 (FIVE YEARS 42)
H-INDEX: 17 (FIVE YEARS 3)

2022 ◽ Vol 40 ◽ pp. 1-16
Author(s): Fakhrodin Hashemi ◽ Saeed Ketabchi

Optimal correction of an infeasible system of equations of the form Ax + B|x| = b leads to a non-convex fractional problem. In this paper, a regularization method (ℓp-norm, 0 < p < 1) is presented to solve this fractional problem. With this regularization, the problem can be formulated as a non-convex, nonsmooth optimization problem whose objective is not Lipschitz continuous. The objective function can, however, be decomposed as a difference of convex functions (DC), and we exploit this structure through a special smoothing technique based on DC programming. The numerical results obtained on generated problems show the high performance and effectiveness of the proposed method.
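The DC decomposition exploited here follows the usual DCA pattern: linearize the concave part at the current iterate and minimize the remaining convex model. A minimal sketch of that generic loop is below; the toy functions g(x) = ||x||^2 and h(x) = ||x||_1 are illustrative assumptions, not the ℓp-regularized fractional model of the paper.

```python
# Generic DCA sketch for min f(x) = g(x) - h(x) with g, h convex.
# The toy choice g(x) = ||x||^2, h(x) = ||x||_1 is only an illustration.
import numpy as np

def dca(x0, subgrad_h, solve_linearized, iters=50):
    """At step k: pick y_k in the subdifferential of h at x_k, then minimize
    the convex model g(x) - <y_k, x>."""
    x = x0.copy()
    for _ in range(iters):
        y = subgrad_h(x)
        x = solve_linearized(y)
    return x

subgrad_h = np.sign                      # a subgradient of ||x||_1
solve_linearized = lambda y: 0.5 * y     # argmin ||x||^2 - <y, x> = y / 2

print(dca(np.array([1.0, -2.0, 0.5]), subgrad_h, solve_linearized))
```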


2021 ◽ Vol 2021 ◽ pp. 1-19
Author(s): Zhijun Luo ◽ Zhibin Zhu ◽ Benxin Zhang

This paper proposes a nonconvex model (called LogTVSCAD) for deblurring images corrupted by impulsive noise, using the log-function penalty as the regularizer and the smoothly clipped absolute deviation (SCAD) function as the data-fitting term. The proposed nonconvex model can effectively overcome the poor performance of the classical TVL1 model under high-level impulsive noise. A difference of convex functions algorithm (DCA) is proposed to solve the nonconvex model, and the convex subproblem at each DCA iteration is solved with the alternating direction method of multipliers (ADMM). Global convergence is discussed based on the Kurdyka–Łojasiewicz property. Experimental results show the advantages of the proposed nonconvex model over existing models.
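For reference, the SCAD function adopted as the data-fitting term is the standard smoothly clipped absolute deviation penalty; a short sketch follows, with the conventional defaults a = 3.7 and lam = 1.0 used only as illustrative parameter values, not the settings of the paper.

```python
# Standard SCAD penalty (piecewise: linear, then quadratic, then constant).
# Parameter values below are conventional defaults, not those of the paper.
import numpy as np

def scad(t, lam=1.0, a=3.7):
    t = np.abs(t)
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    return np.where(small, lam * t,
           np.where(mid, (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))

print(scad(np.array([0.5, 2.0, 10.0])))   # grows linearly, then flattens out
```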


Author(s): Sorin-Mihai Grad ◽ Felipe Lara

We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions or is fully contained in it. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
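The proximal point algorithm referred to here simply iterates the proximity operator of the objective. A minimal numerical sketch is given below, assuming a smooth one-dimensional test function; the paper's contribution is that the same iteration still converges when the function is only prox-convex.

```python
# Proximal point algorithm sketch: x_{k+1} = prox_{lam*f}(x_k), where
# prox_{lam*f}(x) = argmin_u f(u) + ||u - x||^2 / (2*lam).
# The quadratic test function is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize_scalar

def prox(f, x, lam=1.0):
    """Numerical 1-D proximity operator of f at x."""
    return minimize_scalar(lambda u: f(u) + (u - x) ** 2 / (2 * lam)).x

def proximal_point(f, x0, lam=1.0, iters=30):
    x = x0
    for _ in range(iters):
        x = prox(f, x, lam)
    return x

f = lambda u: (u - 3.0) ** 2          # toy objective with minimizer at u = 3
print(proximal_point(f, x0=-5.0))      # approaches 3.0
```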


2021 ◽ Vol 2021 ◽ pp. 1-9
Author(s): Feichao Shen ◽ Ying Zhang ◽ Xueyong Wang

In this paper, we propose an accelerated proximal point algorithm for the difference of convex (DC) optimization problem by combining an extrapolation technique with the proximal difference of convex algorithm. By making full use of the special structure of the DC decomposition and of the stepsize information, we prove that the proposed algorithm converges at a rate of O(1/k²) under milder conditions. Numerical experiments show the superiority of the proposed algorithm over some existing algorithms.
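The combination of extrapolation with a proximal DC step can be illustrated on the common ℓ1 − ℓ2 regularized least-squares test problem. The sketch below uses a FISTA-style momentum rule and is only meant to convey the flavour of accelerated proximal DC methods; the problem data, momentum rule, and stepsize are illustrative assumptions, not the exact scheme of the paper.

```python
# Proximal DC step with FISTA-style extrapolation on the DC test problem
#   min 0.5*||Ax - b||^2 + mu*||x||_1 - mu*||x||_2.
# Problem data, momentum rule, and stepsize are illustrative assumptions.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_dc_extrapolated(A, b, mu, iters=300):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = x_prev = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x + ((t - 1) / t_next) * (x - x_prev)          # extrapolated point
        nrm = np.linalg.norm(x)
        xi = mu * x / nrm if nrm > 0 else np.zeros_like(x) # subgradient of mu*||x||_2
        grad = A.T @ (A @ y - b) - xi
        x_prev, x = x, soft_threshold(y - grad / L, mu / L)
        t = t_next
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
print(np.count_nonzero(prox_dc_extrapolated(A, b, mu=0.5)))
```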


2021 ◽ Vol 0 (0)
Author(s): Jie Shen ◽ Na Xu ◽ Fang-Fang Guo ◽ Han-Yang Li ◽ Pan Hu

For nonlinear nonsmooth DC programming (difference of convex functions), we introduce a new redistributed proximal bundle method. Subgradient information for both DC components is gathered from a neighbourhood of the current stability center and is used to build a separate approximation for each component of the DC representation. In particular, we employ a nonlinear redistribution technique to model the second DC component by constructing a local convexification cutting plane. The corresponding convexification parameter is adjusted dynamically and is taken sufficiently large to make the "augmented" linearization errors nonnegative. Based on these techniques, we obtain a new convex cutting-plane model of the original objective function, from which the redistributed proximal bundle method is designed, and convergence of the proposed algorithm to a Clarke stationary point is proved. A simple numerical experiment is given to show the validity of the presented algorithm.
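The key device, making the "augmented" linearization errors nonnegative by enlarging a convexification parameter, can be sketched as follows. The formula below follows the standard redistributed bundle construction and is only an illustration of the idea, not necessarily the exact update rule of the paper.

```python
# Choose a convexification parameter eta so that the augmented linearization
# errors e_j = f(xc) - f(z_j) - <g_j, xc - z_j> + (eta/2)*||z_j - xc||^2 are
# nonnegative for every bundle element (standard redistributed construction;
# illustrative only).
import numpy as np

def convexification_parameter(f_center, x_center, bundle, margin=1e-8):
    """bundle: list of (z_j, f(z_j), g_j) triples for the nonconvex component."""
    eta = 0.0
    for z, fz, g in bundle:
        lin_err = f_center - fz - g @ (x_center - z)   # plain linearization error
        dist2 = np.sum((z - x_center) ** 2)
        if lin_err < 0 and dist2 > 0:
            eta = max(eta, 2.0 * (-lin_err) / dist2 + margin)
    return eta
```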


2021
Author(s): Annabella Astorino ◽ Massimo Di Francesco ◽ Manlio Gaudioso ◽ Enrico Gorgone ◽ Benedetto Manca

We consider polyhedral separation of sets as a possible tool in supervised classification. In particular, we focus on the optimization model introduced by Astorino and Gaudioso (J Optim Theory Appl 112(2):265–293, 2002) and adopt its reformulation in difference of convex (DC) form. We tackle the problem by adapting the DC programming algorithm known as DCA. We present results of our DCA implementation on a number of benchmark classification datasets.
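For intuition, polyhedral separation encloses one class in an intersection of halfspaces and assigns a point to that class when it satisfies all of them; a small sketch of this decision rule follows. The rule and variable names are a common formulation of the idea, not the exact optimization model of the cited paper.

```python
# Decision rule of a polyhedral classifier: a point belongs to the "inner"
# class if it lies in the polyhedron {x : W x <= b} (illustrative formulation).
import numpy as np

def polyhedral_classify(X, W, b):
    """X: (n, d) points; W: (h, d) halfspace normals; b: (h,) offsets.
    Returns +1 for points inside the polyhedron, -1 otherwise."""
    inside = np.all(X @ W.T <= b, axis=1)
    return np.where(inside, 1, -1)
```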


2021 ◽ pp. 1-36
Author(s): Ananda Theertha Suresh ◽ Brian Roark ◽ Michael Riley ◽ Vlad Schogol

Weighted finite automata (WFA) are often used to represent probabilistic models, such as n-gram language models, since, among other things, they are efficient in time and space for recognition tasks. The probabilistic source to be represented as a WFA, however, may come in many forms. Given a generic probabilistic model over sequences, we propose an algorithm to approximate it as a weighted finite automaton such that the Kullback-Leibler divergence between the source model and the WFA target model is minimized. The proposed algorithm involves a counting step and a difference of convex optimization step, both of which can be performed efficiently. We demonstrate the usefulness of our approach on various tasks, including distilling n-gram models from neural models, building compact language models, and building open-vocabulary character models. The algorithms used for these experiments are available in an open-source software library.
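The quantity being minimized is the Kullback-Leibler divergence between the source distribution and the distribution realized by the target WFA; a small sketch of that objective on toy count data is given below. The toy counts are assumptions for illustration only; the paper's counting step works with expected n-gram counts under the source model.

```python
# Kullback-Leibler divergence D(p || q) = sum_x p(x) * log(p(x) / q(x))
# between a source distribution p and a candidate target distribution q.
# The toy counts are illustrative only.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

source_counts = [30, 50, 20]   # toy counts under the source model
wfa_counts    = [25, 55, 20]   # toy counts under a candidate WFA model
print(kl_divergence(source_counts, wfa_counts))
```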

