nonconvex constraint
Recently Published Documents

TOTAL DOCUMENTS: 10 (FIVE YEARS: 4)
H-INDEX: 2 (FIVE YEARS: 0)

2021 · Vol 2021 · pp. 1-10
Author(s): Chao Liu, Chuan Li, Bo Yang

In this paper, an iterative convex optimization (ICO) algorithm is proposed to solve the pattern synthesis problem for dual-polarized conformal arrays. The subproblems of shaping the main lobe, optimizing the side lobes, and suppressing the cross-polarization component are combined into a joint optimization problem. To solve it, the nonconvex main-lobe constraint is rewritten as a convex constraint, which introduces an error, and an auxiliary phase function is introduced to correct this error alternately. Because the auxiliary phase deviates from the true phase of the pattern function, a method that minimizes the peak synthesis error over the observation angles is applied to further improve performance. Numerical examples demonstrate the good pattern synthesis ability and convergence behavior of the ICO method.
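The alternating structure just described can be sketched compactly. Below is a minimal, illustrative rendering, not the authors' code: the steering matrix `A`, desired main-lobe magnitudes `d`, angle index sets `ml` and `sl`, and the side-lobe bound `eps_sl` are all assumed names, and cvxpy stands in for whatever convex solver the paper actually uses.

```python
import numpy as np
import cvxpy as cp

def ico_synthesis(A, d, ml, sl, eps_sl=0.05, n_iter=20):
    """Alternate a convex synthesis subproblem with an auxiliary phase update.

    A: complex steering matrix (angles x elements); d: desired main-lobe
    magnitudes; ml/sl: main-/side-lobe angle indices. All names are assumed.
    """
    n = A.shape[1]
    phase = np.zeros(len(ml))  # auxiliary phase, corrected each iteration
    w_val = None
    for _ in range(n_iter):
        w = cp.Variable(n, complex=True)
        # Convexified main-lobe constraint: match the complex target
        # d * exp(j*phase) instead of the nonconvex |A[ml] @ w| = d.
        err = A[ml] @ w - d * np.exp(1j * phase)
        objective = cp.Minimize(cp.max(cp.abs(err)))  # peak synthesis error
        constraints = [cp.abs(A[sl] @ w) <= eps_sl]   # convex side-lobe bound
        cp.Problem(objective, constraints).solve()
        w_val = w.value
        # Correct the auxiliary phase using the realized pattern's phase.
        phase = np.angle(A[ml] @ w_val)
    return w_val
```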


Author(s): N. Ghafari, H. Mohebi

In this paper, we study the optimization problem (P) of minimizing a convex function over a constraint set with nonconvex constraint functions. We do this by giving new characterizations of Robinson's constraint qualification, which, for nonconvex programming problems with nearly convex feasible sets, reduces at a reference point to the combination of a generalized Slater condition and a generalized sharpened nondegeneracy condition. Next, using a version of the strong CHIP, we present a constraint qualification that is necessary for optimality of problem (P). Finally, using the new characterizations of Robinson's constraint qualification, we give necessary and sufficient conditions for optimality of problem (P).
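For orientation, Robinson's constraint qualification in its standard smooth form, included here as context (a textbook statement, not quoted from the paper; the paper works with generalized, nonconvex versions): for a constraint $G(x) \in K$ with $G$ differentiable and $K$ closed and convex, the condition at a feasible point $\bar{x}$ reads

```latex
% Robinson's constraint qualification at a feasible point \bar{x}
% (standard smooth form; textbook statement, not quoted from the paper):
0 \in \operatorname{int}\bigl( G(\bar{x}) + DG(\bar{x})\,\mathbb{R}^n - K \bigr)
```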


Filomat · 2020 · Vol 34 (14) · pp. 4669-4684
Author(s): H. Mohebi

In this paper, we consider the constraint set $K := \{x \in \mathbb{R}^n : g_j(x) \le 0,\ j = 1, 2, \ldots, m\}$ defined by nonsmooth nonconvex constraint functions $g_j : \mathbb{R}^n \to \mathbb{R}$ ($j = 1, 2, \ldots, m$). We show that under Abadie's constraint qualification, the "perturbation property" of the best approximation to any $x \in \mathbb{R}^n$ from a convex set $\tilde{K} := C \cap K$ is characterized by the strong conical hull intersection property (strong CHIP) of $C$ and $K$, where $C$ is an arbitrary nonempty closed convex subset of $\mathbb{R}^n$. Using the idea of the tangential subdifferential and a nonsmooth version of Abadie's constraint qualification, we do this by first proving a dual cone characterization of the constraint set $K$. Moreover, we present sufficient conditions under which the strong CHIP holds. In particular, when the set $\tilde{K}$ is closed and convex, we show that the Lagrange multiplier characterization of constrained best approximation holds under a nonsmooth version of Abadie's constraint qualification. The obtained results extend many corresponding results in the context of constrained best approximation. Several examples are provided to clarify the results.
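For orientation, the strong CHIP referred to above has the following standard definition (a textbook statement, not quoted from the paper): the pair $\{C, K\}$ has the strong conical hull intersection property at $x \in C \cap K$ when the normal cone of the intersection splits into the sum of the individual normal cones,

```latex
% Strong CHIP of {C, K} at x \in C \cap K (standard definition):
N_{C \cap K}(x) = N_C(x) + N_K(x)
```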


Author(s): Hongchang Gao, Heng Huang

Sparse learning models have shown promising performance in high-dimensional machine learning applications. Their main challenge is how to optimize them efficiently. Most existing methods sidestep this by relaxing the problem to a convex one, which incurs large estimation bias. The sparse learning model with a nonconvex constraint has therefore attracted much attention due to its better performance, but it is difficult to optimize because of the nonconvexity. In this paper, we propose a linearly convergent stochastic second-order method to optimize this nonconvex problem for large-scale datasets. The proposed method incorporates second-order information to improve the convergence speed. Theoretical analysis shows that our method enjoys a linear convergence rate and is guaranteed to converge to the underlying true model parameter. Experimental results verify the efficiency and correctness of the proposed method.
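The abstract does not spell out the update rule, but a common template for this problem class takes a mini-batch second-order step and then projects onto the nonconvex sparsity constraint $\|w\|_0 \le s$ by hard thresholding. The sketch below illustrates that template for least-squares regression; the function names, the damping term, and the least-squares objective are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def hard_threshold(w, s):
    """Project onto {w : ||w||_0 <= s}: keep the s largest-magnitude entries."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

def stochastic_newton_iht(X, y, s, n_iter=100, batch=64, damping=1e-3, seed=0):
    """Mini-batch Newton step + hard thresholding (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[i], y[i]
        grad = Xb.T @ (Xb @ w - yb) / batch          # least-squares gradient
        H = Xb.T @ Xb / batch + damping * np.eye(d)  # damped mini-batch Hessian
        w = hard_threshold(w - np.linalg.solve(H, grad), s)
    return w
```

The hard-thresholding step is exactly the projection onto the nonconvex sparsity set, which is what makes the constraint usable without convex relaxation.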


2017 · Vol 29 (5) · pp. 1406-1438
Author(s): Shuhei Fujiwara, Akiko Takeda, Takafumi Kanamori

Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while the extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider the extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines the two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not, and it solves the nonconvex problem in the extended range only heuristically. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm handles both types of nonconvexity while never entailing more computation than either Eν-SVM or the robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.
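To make the first kind of nonconvexity concrete: robust SVMs typically replace the convex hinge loss with a truncated ("ramp") loss, so that any single outlier can contribute at most a bounded penalty. A generic illustration follows (the cap value is an assumption, not taken from the letter):

```python
import numpy as np

def hinge(margin):
    """Convex hinge loss: unbounded, so a single outlier can dominate."""
    return np.maximum(0.0, 1.0 - margin)

def ramp(margin, cap=2.0):
    """Truncated hinge ("ramp") loss: nonconvex but bounded by cap."""
    return np.minimum(hinge(margin), cap)
```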


2004 · Vol 43 (2) · pp. 466-476
Author(s): F. S. De Blasi, G. Pianigiani, A. A. Tolstonogov
