Computable bounds on parametric solutions of convex problems

1988 ◽  
Vol 40 (1-3) ◽ 
pp. 213-221 ◽  
Author(s):  
Anthony V. Fiacco ◽  
Jerzy Kyparisis

2019 ◽ 
Vol 19 (2) ◽  
pp. 391-412
Author(s):  
Uriel Kaufmann ◽  
Humberto Ramos Quoirin ◽  
Kenichiro Umezu

Abstract: We establish the existence of loop-type subcontinua of nonnegative solutions for a class of concave-convex type elliptic equations with indefinite weights, under Dirichlet and Neumann boundary conditions. Our approach relies on local and global bifurcation analysis from the zero solution in a nonregular setting: since the nonlinearities considered are not differentiable at zero, the standard bifurcation theory does not apply. To overcome this difficulty, we combine a regularization scheme with a priori bounds and Whyburn's topological method. Furthermore, via a continuity argument we prove a positivity property for subcontinua of nonnegative solutions. These results are based on a positivity theorem for the associated concave problem, proved by us, and extend previous results established in the power-like case.
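For orientation, a prototypical problem of this concave-convex type (the specific form below is an illustrative assumption, not taken from the paper) is

-\Delta u = a(x)\,u^{q} + b(x)\,u^{p} \quad \text{in } \Omega, \qquad 0 < q < 1 < p, \qquad u \ge 0,

with Dirichlet (u = 0) or Neumann (\partial u/\partial\nu = 0) conditions on \partial\Omega, where the weight functions a and b may change sign. Since u \mapsto u^{q} is not differentiable at u = 0, bifurcation from the trivial solution cannot be handled by the standard differentiable theory, which is precisely the difficulty the regularization scheme is meant to address.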


Author(s):  
Gabriele Eichfelder ◽  
Patrick Groetzner

Abstract: In a single-objective setting, nonconvex quadratic problems can equivalently be reformulated as convex problems over the cone of completely positive matrices. In small dimensions this cone equals the cone of matrices which are entrywise nonnegative and positive semidefinite, so the convex reformulation can be solved via SDP solvers. For multiobjective nonconvex quadratic problems, the question naturally arises whether the advantage of convex reformulations extends to the multicriteria framework. In this note, we show that this approach finds only the supported nondominated points, which can already be found using the weighted sum scalarization of the multiobjective quadratic problem; it is therefore not suitable for multiobjective nonconvex problems.
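For reference, the weighted sum scalarization mentioned above replaces the vector-valued problem \min_{x \in X} (f_1(x), \dots, f_m(x)) by the family of scalar problems

\min_{x \in X} \; \sum_{i=1}^{m} w_i f_i(x), \qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1

(notation assumed here for illustration). The nondominated points obtainable as minimizers of such weighted sums are exactly the supported ones; unsupported nondominated points, which typically occur in nonconvex multiobjective problems, are not optimal for any choice of weights and are therefore missed by this approach.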


2020 ◽  
Author(s):  
Qing Tao

The extrapolation strategy introduced by Nesterov, which can accelerate the convergence of gradient descent methods by orders of magnitude on smooth convex objectives, has led to tremendous success in training machine learning models. In this paper, we theoretically study its strength for the convergence of the individual iterates in general non-smooth convex optimization problems, which we call individual convergence. We prove that Nesterov's extrapolation makes the individual convergence of projected gradient methods optimal for general convex problems, which has been a challenging problem in the machine learning community. In light of this result, a simple modification of the gradient operation suffices to achieve optimal individual convergence for strongly convex problems, which can be regarded as a step towards the open question about SGD posed by Shamir (2012). Furthermore, the derived algorithms are extended to solve regularized non-smooth learning problems in stochastic settings. They can serve as an alternative to the most basic SGD, especially for machine learning problems where an individual output is needed to preserve the regularization structure while keeping an optimal rate of convergence. In particular, our method is applicable as an efficient tool for solving large-scale $l_1$-regularized hinge-loss learning problems. Several experiments on real data demonstrate that the derived algorithms not only achieve optimal individual convergence rates but also guarantee better sparsity than the averaged solution.
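As a minimal sketch of the kind of method discussed, the following plain-NumPy routine combines projected subgradient steps with a Nesterov-style extrapolation step on a toy $l_1$-regularized hinge-loss problem; the objective, the step-size and momentum schedules, and all names (project_l2_ball, extrapolated_projected_subgradient) are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def project_l2_ball(w, radius=10.0):
    # Euclidean projection onto the l2 ball of the given radius.
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def subgradient(w, X, y, lam):
    # A subgradient of the average l1-regularized hinge loss (illustrative objective).
    margins = y * (X @ w)
    active = margins < 1.0                      # samples violating the margin
    g_hinge = -(X[active] * y[active, None]).sum(axis=0) / len(y)
    return g_hinge + lam * np.sign(w)           # sign(0) = 0 is a valid subgradient choice

def extrapolated_projected_subgradient(X, y, lam=0.01, T=2000):
    # Projected subgradient method with a Nesterov-style extrapolation step.
    d = X.shape[1]
    w_prev = w = np.zeros(d)
    for t in range(1, T + 1):
        beta = (t - 1) / (t + 2)                # assumed momentum schedule
        v = w + beta * (w - w_prev)             # extrapolation point
        eta = 1.0 / np.sqrt(t)                  # assumed O(1/sqrt(t)) step size
        w_prev, w = w, project_l2_ball(v - eta * subgradient(v, X, y, lam))
    return w                                    # return the last (individual) iterate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    w_true = np.zeros(50); w_true[:5] = 1.0
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
    w_hat = extrapolated_projected_subgradient(X, y)
    print("nonzeros in last iterate:", np.count_nonzero(np.abs(w_hat) > 1e-3))

The last iterate is returned deliberately: the point of an individual convergence guarantee is that this single output, rather than the average of all iterates, already attains the optimal rate while preserving the sparsity induced by the $l_1$ term.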


2021 ◽  
Vol 31 (3) ◽  
pp. 2141-2170
Author(s):  
Tatiana Tatarenko ◽  
Angelia Nedich
