Constrained Nonlinear Optimization
Recently Published Documents

TOTAL DOCUMENTS: 98 (five years: 22)
H-INDEX: 14 (five years: 2)

Author(s): S. Lämmel, V. Shikhman

Abstract: We study sparsity constrained nonlinear optimization (SCNO) from a topological point of view. Special focus is on M-stationary points from Burdakov et al. (SIAM J Optim 26:397–425, 2016), also introduced as $$N^C$$-stationary points in Pan et al. (J Oper Res Soc China 3:421–439, 2015). We introduce nondegenerate M-stationary points and define their M-index. We show that all M-stationary points are generically nondegenerate. In particular, the sparsity constraint is active at all local minimizers of a generic SCNO. Some relations to other stationarity concepts, such as S-stationarity, basic feasibility, and CW-minimality, are discussed in detail. By doing so, the issues of instability and degeneracy of points arising from the different stationarity concepts are highlighted. The concept of M-stationarity allows us to adequately describe the global structure of SCNO along the lines of Morse theory. For that, we study topological changes of lower level sets while passing an M-stationary point. As a novelty for SCNO, multiple cells of dimension equal to the M-index need to be attached. This intriguing fact is in strong contrast with other optimization problems considered before, where just one cell suffices. As a consequence, we derive a Morse relation for SCNO, which relates the number of local minimizers to the number of M-stationary points with M-index equal to one. The appearance of such saddle points thus cannot be neglected from the perspective of global optimization. Due to the multiplicity phenomenon in cell attachment, a saddle point may lead to more than two different local minimizers. We conclude that this relatively involved structure of saddle points is a source of the well-known difficulty of solving SCNO to global optimality.
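To make the setting concrete, the following minimal sketch (Python/NumPy) sets up an SCNO instance, min f(x) subject to ||x||_0 <= s, and checks an illustrative first-order condition: the partial derivatives of f vanish on the support of x, and the full gradient vanishes when the sparsity constraint is inactive. This condition and the function name are assumptions made for illustration only; the precise definitions of M-stationarity and the related concepts are those in the Burdakov et al. and Pan et al. references cited above.

```python
import numpy as np

def is_m_stationary_candidate(grad_f, x, s, tol=1e-8):
    """Illustrative first-order check for the SCNO  min f(x) s.t. ||x||_0 <= s.

    Assumed condition (a sketch, not the exact definition from the cited
    papers): the gradient of f must vanish on the support of x; if the
    sparsity constraint is inactive (||x||_0 < s), the full gradient must
    vanish.
    """
    x = np.asarray(x, dtype=float)
    g = np.asarray(grad_f(x), dtype=float)
    support = np.flatnonzero(np.abs(x) > tol)
    if len(support) > s:
        return False                              # violates the sparsity constraint
    if len(support) < s:
        return bool(np.all(np.abs(g) <= tol))     # constraint inactive: full gradient zero
    return bool(np.all(np.abs(g[support]) <= tol))  # constraint active: gradient zero on support

# Example: f(x) = 0.5 * ||x - a||^2 with a = (2, 0, 1) and sparsity level s = 1.
a = np.array([2.0, 0.0, 1.0])
grad = lambda x: x - a
print(is_m_stationary_candidate(grad, np.array([2.0, 0.0, 0.0]), s=1))  # True
print(is_m_stationary_candidate(grad, np.array([0.0, 0.0, 1.0]), s=1))  # True: a second candidate
print(is_m_stationary_candidate(grad, np.zeros(3), s=1))                # False
```

In this toy example, two distinct feasible points with an active sparsity constraint pass the check, which echoes the abstract's observation that an SCNO typically has several stationary points and local minimizers to account for.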


Author(s): David Ek, Anders Forsgren

Abstract: The focus of this paper is interior-point methods for bound-constrained nonlinear optimization, where the systems of nonlinear equations that arise are solved with Newton's method. There is a trade-off between solving Newton systems directly, which gives high-quality solutions, and solving many approximate Newton systems, which are computationally less expensive but give lower-quality solutions. We propose partial and full approximate solutions to the Newton systems. The specific approximate solution depends on estimates of the active and inactive constraints at the solution; these sets are estimated at each iteration by basic heuristics. The partial approximate solutions are computationally inexpensive, whereas a system of linear equations needs to be solved for the full approximate solution. The size of this system is determined by the estimate of the inactive constraints at the solution. In addition, we motivate and suggest two Newton-like approaches that are based on an intermediate step consisting of the partial approximate solutions. The theoretical setting is introduced and asymptotic error bounds are given. We also give numerical results to investigate the performance of the approximate solutions within and beyond the theoretical framework.
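As a rough illustration of the idea, the sketch below (Python/NumPy) performs one primal-dual interior-point step for min f(x) subject to x >= 0: the condensed Newton system is solved only on an estimated inactive set, while the estimated active components receive a cheap diagonal (partial) update. The active-set heuristic and all names here are assumptions for illustration, not the estimates or algorithms proposed in the paper.

```python
import numpy as np

def approximate_newton_step(grad_f, hess_f, x, z, mu, active_tol=1e-2):
    """One sketched interior-point step for  min f(x)  s.t.  x >= 0.

    Perturbed KKT system: grad f(x) - z = 0,  x_i * z_i = mu.
    The condensed Newton system
        (H + X^{-1} Z) dx = -(grad f(x) - mu * X^{-1} e)
    is solved only on an estimated inactive set; the estimated active
    components get a cheap diagonal update instead. The active-set
    heuristic below is an assumption for illustration.
    """
    n = x.size
    g = grad_f(x)
    H = hess_f(x)
    r = g - mu / x                      # residual of the condensed system
    D = z / x                           # diagonal of X^{-1} Z

    # Heuristic estimate: components with small x and larger multiplier
    # are treated as active at the solution.
    active = (x < active_tol) & (z > x)
    inactive = ~active

    dx = np.zeros(n)
    # Partial (cheap) update on the estimated active set: diagonal solve only.
    dx[active] = -r[active] / (H.diagonal()[active] + D[active])
    # Full Newton solve restricted to the estimated inactive set.
    if inactive.any():
        K = H[np.ix_(inactive, inactive)] + np.diag(D[inactive])
        dx[inactive] = np.linalg.solve(K, -r[inactive])

    dz = (mu - z * x - z * dx) / x      # dual step from linearizing x_i * z_i = mu
    return dx, dz

# Tiny usage example: f(x) = 0.5 * x^T Q x - c^T x  with the bound x >= 0.
Q = np.diag([1.0, 2.0, 4.0])
c = np.array([1.0, -1.0, 2.0])
grad = lambda x: Q @ x - c
hess = lambda x: Q
x = np.ones(3); z = np.ones(3)
dx, dz = approximate_newton_step(grad, hess, x, z, mu=1e-2)
print(dx, dz)
```

The linear solve involves only the estimated inactive variables, which is where the computational saving comes from; if the estimate is poor near the solution, the step quality degrades, reflecting the trade-off the abstract describes.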


2020, Vol 5 (54), pp. 2564
Author(s): Neil Wu, Gaetan Kenway, Charles Mader, John Jasa, Joaquim Martins
