Non-smooth Optimization over Stiefel Manifolds with Applications to Dimensionality Reduction and Graph Clustering

Author(s):  
Fariba Zohrizadeh ◽  
Mohsen Kheirandishfard ◽  
Farhad Kamangar ◽  
Ramtin Madani

This paper is concerned with the class of non-convex optimization problems with orthogonality constraints. We develop computationally efficient relaxations that transform non-convex orthogonality-constrained problems into polynomial-time solvable surrogates. A novel penalization technique is used to enforce feasibility and to derive conditions under which the constraints of the original non-convex problem are guaranteed to be satisfied. Moreover, we extend our approach to a feasibility-preserving sequential scheme that solves a sequence of penalized relaxations to obtain near-globally optimal points. Experimental results on synthetic and real datasets demonstrate the effectiveness of the proposed approach on two practical applications in machine learning.
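The abstract does not spell out the relaxations themselves, but the orthogonality constraint it targets is the Stiefel-manifold condition XᵀX = I. As a minimal illustrative sketch (not the paper's method), the Euclidean projection onto this set can be computed from a thin SVD; the function name `project_stiefel` is our own:

```python
import numpy as np

def project_stiefel(x):
    """Project a matrix onto the Stiefel manifold {X : X^T X = I}.
    The nearest matrix with orthonormal columns (in Frobenius norm)
    is U @ Vt, where U, Vt come from the thin SVD of x."""
    u, _, vt = np.linalg.svd(x, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 2))
q = project_stiefel(x)
# columns of q are orthonormal: q.T @ q is the 2x2 identity
```

Projections of this kind are a common building block in feasibility-restoration steps for orthogonality-constrained problems.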

Author(s):  
Anuj Pal ◽  
Yan Wang ◽  
Ling Zhu ◽  
Guoming George Zhu

Abstract A surrogate-assisted optimization approach is an attractive way to reduce the total computational budget needed to obtain optimal solutions, which makes it especially well suited to practical optimization problems that require a large number of expensive evaluations. Unfortunately, practical applications are affected by measurement noise, and little work has been done on handling stochastic problems with multiple objectives and constraints. This work tries to bridge the gap by demonstrating three different frameworks for performing surrogate-assisted optimization on multiobjective constrained problems with stochastic measurements. To make the algorithms applicable to real-world problems, heteroscedastic (non-uniform) noise is considered in all frameworks. The proposed algorithms are first validated on several multiobjective numerical problems (unconstrained and constrained) to verify their effectiveness, and then applied to a diesel engine calibration problem, which involves expensive evaluations and measurement noise. A GT-SUITE model is used to perform the engine calibration study. Three control parameters, namely the variable geometry turbocharger vane position, the exhaust-gas-recirculation valve position, and the start of injection, are calibrated to obtain the trade-off between engine fuel efficiency (brake-specific fuel consumption) and NOx emissions within the constrained design space. The results show that all three proposed extensions handle the problems well under different measurement noise levels at a reduced evaluation budget. For the engine calibration problem, a good approximation of the optimal region is observed with more than an 80% reduction in evaluation budget for all the proposed methodologies.
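As a toy illustration of the surrogate-assisted loop described above (not the paper's frameworks, which are multiobjective and constrained), the following sketch fits a cheap quadratic surrogate to noisy evaluations of a hypothetical expensive objective with heteroscedastic noise, then re-evaluates at the surrogate's minimizer; all names and constants are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_objective(x):
    """Hypothetical expensive scalar objective with heteroscedastic
    (input-dependent) measurement noise, as in the paper's setting."""
    noise_std = 0.05 * (1.0 + np.abs(x))   # non-uniform noise level
    return (x - 1.0) ** 2 + rng.normal(0.0, noise_std)

# Initial design: a few noisy evaluations.
xs = list(np.linspace(-3.0, 3.0, 5))
ys = [expensive_objective(x) for x in xs]

grid = np.linspace(-3.0, 3.0, 301)
for _ in range(10):
    # Fit a cheap quadratic surrogate to all noisy data gathered so far.
    coeffs = np.polyfit(xs, ys, deg=2)
    # Evaluate the surrogate on a grid and sample at its minimizer.
    x_next = grid[np.argmin(np.polyval(coeffs, grid))]
    xs.append(x_next)
    ys.append(expensive_objective(x_next))

best = xs[int(np.argmin(ys))]
# best should lie near the true minimizer x = 1
```

Real surrogate-assisted methods replace the quadratic fit with richer models (e.g., Gaussian processes) and add infill criteria that balance exploration and exploitation, but the evaluate-fit-optimize-re-evaluate loop is the same.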


2020 ◽  
Vol 28 (4) ◽  
pp. 611-642 ◽  
Author(s):  
Ernö Robert Csetnek

Abstract The aim of this survey is to present the most important techniques and tools from variational analysis used in first- and second-order dynamical systems of implicit type for solving monotone inclusions and non-smooth optimization problems. The differential equations are expressed by means of the resolvent (in the case of a maximally monotone set-valued operator) or the proximal operator for non-smooth functions. The asymptotic analysis of the generated trajectories relies on Lyapunov theory, where an appropriate energy functional plays a decisive role. While most of the paper is related to monotone inclusions and convex optimization problems, we also present results for dynamical systems solving non-convex optimization problems, where the Kurdyka-Łojasiewicz property is used.
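As a concrete instance of the proximal operator such dynamical systems are built around, the prox of λ‖·‖₁ is soft-thresholding, and an explicit Euler discretization of the implicit system ẋ = prox(x − γ∇f(x)) − x recovers a relaxed proximal-gradient iteration. The lasso instance below is our own illustrative choice, not an example from the survey:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Explicit Euler discretization of the implicit dynamical system
#   x'(t) = prox_{gamma g}(x(t) - gamma * grad f(x(t))) - x(t)
# for f(x) = 0.5 * ||A x - b||^2 and g = lam * ||.||_1 (a lasso problem).
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 5))
b = a @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])   # noiseless planted signal

x = np.zeros(5)
gamma = 1.0 / np.linalg.norm(a, 2) ** 2        # step within 1/L
lam, h = 0.1, 0.5                              # reg. weight, Euler step size
for _ in range(2000):
    grad = a.T @ (a @ x - b)
    x = x + h * (soft_threshold(x - gamma * grad, gamma * lam) - x)
# x approaches a lasso solution close to the planted coefficients
```

With h = 1 this is exactly the proximal-gradient (ISTA) iteration; smaller h mimics following the continuous-time trajectory more closely.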


Algorithms ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 8 ◽  
Author(s):  
Angel Alejandro Juan ◽  
Canan Gunes Corlu ◽  
Rafael David Tordecilla ◽  
Rocio de la Torre ◽  
Albert Ferrer

Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, some deadlines frequently cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., constraints that can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost is nonlinear and even discontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus requiring short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
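A minimal sketch of the biased-randomization idea (our own toy instance, not an example from the paper): a nearest-neighbour constructive heuristic for a small routing problem in which the greedy "pick the closest city" step is replaced by a geometrically biased pick from the best-first candidate list, with several randomized restarts keeping the best solution found:

```python
import math
import random

def biased_index(n, beta, rng):
    """Sample a rank in [0, n) with a geometric bias: rank k has
    probability ~ beta * (1 - beta)**k, truncated to the list length,
    so better-ranked candidates are favoured but never forced."""
    return int(math.log(rng.random()) / math.log(1.0 - beta)) % n

def biased_greedy_tour(dist, beta, rng):
    """Nearest-neighbour constructive heuristic whose greedy choice
    is replaced by a biased-randomized pick from the sorted list."""
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        ranked = sorted(unvisited, key=lambda j: dist[tour[-1]][j])
        tour.append(ranked[biased_index(len(ranked), beta, rng)])
        unvisited.remove(tour[-1])
    return tour

def tour_length(dist, tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

rng = random.Random(7)
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
# Many biased-randomized restarts; keep the best tour found.
best = min((biased_greedy_tour(dist, 0.3, rng) for _ in range(50)),
           key=lambda t: tour_length(dist, t))
```

Because restarts are independent, this outer loop parallelizes trivially, which is the property the paper highlights; no gradient or smoothness assumption is needed anywhere.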


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Darina Dvinskikh ◽  
Alexander Gasnikov

Abstract We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both primal and dual oracles, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objectives, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and the notion of smoothness. Using a mini-batching technique, we show that the proposed methods with a stochastic oracle can additionally be parallelized at each node. The considered algorithms can be applied to many data science problems and inverse problems.
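The abstract does not give the algorithms explicitly; as an illustrative sketch of the decentralized setting only (plain decentralized gradient descent with a gossip matrix, not the paper's communication-optimal methods), consider four nodes minimizing the average of private quadratics over a ring network:

```python
import numpy as np

# Each node i holds a private quadratic f_i(x) = 0.5 * (x - c[i])**2;
# the network minimizes the average of the f_i without a central server.
c = np.array([1.0, 3.0, 5.0, 7.0])           # local targets (n = 4 nodes)
n = len(c)

# Doubly stochastic mixing (gossip) matrix for a ring topology.
w = np.zeros((n, n))
for i in range(n):
    w[i, i] = 0.5
    w[i, (i - 1) % n] = 0.25
    w[i, (i + 1) % n] = 0.25

x = np.zeros(n)                              # one scalar iterate per node
for t in range(1, 3001):
    grad = x - c                             # local gradients, in parallel
    x = w @ x - (1.0 / t) * grad             # gossip step + diminishing step
# all iterates reach consensus near the global minimizer mean(c) = 4
```

Each iteration costs one round of neighbour-to-neighbour communication (the multiplication by w) and one local gradient evaluation per node, which is the accounting in which the paper's methods are optimal.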


2021 ◽  
Vol 15 (6) ◽  
pp. 1-20 ◽  
Author(s):  
Dongsheng Li ◽  
Haodong Liu ◽  
Chao Chen ◽  
Yingying Zhao ◽  
Stephen M. Chu ◽  
...  

In collaborative filtering (CF) algorithms, the optimal models are usually learned by globally minimizing the empirical risk averaged over all the observed data. However, the global models are often obtained via a performance tradeoff among users/items, i.e., not all users/items are perfectly fitted by the global models, due to the hard non-convex optimization problems in CF algorithms. Ensemble learning can address this issue by learning multiple diverse models, but it usually suffers from efficiency issues on large datasets or with complex algorithms. In this article, we keep the intermediate models obtained during global model learning as snapshot models, and then adaptively combine the snapshot models for individual user-item pairs using a memory-network-based method. Empirical studies on three real-world datasets show that the proposed method can extensively and significantly improve the accuracy (up to 15.9% relatively) when applied to a variety of existing collaborative filtering methods.
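A minimal sketch of the snapshot idea under simplifying assumptions: intermediate factor models are saved during (full-batch) matrix-factorization training and their predictions combined. For simplicity we average snapshots uniformly, whereas the paper learns per user-item combination weights with a memory network; all data and constants here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix with an observed-entry mask.
r_true = rng.integers(1, 6, size=(30, 20)).astype(float)
mask = rng.random((30, 20)) < 0.5

k = 4                                        # latent dimension
u = 0.1 * rng.standard_normal((30, k))       # user factors
v = 0.1 * rng.standard_normal((20, k))       # item factors

snapshots = []
for epoch in range(60):
    err = mask * (r_true - u @ v.T)          # error on observed entries only
    # Full-batch gradient step on 0.5 * ||mask * (R - U V^T)||^2.
    u, v = u + 0.01 * err @ v, v + 0.01 * err.T @ u
    if (epoch + 1) % 20 == 0:                # keep intermediate models
        snapshots.append(u @ v.T)

# Combine the snapshot models (here: a plain average over snapshots;
# the paper instead learns per user-item weights with a memory network).
pred = np.mean(snapshots, axis=0)
```

The appeal over classical ensembling is that the snapshots come for free from a single training run, so the extra cost is only storage and the combination step.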

