SVR-Primal Dual Method of Multipliers (PDMM) for Large-Scale Problems

Author(s): Lijanshu Sinha, Ketan Rajawat, Chirag Kumar
Energy, 2020, Vol. 208, pp. 118306
Author(s): Mohamed A. Mohamed, Tao Jin, Wencong Su

Author(s): Minh N. Bùi, Patrick L. Combettes

We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. Under investigation is a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This model encompasses most formulations found in the literature. A limitation of existing primal-dual algorithms is that they operate in a product space that is too small to achieve full splitting of our problem, in the sense of each operator being used individually. To circumvent this difficulty, we recast the problem as that of finding a zero of a saddle operator acting on a bigger space. This leads to an algorithm of unprecedented flexibility, which achieves full splitting, exploits the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration rather than all of them. The latter feature is of critical importance in large-scale problems. The weak convergence of the main algorithm is established, as well as the strong convergence of a variant. Various applications are discussed, and instantiations of the proposed framework in the context of variational inequalities and minimization problems are presented.
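The saddle-operator idea can be illustrated on a simplified composite inclusion; the notation below is an assumption for illustration, not the paper's full multivariate model:

```latex
% Simplified instance (assumed notation): find x such that
%   0 \in Ax + L^{*}B(Lx) + Cx,
% with A set-valued monotone, B monotone, C cocoercive, L linear.
% The associated saddle operator acts on the larger space of
% primal-dual pairs (x, v^{*}):
\boldsymbol{S}\colon (x, v^{*}) \mapsto
  \bigl( Ax + Cx + L^{*}v^{*} \bigr) \times \bigl( B^{-1}v^{*} - Lx \bigr).
% Zeros of S give a primal solution x and a dual solution v^{*},
% and each operator A, B, C, L appears individually (full splitting).
```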


Author(s): Vincent M. Tavakoli, Jesper R. Jensen, Richard Heusdens, Jacob Benesty, Mads G. Christensen

2020, Vol. 85 (2)
Author(s): Radu Ioan Boţ, Axel Böhm

We aim to solve a structured convex optimization problem in which a nonsmooth function is composed with a linear operator. When opting for full splitting schemes, primal–dual type methods are usually employed, as they are effective and well studied. However, under the additional assumption that the nonsmooth function composed with the linear operator is Lipschitz continuous, we can derive novel algorithms through regularization via the Moreau envelope. Furthermore, we tackle large-scale problems by means of stochastic oracle calls, very much in the spirit of stochastic gradient techniques. Applications to total variation denoising and deblurring, and to matrix factorization, are provided.
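A minimal sketch of the Moreau-envelope idea described above, assuming a 1-D total-variation denoising model (the problem instance, function names, and parameter values here are illustrative, not the paper's algorithm): the nonsmooth term λ‖Dx‖₁ is replaced by its Moreau envelope, whose gradient is a clipped (Huber-type) map, and plain gradient descent is run on the smoothed objective.

```python
import numpy as np

def smoothed_tv_denoise(b, lam=0.5, mu=0.01, iters=500):
    """Gradient descent on the Moreau-envelope smoothing of
    0.5*||x - b||^2 + lam*||D x||_1, with D the forward-difference operator."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)      # (n-1) x n: (Dx)_i = x_{i+1} - x_i
    x = b.copy()
    step = 1.0 / (1.0 + 4.0 / mu)       # 1/L, since ||D||^2 <= 4 in 1-D
    for _ in range(iters):
        # gradient of the envelope of lam*||.||_1 at y is clip(y/mu, -lam, lam)
        grad = (x - b) + D.T @ np.clip(D @ x / mu, -lam, lam)
        x -= step * grad
    return x

# noisy piecewise-constant signal: the smoothed problem flattens the noise
rng = np.random.default_rng(0)
sig = np.concatenate([np.zeros(50), np.ones(50)])
noisy = sig + 0.1 * rng.standard_normal(100)
denoised = smoothed_tv_denoise(noisy, lam=0.5, mu=0.01)
```

Because the smoothed objective has a (1 + ‖D‖²/μ)-Lipschitz gradient, the 1/L step size guarantees monotone descent; the stochastic-oracle variant in the abstract would replace the full gradient with sampled components.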


2013, Vol. 16 (08), pp. 1350042
Author(s): Pierre Henry-Labordère

In this paper, we investigate model-independent bounds for option prices given a set of market instruments. This super-replication problem can be written as a semi-infinite linear programming problem. As these super-replication prices can be large and the densities ℚ which achieve the upper bounds quite singular, at a second stage we restrict ℚ to be close, in the entropy sense, to a prior probability measure. This leads to our risk-neutral weighted Monte Carlo approach, which is connected to a constrained convex problem. We explain how to solve these large-scale problems efficiently using a primal-dual interior-point algorithm within a cutting-plane method and a quasi-Newton algorithm. Various examples illustrate the efficiency of these algorithms and their wide range of applicability.
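The first stage above, before the entropic restriction, can be sketched as a finite LP once the terminal-price density is discretized; the grid, reference density, strikes, and payoff below are illustrative assumptions, not the paper's data: maximize the expected exotic payoff over all densities q ≥ 0 that reprice the forward and the quoted calls.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import lognorm

# terminal-price grid and a reference density used to quote "market" calls
s = np.linspace(0.0, 4.0, 401)
p_ref = lognorm.pdf(np.maximum(s, 1e-12), 0.3, scale=1.0)
p_ref /= p_ref.sum()
s0 = p_ref @ s                         # forward implied by the reference
strikes = np.array([0.8, 1.0, 1.2])
calls = np.array([p_ref @ np.maximum(s - K, 0.0) for K in strikes])

# exotic payoff to bound: a digital paying 1{S_T > 1.1}
payoff = (s > 1.1).astype(float)

# sup_q payoff.q  s.t.  q >= 0, sum(q) = 1, forward and calls repriced
A_eq = np.vstack([np.ones_like(s), s]
                 + [np.maximum(s - K, 0.0) for K in strikes])
b_eq = np.concatenate([[1.0, s0], calls])
res = linprog(-payoff, A_eq=A_eq, b_eq=b_eq, bounds=(0, None),
              method="highs")
upper_bound = -res.fun                 # model-independent upper price bound
```

In the paper's setting the strike set is semi-infinite, which is where the cutting-plane method comes in; the entropic second stage would then penalize the divergence of q from the prior p_ref instead of taking the extremal, typically singular, maximizer.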


2021
Author(s): Matthew O'Connor

With ever-growing sources of digital data and the falling cost of small-scale wireless processing nodes equipped with various sensors, microprocessors, and communication systems, there is an increasing need for efficient distributed processing algorithms and techniques. This thesis focuses on the Primal-Dual Method of Multipliers (PDMM) as it applies to wireless sensor networks, and develops new PDMM-based algorithms better suited to the processing-power, battery-life, and memory limitations of these devices. We develop FS-PDMM and QA-PDMM, which greatly improve the efficiency of local node computations for regularized optimization problems and smooth cost-function optimization problems, respectively. We combine these approaches to form the FSQA-PDMM algorithm, which may be applied to problems with smooth cost functions and non-smooth regularization functions. These three methods often eliminate the need for numerical optimization packages, reducing the memory cost on our nodes. We present the FT-PDMM algorithm for finite-time convergence of quadratic consensus problems, reducing the number of in-network iterations required for convergence. Finally, we present two signal processing applications that benefit from our theoretical work: a distributed sparse near-field acoustic beamformer and a distributed image fusion algorithm for use in imaging arrays. Simulated experiments confirm the benefit of our approaches and demonstrate the computational gains to be made by tailoring our techniques towards sensor networks.
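The quadratic consensus problem at the heart of this thesis can be sketched with a minimal synchronous PDMM iteration; this is a textbook-style averaged variant under assumed notation (per-edge duals z_{i|j}, edge orientation A_ij = ±1), not any of the specialized algorithms the thesis develops. Each node solves a local scalar problem and exchanges one dual variable per neighbour per iteration.

```python
import numpy as np

def pdmm_average_consensus(a, edges, c=1.0, iters=300):
    """Averaged synchronous PDMM for min sum_i 0.5*(x_i - a_i)^2
    s.t. x_i = x_j on every edge; converges to the network average."""
    n = len(a)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    z = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}  # dual z_{i|j}
    x = np.array(a, dtype=float)
    for _ in range(iters):
        for i in range(n):
            # edge orientation A_ij = +1 if i < j else -1
            s = sum((1.0 if i < j else -1.0) * z[(i, j)] for j in nbrs[i])
            x[i] = (a[i] - s) / (1.0 + c * len(nbrs[i]))  # local update
        # node i sends y_{i|j} = z_{i|j} + 2c*A_ij*x_i to neighbour j,
        # which averages it into its own dual (averaged/convergent variant)
        y = {(i, j): z[(i, j)] + 2.0 * c * (1.0 if i < j else -1.0) * x[i]
             for (i, j) in z}
        z = {(j, i): 0.5 * z[(j, i)] + 0.5 * y[(i, j)] for (i, j) in y}
    return x

a = [1.0, 3.0, 5.0, 7.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # ring of four nodes
x = pdmm_average_consensus(a, edges)       # each x_i approaches mean(a)
```

The point of the thesis's FS/QA/FT variants is precisely to cheapen or shortcut the local `argmin` and the iteration count in loops like this one on resource-limited nodes.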

