Convex Problems
Recently Published Documents


TOTAL DOCUMENTS: 167 (FIVE YEARS: 48)
H-INDEX: 20 (FIVE YEARS: 2)

Electronics ◽ 2022 ◽ Vol 11 (2) ◽ pp. 200
Author(s): Hongxia Zheng ◽ Chiya Zhang ◽ Yatao Yang ◽ Xingquan Li ◽ Chunlong He

We maximize the transmit rate of a device-to-device (D2D) pair in a reconfigurable intelligent surface (RIS) assisted D2D communication system, subject to the unit-modulus constraints of the reflecting elements and the transmit power limits of the base station (BS) and of the transmitter in the D2D pair. Since this is a non-convex optimization problem, the block coordinate descent (BCD) technique is adopted to decouple it into three subproblems. The non-convex subproblems are then approximated by convex problems using successive convex approximation (SCA) and penalty convex-concave procedure (CCP) techniques. Finally, the solution of the original problem is obtained by iteratively optimizing the subproblems. Simulation results confirm the validity of the proposed algorithm and illustrate the effectiveness of the RIS in improving the transmit rate of the D2D pair even with hardware impairments.
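For readers unfamiliar with how SCA convexifies a non-convex subproblem, the following is a minimal sketch of the idea on a toy difference-of-convex problem; it uses CVXPY and stands in for one BCD subproblem, not the authors' actual D2D rate-maximization model.

```python
# Minimal successive convex approximation (SCA) sketch on a toy
# difference-of-convex problem: minimize g(x) - h(x) with
# g(x) = ||x - a||^2 (convex) and h(x) = 2*||x||^2 (convex),
# linearizing h around the current iterate. Toy stand-in only; this is
# NOT the paper's D2D rate-maximization formulation.
import cvxpy as cp
import numpy as np

np.random.seed(0)
n = 5
a = np.random.randn(n)

x_k = np.zeros(n)                           # current iterate
for it in range(50):
    x = cp.Variable(n)
    # Linearize h at x_k: h(x) ~ h(x_k) + grad h(x_k)' (x - x_k)
    h_lin = 2 * np.sum(x_k ** 2) + 4 * x_k @ (x - x_k)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - a) - h_lin),
                      [cp.norm(x, "inf") <= 1])
    prob.solve()
    if np.linalg.norm(x.value - x_k) < 1e-6:  # stopped at a stationary point
        break
    x_k = x.value
print("SCA stationary point:", np.round(x_k, 3))
```

Each iteration replaces the concave part of the objective by its tangent, yielding a convex surrogate whose minimizer monotonically decreases the true objective.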


Author(s): Jingyan Xu ◽ Frédéric Noo

Abstract: We are interested in learning the hyperparameters of a convex objective function in a supervised setting. The complex relationship between the input data of the convex problem and the desirable hyperparameters can be modeled by a neural network; the hyperparameters and the data then drive the convex minimization problem, whose solution is compared to the training labels. In our previous work [1], we evaluated a prototype of this learning strategy in an optimization-based sinogram smoothing plus FBP reconstruction framework. A question arising in this setting is how to efficiently compute (backpropagate) the gradient from the solution of the optimization problem to the hyperparameters, so as to enable end-to-end training. In this work, we first develop general formulas for gradient backpropagation for a subset of convex problems, namely the proximal mapping. To illustrate the value of the general formulas and demonstrate how to use them, we consider the specific instance of 1-D quadratic smoothing (denoising), whose solution admits a dynamic programming (DP) algorithm. The general formulas lead to another DP algorithm for exact computation of the gradient with respect to the hyperparameters. Our numerical studies demonstrate a 55-65% computation-time saving when providing a custom gradient instead of relying on automatic differentiation in deep learning libraries. While our discussion focuses on 1-D quadratic smoothing, our initial results (not presented) support the statement that the general formulas and the computational strategy apply equally well to TV or Huber smoothing problems on simple graphs whose solutions can be computed exactly via DP.
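As a concrete illustration of differentiating through a proximal mapping, here is a hedged sketch of implicit differentiation for the 1-D quadratic smoothing problem described above, done with dense linear algebra rather than the paper's DP algorithm; the data and matrix sizes are illustrative assumptions.

```python
# Hedged sketch: implicit differentiation of 1-D quadratic smoothing
#   x*(lam) = argmin_x 0.5*||x - y||^2 + lam*||D x||^2
# with respect to the hyperparameter lam. Dense illustration only,
# not the paper's dynamic-programming algorithm.
import numpy as np

def smooth(y, lam):
    """Solve (I + 2*lam*D'D) x = y, D = first-difference matrix."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n differences
    A = np.eye(n) + 2 * lam * D.T @ D
    return np.linalg.solve(A, y), D, A

y = np.sin(np.linspace(0, 3, 50)) + 0.1 * np.random.randn(50)
lam = 0.5
x, D, A = smooth(y, lam)

# Implicit function theorem on A(lam) x = y:
#   dA/dlam * x + A * dx/dlam = 0  =>  dx/dlam = -A^{-1} (2 D'D x)
dx_dlam = np.linalg.solve(A, -2 * D.T @ (D @ x))

# Finite-difference check of the custom gradient
eps = 1e-6
x_eps, _, _ = smooth(y, lam + eps)
print(np.max(np.abs((x_eps - x) / eps - dx_dlam)))  # small, ~1e-6
```

In end-to-end training, the chain rule then gives dL/dlam = (dL/dx) . dx/dlam, computable with a single adjoint solve against A.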


Author(s): Gabriele Eichfelder ◽ Patrick Groetzner

Abstract: In a single-objective setting, nonconvex quadratic problems can equivalently be reformulated as convex problems over the cone of completely positive matrices. In small dimensions this cone equals the cone of matrices which are entrywise nonnegative and positive semidefinite, so the convex reformulation can be solved via SDP solvers. For multiobjective nonconvex quadratic problems, the question naturally arises whether the advantage of convex reformulations extends to the multicriteria framework. In this note, we show that this approach finds only the supported nondominated points, which can already be found using the weighted sum scalarization of the multiobjective quadratic problem; i.e., it is not suitable for multiobjective nonconvex problems.
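To make the role of the weighted sum scalarization concrete, here is a small sketch that sweeps the scalarization weight on a toy biobjective quadratic (convex here, purely for illustration) and traces out the supported nondominated points; the objectives are assumptions, not the note's problem class.

```python
# Hedged sketch: weighted-sum scalarization of a toy biobjective
# quadratic on R^2. Sweeping w in [0, 1] recovers only the supported
# nondominated points, which is the limitation the note discusses.
import numpy as np

# Two illustrative quadratic objectives (assumed)
def f1(x): return np.sum((x - np.array([1.0, 0.0])) ** 2)
def f2(x): return np.sum((x - np.array([0.0, 1.0])) ** 2)

for w in np.linspace(0, 1, 5):
    # For these quadratics the weighted-sum minimizer has a closed form:
    #   argmin_x w*f1(x) + (1-w)*f2(x) = w*[1,0] + (1-w)*[0,1]
    x = w * np.array([1.0, 0.0]) + (1 - w) * np.array([0.0, 1.0])
    print(f"w={w:.2f}  x={x}  (f1, f2)=({f1(x):.3f}, {f2(x):.3f})")
```

Nondominated points lying in a non-convex region of the image set (unsupported points) are never minimizers of any weighted sum, which is why an approach equivalent to weighted-sum scalarization misses them.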


Author(s): Quoc Tran-Dinh ◽ Ling Liang ◽ Kim-Chuan Toh

This paper proposes two novel ideas for developing new proximal variable-metric methods for solving a class of composite convex optimization problems. The first idea is a new parameterization of the optimality condition, used to design a class of homotopy proximal variable-metric algorithms that achieve linear convergence and finite global iteration-complexity bounds. We identify at least three subclasses of convex problems to which our approach applies to achieve linear convergence rates. The second idea is a new primal-dual-primal framework for implementing proximal Newton methods that has attractive computational features for a subclass of nonsmooth composite convex minimization problems. We specialize the proposed algorithm to a covariance estimation problem in order to demonstrate its computational advantages. Numerical experiments on four concrete applications are given to illustrate the theoretical and computational advances of the new methods compared with other state-of-the-art algorithms.
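As background, a single step of a proximal method with a (diagonal) variable metric can be sketched as follows; the diagonal majorizer and the toy lasso-type objective are illustrative assumptions, not the algorithms developed in the paper.

```python
# Hedged sketch: one proximal step in a diagonal variable metric H for
#   min_x f(x) + lam*||x||_1,  f(x) = 0.5*||A x - b||^2  (assumed toy f).
import numpy as np

def scaled_prox_l1(v, lam, H_diag):
    """prox of lam*||.||_1 in metric <z, H z>: per-coordinate soft-threshold."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / H_diag, 0.0)

def vm_prox_step(x, grad_f, H_diag, lam):
    """x+ = argmin_z lam*||z||_1 + grad_f(x)'(z-x) + 0.5*(z-x)'H(z-x)."""
    v = x - grad_f(x) / H_diag              # metric-scaled gradient step
    return scaled_prox_l1(v, lam, H_diag)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)); b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
# Gershgorin-type diagonal majorizer of A'A, so each step is a descent step
H_diag = np.sum(np.abs(A.T @ A), axis=1)
x = np.zeros(10)
for _ in range(300):
    x = vm_prox_step(x, grad_f, H_diag, lam=0.5)
print("nonzeros:", np.count_nonzero(np.abs(x) > 1e-8))
```

A proximal Newton method replaces the diagonal H by (an approximation of) the Hessian of f, at the cost of a harder scaled proximal subproblem; the paper's primal-dual-primal framework targets exactly that subproblem.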


Author(s): Saar Cohen ◽ Noa Agmon

A network of robots can be viewed as a signal graph: the nodes of a graph describing the underlying, naturally distributed network topology are assigned the data values associated with each robot. Graph neural networks (GNNs) learn representations from signal graphs, making them well-suited candidates for learning distributed controllers. Existing GNN architectures often assume ideal scenarios, ignoring the possibility that this distributed graph may change over time due to link failures or topology variations, as found in dynamic settings. This creates a mismatch between the graphs on which GNNs are trained and those on which they are tested. Using online learning, GNNs can be retrained at testing time, overcoming this issue. However, most online algorithms are centralized and work only on convex problems (which GNNs rarely yield). This paper introduces novel architectures which overcome the convexity restriction and can easily be updated in a distributed, online manner. Finally, we provide experiments showing how these models can be applied to optimize formation control in a swarm of flocking robots.
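For intuition, a distributed GNN layer is typically a polynomial graph filter whose k-th tap needs only k-hop exchanges with neighbors; the following is a minimal numpy sketch under an assumed random communication graph, not the architectures proposed in the paper.

```python
# Hedged sketch of a polynomial graph filter layer,
#   Y = sigma( sum_k S^k X H_k ),
# where S is the graph shift (here the adjacency matrix) and each
# multiplication by S is one round of neighbor-to-neighbor exchange.
import numpy as np

def graph_filter_layer(S, X, H_taps):
    """S: n x n shift operator, X: n x f node features, H_taps: list of f x g."""
    Z = np.zeros((X.shape[0], H_taps[0].shape[1]))
    Xk = X
    for H_k in H_taps:
        Z += Xk @ H_k          # mix the k-hop aggregated features
        Xk = S @ Xk            # one more local communication round
    return np.tanh(Z)          # pointwise nonlinearity

# Toy network of 6 robots on an assumed random symmetric graph
rng = np.random.default_rng(1)
S = (rng.random((6, 6)) < 0.4).astype(float)
S = np.triu(S, 1); S = S + S.T                      # undirected, no self-loops
X = rng.standard_normal((6, 3))                     # per-robot features
H_taps = [0.1 * rng.standard_normal((3, 2)) for _ in range(3)]  # K = 3 taps
print(graph_filter_layer(S, X, H_taps).shape)       # (6, 2) control outputs
```

Because each robot only ever combines its own state with messages from neighbors, the same filter taps H_k can be executed fully distributedly, and retraining them online is what compensates for a shift S that drifts from the training graphs.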


2021 ◽ Vol 31 (3) ◽ pp. 2141-2170
Author(s): Tatiana Tatarenko ◽ Angelia Nedich

2021 ◽ pp. 483-537
Author(s): Andreas Antoniou ◽ Wu-Sheng Lu

2021 ◽ Vol 0 (0) ◽ pp. 0
Author(s): Kai Wang ◽ Deren Han

In this paper, we consider the general first-order primal-dual algorithm, which covers several recent popular algorithms, such as the one proposed in [Chambolle, A. and Pock, T., A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vis., 40 (2011) 120-145], as a special case. Under suitable conditions, we prove its global convergence and analyze its linear rate of convergence. Compared with results in the literature, we derive the corresponding results for the general case and under weaker conditions. Furthermore, the global linear rate of the linearized primal-dual algorithm is established within the same analytical framework.
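For reference, the Chambolle-Pock iteration from the cited paper, applied here to an assumed toy instance min_x ||Kx - b||_1 + (mu/2)||x||^2, looks as follows; the problem data and step-size choices are illustrative.

```python
# Minimal sketch of the first-order primal-dual (Chambolle-Pock)
# iteration for min_x F(Kx) + G(x), instantiated on a toy problem:
#   F(z) = ||z - b||_1,  G(x) = (mu/2)*||x||^2   (assumed instance).
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 10
K = rng.standard_normal((m, n)); b = K @ rng.standard_normal(n)
mu = 0.1

L = np.linalg.norm(K, 2)                 # operator norm of K
tau = sigma = 0.9 / L                    # ensures tau*sigma*L^2 < 1
theta = 1.0

x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
for _ in range(500):
    # Dual step: prox of sigma*F*, with F*(y) = b'y + indicator{||y||_inf <= 1},
    # i.e. a projection onto the infinity-norm ball after a shift by sigma*b.
    y = np.clip(y + sigma * (K @ x_bar) - sigma * b, -1.0, 1.0)
    # Primal step: prox of tau*G is a simple scaling
    x_new = (x - tau * (K.T @ y)) / (1.0 + tau * mu)
    # Extrapolation step
    x_bar = x_new + theta * (x_new - x)
    x = x_new
print("primal objective:", np.sum(np.abs(K @ x - b)) + 0.5 * mu * x @ x)
```

The "general first-order primal-dual algorithm" studied in the paper keeps this primal step / dual step / extrapolation template but allows more general parameter choices, with the iteration above recovered as a special case.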

