convex program
Recently Published Documents

TOTAL DOCUMENTS: 47 (five years: 7)
H-INDEX: 12 (five years: 1)

Author(s):  
Viet Anh Nguyen ◽  
Soroosh Shafieezadeh-Abadeh ◽  
Daniel Kuhn ◽  
Peyman Mohajerin Esfahani

We introduce a distributionally robust minimum mean square error estimation model with a Wasserstein ambiguity set to recover an unknown signal from a noisy observation. The proposed model can be viewed as a zero-sum game between a statistician choosing an estimator—that is, a measurable function of the observation—and a fictitious adversary choosing a prior—that is, a pair of signal and noise distributions ranging over independent Wasserstein balls—with the goals of minimizing and maximizing the expected squared estimation error, respectively. We show that, if the Wasserstein balls are centered at normal distributions, then the zero-sum game admits a Nash equilibrium, at which the players’ optimal strategies are given by an affine estimator and a normal prior, respectively. We further prove that this Nash equilibrium can be computed by solving a tractable convex program. Finally, we develop a Frank–Wolfe algorithm that can solve this convex program orders of magnitude faster than state-of-the-art general-purpose solvers. We show that this algorithm enjoys a linear convergence rate and that its direction-finding subproblems can be solved in quasi-closed form.
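In the special case where both Wasserstein balls shrink to their Gaussian centers, the equilibrium reduces to the classical affine (Wiener) MMSE estimator. A minimal NumPy sketch of that nominal Gaussian case (the dimensions and covariances below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Nominal Gaussian MMSE estimation (the non-robust special case):
# for signal x ~ N(mu_x, Sx) and independent noise w ~ N(0, Sw),
# the MMSE estimator of x from y = x + w is affine:
#   xhat(y) = mu_x + Sx (Sx + Sw)^{-1} (y - mu_x)
def affine_mmse(y, mu_x, Sx, Sw):
    K = Sx @ np.linalg.inv(Sx + Sw)  # Wiener gain matrix
    return mu_x + K @ (y - mu_x)

rng = np.random.default_rng(0)
d = 3
Sx, Sw = 2.0 * np.eye(d), 0.5 * np.eye(d)
mu_x = np.zeros(d)
y = rng.standard_normal(d)
xhat = affine_mmse(y, mu_x, Sx, Sw)
# with these isotropic covariances the gain is 2 / (2 + 0.5) = 0.8
assert np.allclose(xhat, 0.8 * y)
```

The robust equilibrium estimator of the paper is also affine, but with a gain determined by the worst-case covariances inside the Wasserstein balls rather than the nominal ones.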


Author(s):  
Fabian Jaensch ◽  
Peter Jung

Abstract We consider a structured estimation problem where an observed matrix is assumed to be generated as an $s$-sparse linear combination of $N$ given $n\times n$ positive semidefinite matrices. Recovering the unknown $N$-dimensional and $s$-sparse weights from noisy observations is an important problem in various fields of signal processing and also a relevant preprocessing step in covariance estimation. We will present related recovery guarantees and focus on the case of non-negative weights. The problem is formulated as a convex program and can be solved without further tuning. Such robust, non-Bayesian and parameter-free approaches are important for applications where prior distributions and further model parameters are unknown. Motivated by explicit applications in wireless communication, we will consider the particular rank-one case, where the known matrices are outer products of i.i.d. zero-mean sub-Gaussian $n$-dimensional complex vectors. We show that, for given $n$ and $N$, one can recover non-negative $s$-sparse weights with a parameter-free convex program once $s\leq O(n^2 / \log ^2(N/n^2))$. Our error estimate scales linearly in the instantaneous noise power, and the convex algorithm does not need prior bounds on the noise. Such estimates are important if the magnitude of the additive distortion depends on the unknown itself.
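As a simplified, real-valued and noiseless illustration of the recovery task (not the paper's exact complex sub-Gaussian setting), nonnegative sparse weights of rank-one matrices can be recovered parameter-free by nonnegative least squares once the measurement matrices are vectorized; the sizes below are arbitrary:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, N, s = 8, 30, 3                       # illustrative sizes; weights are s-sparse
A = rng.standard_normal((N, n))          # rows a_j: i.i.d. zero-mean vectors
c_true = np.zeros(N)
c_true[rng.choice(N, size=s, replace=False)] = rng.uniform(1.0, 2.0, s)
Y = sum(c_true[j] * np.outer(A[j], A[j]) for j in range(N))  # observed matrix

# vectorize each rank-one matrix a_j a_j^T into a column of M and solve
#   minimize ||M c - vec(Y)||_2   subject to   c >= 0   (no tuning parameter)
M = np.stack([np.outer(A[j], A[j]).ravel() for j in range(N)], axis=1)
c_hat, _ = nnls(M, Y.ravel())
assert np.allclose(c_hat, c_true, atol=1e-6)
```

In the noiseless case with generic Gaussian vectors the vectorized system has full column rank, so the nonnegativity constraint alone pins down the true weights.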


Author(s):  
Max Klimm ◽  
Philipp Warode

We develop algorithms solving parametric flow problems with separable, continuous, piecewise quadratic, and strictly convex cost functions. The parameter to be considered is a common multiplier on the demand of all nodes. Our algorithms compute a family of flows that are each feasible for the respective demand and minimize the costs among the feasible flows for that demand. For single-commodity networks with homogeneous cost functions, our algorithm requires one matrix multiplication for the initialization, a rank-1 update for each nondegenerate step, and the solution of a convex quadratic program for each degenerate step. For nonhomogeneous cost functions, the initialization requires the solution of a convex quadratic program instead. For multi-commodity networks, both the initialization and every step of the algorithm require the solution of a convex program. As each step is mirrored by a breakpoint in the output, this yields output-polynomial algorithms in every case.
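A toy illustration of why nondegenerate steps are cheap (this is not the paper's algorithm): on a network of two parallel arcs with strictly convex quadratic costs and no active side constraints, the optimal flow is an affine function of the demand multiplier, so it can be traced in closed form between breakpoints. The arc costs below are made up:

```python
import math

# minimize a1*x1^2/2 + a2*x2^2/2  subject to  x1 + x2 = demand;
# the KKT system gives x1 = demand * a2 / (a1 + a2), linear in the demand,
# so the whole parametric family needs only one solve per breakpoint
def optimal_split(a1, a2, demand):
    x1 = demand * a2 / (a1 + a2)
    return x1, demand - x1

for lam in (0.5, 1.0, 2.0):              # demand multiplier
    x1, x2 = optimal_split(1.0, 3.0, lam)
    assert math.isclose(x1, 0.75 * lam) and math.isclose(x2, 0.25 * lam)
```

With piecewise quadratic costs the gain coefficients change only at breakpoints, which is where the degenerate steps (and the convex QP solves) occur.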


2021 ◽  
Author(s):  
Ajinkya Kadu ◽  
Tristan van Leeuwen ◽  
Kees Joost Batenburg

2020 ◽  
Vol 66 (8) ◽  
pp. 3635-3656 ◽  
Author(s):  
Srikanth Jagabathula ◽  
Lakshminarayanan Subramanian ◽  
Ashwin Venkataraman

Mixture models are versatile tools that are used extensively in many fields, including operations, marketing, and econometrics. The main challenge in estimating mixture models is that the mixing distribution is often unknown, and imposing a priori parametric assumptions can lead to model misspecification issues. In this paper, we propose a new methodology for nonparametric estimation of the mixing distribution of a mixture of logit models. We formulate the likelihood-based estimation problem as a constrained convex program and apply the conditional gradient (also known as Frank–Wolfe) algorithm to solve this convex program. We show that our method iteratively generates the support of the mixing distribution and the mixing proportions. Theoretically, we establish the sublinear convergence rate of our estimator and characterize the structure of the recovered mixing distribution. Empirically, we test our approach on real-world datasets. We show that it outperforms the standard expectation-maximization (EM) benchmark on speed (16 times faster), in-sample fit (up to 24% reduction in the log-likelihood loss), predictive accuracy (average 28% reduction in standard error metrics), and decision accuracy (extracts around 23% more revenue). On synthetic data, we show that our estimator is robust to different ground-truth mixing distributions and can also account for endogeneity. This paper was accepted by Serguei Netessine, operations management.
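A stripped-down sketch of the conditional-gradient idea, with one important simplification: the candidate support is fixed up front, whereas the paper generates support points on the fly. The data, candidate component means, and iteration count below are made up:

```python
import numpy as np

# Frank-Wolfe for the mixing proportions: maximize the log-likelihood
# f(alpha) = sum_i log((L @ alpha)[i]) over the probability simplex,
# where L[i, k] is the likelihood of observation i under candidate k.
def frank_wolfe_mixture(L, iters=2000):
    n, K = L.shape
    alpha = np.full(K, 1.0 / K)
    for t in range(iters):
        grad = L.T @ (1.0 / (L @ alpha))   # gradient of the log-likelihood
        k = int(np.argmax(grad))           # linear subproblem picks a vertex
        step = 2.0 / (t + 2)               # standard Frank-Wolfe step size
        alpha *= 1.0 - step                # move toward the chosen vertex
        alpha[k] += step
    return alpha

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(5, 1, 300)])  # 70/30 mix
mus = np.array([0.0, 5.0, 10.0])           # candidate component means
L = np.exp(-0.5 * (x[:, None] - mus[None, :]) ** 2) / np.sqrt(2 * np.pi)
alpha = frank_wolfe_mixture(L)
```

Each iteration adds mass at a single vertex of the simplex, which is why the method naturally produces sparse mixing proportions supported on few components.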


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Yi Xu ◽  
Lili Han

In this paper, we focus on a special nonconvex quadratic program whose feasible set is a structured nonconvex set. To solve this nonconvex program effectively, we construct a bilevel program in which the lower-level program is a convex program while the upper-level program is a small-scale nonconvex program. Utilizing some properties of the bilevel program, we propose a new algorithm to solve this special quadratic program. Finally, numerical results show that our new method is effective and efficient.
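A hedged toy of the bilevel decomposition, under the assumption that the structured nonconvex feasible set splits into a few convex pieces: the upper level enumerates the pieces (a small-scale nonconvex choice) and the lower level solves a convex program on each. The one-dimensional instance below is made up:

```python
# minimize (x - 2)^2 over the nonconvex set [-3, -1] U [1, 3]
def solve_piece(lo, hi, target=2.0):
    x = min(max(target, lo), hi)          # closed-form convex QP on an interval
    return x, (x - target) ** 2

pieces = [(-3.0, -1.0), (1.0, 3.0)]       # convex pieces of the feasible set
best_x, best_val = min((solve_piece(lo, hi) for lo, hi in pieces),
                       key=lambda t: t[1])
assert (best_x, best_val) == (2.0, 0.0)
```

The nonconvexity is confined to the discrete choice of piece, which is what keeps the upper level small.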


2018 ◽  
Vol 7 (3) ◽  
pp. 563-579
Author(s):  
Paul Hand ◽  
Babhru Joshi

Abstract We introduce a convex approach for mixed linear regression over d features. This approach is a second-order cone program, based on L1 minimization, which assigns an estimated regression coefficient in $\mathbb {R}^{d}$ to each data point. These estimates can then be clustered using, for example, k-means. For problems with two or more mixture classes, we prove that the convex program exactly recovers all of the mixture components in the noiseless setting under technical conditions that include a well-separation assumption on the data. Under these assumptions, recovery is possible if each class has at least d independent measurements. We also explore an iteratively reweighted least squares implementation of this method on real and synthetic data.
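A hedged sketch of the downstream clustering step only (the second-order cone program itself is omitted): in the noiseless setting the convex program recovers each point's true regression vector exactly, so the per-point estimates form tight groups, one per mixture component, and a single nearest-center pass labels them. The exact estimates are simulated here:

```python
import numpy as np

rng = np.random.default_rng(3)
beta = np.array([[1.0, 0.0], [0.0, 1.0]])   # two hypothetical components in R^2
labels = rng.integers(0, 2, 50)             # which component generated each point
estimates = beta[labels]                    # per-point estimates (exact, noiseless)

# one nearest-center assignment (a single k-means step) recovers the labels
centers = beta                              # suppose k-means has found the centers
dists = ((estimates[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
assign = dists.argmin(axis=1)
assert np.array_equal(assign, labels)
```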


Filomat ◽  
2018 ◽  
Vol 32 (19) ◽  
pp. 6809-6818 ◽  
Author(s):  
Xiao-Bing Li ◽  
Qi-Lin Wang

In this paper, the notion of the radius of robust feasibility is considered for a convex program with a general convex and compact uncertainty set. An exact formula for computing the radius of robust feasibility is given for this uncertain convex program. Moreover, we give a necessary and sufficient condition for the radius of robust feasibility of uncertain convex programs to be positive. We also give some examples to illustrate our results.
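A made-up one-dimensional example of the quantity being studied: the radius of robust feasibility is the largest uncertainty-set size for which a feasible point still exists under every realization of the uncertainty.

```python
# Constraints x >= u1 and x <= 1 - u2, with (u1, u2) ranging over a box of
# radius r around 0: a robustly feasible x exists iff r <= 1 - r, so the
# radius of robust feasibility of this toy program is 1/2.
def robust_feasible(r):
    return r <= 1.0 - r                   # worst case is u1 = r, u2 = r

radius = 0.5
assert robust_feasible(radius)
assert not robust_feasible(radius + 1e-9)
```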

