additive distortion
Recently Published Documents


TOTAL DOCUMENTS: 27 (five years: 9)
H-INDEX: 7 (five years: 1)

Algorithmica, 2021
Author(s): Fedor V. Fomin, Petr A. Golovach, William Lochet, Pranabendu Misra, Saket Saurabh, ...

Abstract: We initiate the parameterized complexity study of minimum t-spanner problems on directed graphs. For a positive integer t, a multiplicative t-spanner of a (directed) graph G is a spanning subgraph H such that the distance between any two vertices in H is at most t times the distance between these vertices in G; that is, H keeps the distances in G up to the distortion (or stretch) factor t. An additive t-spanner is a spanning subgraph that keeps the distances up to the additive distortion parameter t, that is, the distances in H and G differ by at most t. The task of Directed Multiplicative Spanner is, given a directed graph G with m arcs and positive integers t and k, to decide whether G has a multiplicative t-spanner with at most $m-k$ arcs. Similarly, Directed Additive Spanner asks whether G has an additive t-spanner with at most $m-k$ arcs. We show that (i) Directed Multiplicative Spanner admits a polynomial kernel of size $\mathcal{O}(k^4t^5)$ and can be solved in randomized $(4t)^k\cdot n^{\mathcal{O}(1)}$ time, (ii) the weighted variant of Directed Multiplicative Spanner can be solved in $k^{2k}\cdot n^{\mathcal{O}(1)}$ time on directed acyclic graphs, and (iii) Directed Additive Spanner is $\mathsf{W}[1]$-hard when parameterized by k for every fixed $t\ge 1$, even when the input graphs are restricted to directed acyclic graphs. The latter claim contrasts with the recent result of Kobayashi from STACS 2020 that the problem for undirected graphs is $\mathsf{FPT}$ when parameterized by t and k.
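The two spanner conditions stated in the abstract are easy to check operationally. Below is a minimal Python sketch, not taken from the paper, that tests whether a spanning subgraph H is a multiplicative or additive t-spanner of an unweighted directed graph G; the adjacency-dict representation, the helper names, and the toy graphs are illustrative assumptions.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from `source` in a directed graph
    given as an adjacency dict {vertex: list of out-neighbours}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_spanner(G, H, t, additive=False):
    """Check the spanner condition for every ordered pair (u, v) reachable in G:
      multiplicative: dist_H(u, v) <= t * dist_G(u, v)
      additive:       dist_H(u, v) <= dist_G(u, v) + t
    """
    for u in G:
        dist_g = bfs_distances(G, u)
        dist_h = bfs_distances(H, u)
        for v, d in dist_g.items():
            if v == u:
                continue
            bound = d + t if additive else t * d
            if dist_h.get(v, float("inf")) > bound:
                return False
    return True

# Toy instance: deleting the shortcut arc (0, 2) raises the 0 -> 2 distance from 1 to 2.
G = {0: [1, 2], 1: [2], 2: []}
H = {0: [1], 1: [2], 2: []}
print(is_spanner(G, H, t=2))                 # True: multiplicative 2-spanner
print(is_spanner(G, H, t=1, additive=True))  # True: additive 1-spanner
```

The decision problems in the abstract then ask whether such an H exists with at most $m-k$ arcs, i.e. with at least k of the m arcs deleted.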


Author(s): Fabian Jaensch, Peter Jung

Abstract: We consider a structured estimation problem where an observed matrix is assumed to be generated as an $s$-sparse linear combination of $N$ given $n\times n$ positive semidefinite matrices. Recovering the unknown $N$-dimensional, $s$-sparse weights from noisy observations is an important problem in various fields of signal processing and also a relevant preprocessing step in covariance estimation. We will present related recovery guarantees and focus on the case of non-negative weights. The problem is formulated as a convex program and can be solved without further tuning. Such robust, non-Bayesian and parameter-free approaches are important for applications where prior distributions and further model parameters are unknown. Motivated by explicit applications in wireless communication, we will consider the particular rank-one case, where the known matrices are outer products of i.i.d. zero-mean sub-Gaussian $n$-dimensional complex vectors. We show that, for given $n$ and $N$, one can recover non-negative $s$-sparse weights with a parameter-free convex program once $s\leq O(n^2/\log^2(N/n^2))$. Our error estimate scales linearly in the instantaneous noise power, and the convex algorithm does not need prior bounds on the noise. Such estimates are important if the magnitude of the additive distortion depends on the unknown itself.
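As an illustration of the measurement model only, and not of the authors' exact convex program, the following sketch builds a toy rank-one instance and recovers the non-negative sparse weights with non-negative least squares, one standard parameter-free convex program. The dimensions, noise level, recovery threshold, and the SciPy solver choice are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, N, s = 8, 40, 3   # hypothetical toy sizes, not from the paper

# Known rank-one PSD matrices A_j = a_j a_j^H with i.i.d. complex Gaussian a_j.
a = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
A = np.stack([np.outer(v, v.conj()) for v in a])          # shape (N, n, n)

# Ground-truth non-negative s-sparse weights and a noisy observation Y.
x_true = np.zeros(N)
x_true[rng.choice(N, size=s, replace=False)] = rng.uniform(1.0, 2.0, size=s)
noise = 0.01 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Y = np.tensordot(x_true, A, axes=1) + noise

# Parameter-free recovery: non-negative least squares on the realified,
# vectorized linear system; no tuning parameters or noise bounds are used.
M = A.reshape(N, -1).T                                     # (n^2, N), complex
M_real = np.vstack([M.real, M.imag])
y_real = np.concatenate([Y.ravel().real, Y.ravel().imag])
x_hat, _ = nnls(M_real, y_real)

print("support recovered:", np.nonzero(x_hat > 1e-3)[0], "true:", np.nonzero(x_true)[0])
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The point of the sketch is only the shape of the problem: the observation is a single matrix, the dictionary consists of outer products of known random vectors, and the weights are recovered by a convex program with no tunable parameters.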

