Implementation of l1 Magic and One Bit Compressed Sensing Based on Linear Programming Using Excel

Author(s): Indukala P.K., Lakshmi K., Sowmya V., Soman K.P.

Author(s): Ryuichi Ashino, Rémi Vaillancourt

In correcting a real linear code y = Bx + w by ℓ1 linear programming, where the encoding matrix B ∈ ℝ^(m×n) has full rank with m ≥ n and the noise w ∈ ℝ^m is a sparse random vector, it is numerically observed that the breakdown points of 50% success in recovering the input vector x ∈ ℝ^n from the corrupted oversampled measurement y lie on the Donoho–Tanner curves reflected in their midpoint. The curves of 50% success in solving underdetermined systems z = Aw by ℓ1 linear programming with uniformly distributed compressed sensing matrices A ∈ ℝ^(d×m), where d < m and w is a sparse vector, have been observed numerically and recently shown to coincide with the Donoho–Tanner curves, derived from geometric combinatorics, for normally distributed compressed sensing matrices A. When n ≤ m/2, correcting a linear code is faster if done directly by ℓ1 linear programming. When n > m/2, however, the problem can be transformed, to save computing time, into an underdetermined compressed sensing problem Aw = z := Ay for the syndrome z, using a full-rank matrix A ∈ ℝ^(d×m), d = m − n, such that AB = 0. For this purpose, to obtain equally high mean breakdown points by ℓ1 linear programming, one can use uniformly distributed random matrices A ∈ ℝ^((m−n)×m) together with matrices B ∈ ℝ^(m×n) whose orthonormal columns span the null space of A. Two exceptional cases have been found. Numerical results are collected in figures and tables.
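As a concrete illustration of the syndrome transformation described above, the following is a minimal Python sketch, assuming NumPy and SciPy's linprog (HiGHS backend) as the ℓ1 LP solver. The dimensions, sparsity level, and variable names are illustrative assumptions, not taken from the paper; for simplicity, A is built here to span the null space of Bᵀ (so that AB = 0), reversing the paper's construction in which B's orthonormal columns span the null space of A.

```python
# Sketch: syndrome-based decoding of y = Bx + w by l1 linear programming.
# Assumed/illustrative: sizes m, n, sparsity 8, seed 0, all variable names.
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 128, 96                    # oversampled code with n > m/2
B = rng.standard_normal((m, n))   # full-rank encoding matrix (a.s.)
x = rng.standard_normal(n)        # input vector

w = np.zeros(m)                   # sparse noise: corrupt a few entries
support = rng.choice(m, size=8, replace=False)
w[support] = rng.standard_normal(8)
y = B @ x + w                     # corrupted measurement

# Annihilator A with AB = 0: rows span the null space of B^T,
# so d = m - n and the syndrome satisfies z = A y = A w.
A = null_space(B.T).T             # shape (m - n, m)
z = A @ y

# Solve min ||w||_1 s.t. A w = z as an LP via the positive-part
# splitting w = w_plus - w_minus, both nonnegative.
c = np.ones(2 * m)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=z,
              bounds=[(0, None)] * (2 * m), method="highs")
w_hat = res.x[:m] - res.x[m:]

# Recover x by least squares on the denoised measurement y - w_hat.
x_hat, *_ = np.linalg.lstsq(B, y - w_hat, rcond=None)
print("recovery error:", np.linalg.norm(x_hat - x))
```

The equality-constrained ℓ1 minimization is posed as a linear program through the standard splitting into nonnegative parts, which is the usual reduction of ℓ1 problems to LP; below the breakdown point the recovered w_hat matches the true sparse noise and x is recovered to machine precision.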

