Linear Program Relaxation of Sparse Nonnegative Recovery in Compressive Sensing Microarrays

2012, Vol 2012, pp. 1-8
Author(s):  
Linxia Qin ◽  
Naihua Xiu ◽  
Lingchen Kong ◽  
Yu Li

Compressive sensing microarrays (CSM) are DNA-based sensors that operate using group testing and compressive sensing principles. Mathematically, one can cast the CSM as sparse nonnegative recovery (SNR), which seeks the sparsest solution to an underdetermined system of linear equations subject to a nonnegativity restriction. In this paper, we discuss the $\ell_1$ relaxation of the SNR. By defining nonnegative restricted isometry/orthogonality constants, we give a nonnegative restricted property condition which guarantees that the SNR and the $\ell_1$ relaxation share a common unique solution. In addition, we show that any solution to the SNR must be one of the extreme points of the underlying feasible set.
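
Because of the nonnegativity restriction, $\|x\|_1$ reduces to $\mathbf{1}^\top x$, so the $\ell_1$ relaxation is an ordinary linear program. The sketch below sets up that LP on a synthetic instance using scipy.optimize.linprog; the dimensions and data are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Synthetic CSM-style instance: underdetermined A (m < n) and a sparse,
# nonnegative ground truth x_true with k spikes.
m, n, k = 20, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
b = A @ x_true

# Under x >= 0 we have ||x||_1 = sum(x), so the l1 relaxation is the LP:
#   min 1^T x   s.t.   A x = b,  x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n,
              method="highs")
x_hat = res.x

print("recovery error:", np.linalg.norm(x_hat - x_true))
print("recovered support:", np.flatnonzero(x_hat > 1e-6))
```

A simplex-type solver returns a vertex of the feasible set $\{x : Ax = b,\ x \ge 0\}$, which squares with the paper's observation that any SNR solution must be an extreme point of that set.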

Author(s):  
Hale Gonce Kocken ◽  
Inci Albayrak

Fuzzy systems of linear equations (FSLE) play a major role in areas such as operational research, physics, statistics, economics, engineering, and the social sciences, since the parameters of an FSLE are not always exactly known or stable in real-life problems. This uncertainty may stem from a lack of exact information, changeable economic conditions, and so on. Although many review papers on solution methods for FSLE exist, they are not organized around applications. This chapter attempts to provide a short review of real-life applications of FSLE. In addition, for the common application areas, the fundamental models and solution methods are presented, drawing on the most cited and leading papers in the literature.
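
As a concrete illustration of one widely cited solution method (a sketch under my own assumptions, not necessarily one of the methods surveyed in this chapter), the code below implements the Friedman-Ming-Kandel embedding, which replaces an $n \times n$ fuzzy system by a $2n \times 2n$ crisp system coupling the lower and upper $\alpha$-cut bounds of the fuzzy right-hand side. The example system is illustrative.

```python
import numpy as np

def solve_fsle_friedman(A, y_lower, y_upper):
    """Friedman-Ming-Kandel embedding: solve A x = y for a fuzzy y given by
    its lower/upper alpha-cut bounds at a fixed alpha level.
    Returns (x_lower, x_upper) at the same alpha level."""
    n = A.shape[0]
    S = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for j in range(n):
            if A[i, j] >= 0:
                S[i, j] = A[i, j]           # positive entries: lower->lower
                S[i + n, j + n] = A[i, j]   # and upper->upper
            else:
                S[i, j + n] = -A[i, j]      # negative entries cross-couple
                S[i + n, j] = -A[i, j]      # lower and upper bounds
    # Unknown vector is (x_lower, -x_upper); right-hand side (y_lower, -y_upper).
    sol = np.linalg.solve(S, np.concatenate([y_lower, -y_upper]))
    return sol[:n], -sol[n:]

# Illustrative 2x2 system with fuzzy right-hand sides, evaluated at alpha = 0.
A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
x_lo, x_up = solve_fsle_friedman(A,
                                 y_lower=np.array([0.0, 4.0]),
                                 y_upper=np.array([2.0, 7.0]))
print("x lower bounds:", x_lo)   # componentwise lower alpha-cut bounds
print("x upper bounds:", x_up)
```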


2014, Vol 2014, pp. 1-8
Author(s):  
T. Yousefi Rezaii ◽  
S. Beheshti ◽  
M. A. Tinati

Solving an underdetermined system of linear equations is of great interest in signal processing applications, particularly when the underlying signal to be estimated is sparse. Recently, a new sparsity-encouraging penalty function was introduced, the Linearized Exponentially Decaying penalty (LED), which yields the sparsest solution of an underdetermined system of equations subject to minimization of the least-squares loss function. A sequential solver for the LED-based objective function, the LED-SAC algorithm, is available, but its solution path ignores the sparsity of the solution. In this paper, we present a new sparse solution that exploits the sparsity of the signal both in the optimization criterion (LED) and in its solution path, denoted Sparse SAC (2SAC). The resulting reconstruction method, LED-2SAC (LED-Sparse SAC), is consequently more efficient and considerably faster than the LED-SAC algorithm in terms of adaptability and convergence rate. In addition, the computational complexity of both LED-SAC and LED-2SAC is shown to be of order $\mathcal{O}(d^2)$, which is better than batch solutions such as LARS, whose complexity is of order $\mathcal{O}(d^3 + nd^2)$, where $d$ is the dimension of the sparse signal and $n$ is the number of observations.
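
The LED penalty itself is not reproduced in this abstract, so as a hedged stand-in the sketch below shows the same sequential, coordinate-at-a-time template with the familiar $\ell_1$ penalty, for which each coordinate update has a closed-form soft-threshold solution; the function names and dimensions are illustrative, not the authors'.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding operator, the closed-form coordinate minimizer."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def cd_l1(A, b, lam, n_sweeps=200):
    """Sequential coordinate descent for 0.5*||b - A x||^2 + lam*||x||_1.
    Maintaining the residual r = b - A x makes each update O(m)."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()
    col_sq = (A ** 2).sum(axis=0)        # precomputed ||a_j||^2
    for _ in range(n_sweeps):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]
            x_new = soft(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)  # incremental residual update
            x[j] = x_new
    return x

# Small underdetermined demo: 30 equations, 100 unknowns, 3-sparse signal.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
x_hat = cd_l1(A, A @ x_true, lam=0.1)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 1e-2))
```

Exploiting sparsity in the solution path, as 2SAC does, amounts to spending most updates on the few active coordinates rather than sweeping all $d$ of them uniformly.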


2021, Vol 64 (2), pp. 106-115
Author(s):  
Abolfazl Asudeh ◽  
Jees Augustine ◽  
Saravanan Thirumuruganathan ◽  
Azade Nazi ◽  
Nan Zhang ◽  
...  

The signal reconstruction problem (SRP) is an important optimization problem in which the objective is to identify the solution to an underdetermined system of linear equations that is closest to a given prior. It has a substantial number of applications in diverse areas such as network traffic engineering, medical image reconstruction, acoustics, and astronomy. Unfortunately, most common approaches for solving the SRP do not scale to large problem sizes. We propose a novel and scalable algorithm for solving this critical problem. Specifically, we make four major contributions. First, we propose a dual formulation of the problem and develop the DIRECT algorithm, which is significantly more efficient than the state of the art. Second, we show how adapting database techniques developed for scalable similarity joins provides a substantial speedup over DIRECT. Third, we describe several practical techniques that allow our algorithm to scale, on a single machine, to settings that are orders of magnitude larger than previously studied. Finally, we use the database techniques of materialization and reuse to extend our results to dynamic settings in which the input to the SRP changes. Extensive experiments on real-world and synthetic data confirm the efficiency, effectiveness, and scalability of our proposal.
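
As a hedged illustration of the problem statement (not of the paper's DIRECT algorithm), the Euclidean-distance variant $\min \|x - x_0\|_2$ subject to $Ax = b$ has a closed-form solution via its dual: with multipliers $\lambda$ solving $(AA^\top)\lambda = b - Ax_0$, the optimum is $x = x_0 + A^\top\lambda$.

```python
import numpy as np

def srp_l2(A, b, x0):
    """Closest point to the prior x0 on the affine set {x : A x = b},
    in Euclidean distance. From the KKT conditions: x = x0 + A^T lam
    with lam solving (A A^T) lam = b - A x0."""
    lam = np.linalg.solve(A @ A.T, b - A @ x0)
    return x0 + A.T @ lam

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 50))   # underdetermined: 10 equations, 50 unknowns
x0 = rng.standard_normal(50)        # given prior
b = A @ rng.standard_normal(50)     # consistent right-hand side
x = srp_l2(A, b, x0)
print("feasible:", np.allclose(A @ x, b))
```

The dense $m \times m$ solve above is precisely what stops scaling as the problem grows, which is the bottleneck the paper's techniques target.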


Author(s):  
Asaf Ferber ◽  
Vishesh Jain ◽  
Yufei Zhao

Many problems in combinatorial linear algebra require upper bounds on the number of solutions to an underdetermined system of linear equations $Ax = b$ , where the coordinates of the vector x are restricted to take values in some small subset (e.g. $\{\pm 1\}$ ) of the underlying field. The classical ways of bounding this quantity are to use either a rank bound observation due to Odlyzko or a vector anti-concentration inequality due to Halász. The former gives a stronger conclusion except when the number of equations is significantly smaller than the number of variables; even in such situations, the hypotheses of Halász’s inequality are quite hard to verify in practice. In this paper, using a novel approach to the anti-concentration problem for vector sums, we obtain new Halász-type inequalities that beat the Odlyzko bound even in settings where the number of equations is comparable to the number of variables. In addition to being stronger, our inequalities have hypotheses that are considerably easier to verify. We present two applications of our inequalities to combinatorial (random) matrix theory: (i) we obtain the first non-trivial upper bound on the number of $n\times n$ Hadamard matrices and (ii) we improve a recent bound of Deneanu and Vu on the probability of normality of a random $\{\pm 1\}$ matrix.
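
For intuition on the Odlyzko rank bound: a consistent system $Ax = b$ confines $x$ to an affine subspace of dimension $n - \operatorname{rank}(A)$, and a $\{\pm 1\}$ point in such a subspace is determined by its values on $n - \operatorname{rank}(A)$ free coordinates, so there are at most $2^{n - \operatorname{rank}(A)}$ such solutions. The brute-force check below is a small illustrative sketch, not anything from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Random integer system with more variables than equations.
m, n = 3, 8
A = rng.integers(-2, 3, size=(m, n))
x_star = rng.choice([-1, 1], size=n)
b = A @ x_star                      # guarantees at least one {+-1} solution

# Brute-force count of {+-1} solutions (2^n candidates, small n only).
count = sum(1 for x in itertools.product([-1, 1], repeat=n)
            if np.array_equal(A @ np.array(x), b))

# Odlyzko-style bound: at most 2^(n - rank A) solutions.
bound = 2 ** (n - np.linalg.matrix_rank(A))
print(count, "<=", bound, ":", count <= bound)
```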

