One-bit compressive sensing of dictionary-sparse signals

2017 ◽  
Vol 7 (1) ◽  
pp. 83-104 ◽  
Author(s):  
R Baraniuk ◽  
S Foucart ◽  
D Needell ◽  
Y Plan ◽  
M Wootters

Abstract: One-bit compressive sensing has extended the scope of sparse recovery by showing that sparse signals can be accurately reconstructed even when their linear measurements are subject to the extreme quantization scenario of binary samples, where only the sign of each linear measurement is maintained. Existing results in one-bit compressive sensing rely on the assumption that the signals of interest are sparse in some fixed orthonormal basis. However, in most practical applications, signals are sparse with respect to an overcomplete dictionary, rather than a basis. There has already been a surge of activity to obtain recovery guarantees under such a generalized sparsity model in the classical compressive sensing setting. Here, we extend the one-bit framework to this important model, providing a unified theory of one-bit compressive sensing under dictionary sparsity. Specifically, we analyze several different algorithms, based on convex programming and on hard thresholding, and show that, under natural assumptions on the sensing matrix (satisfied by Gaussian matrices), these algorithms can efficiently recover analysis-dictionary-sparse signals in the one-bit model.
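To make the sign-only measurement model concrete, here is a minimal sketch (not the paper's algorithm) that assumes sparsity in the canonical basis rather than an overcomplete dictionary: a Gaussian matrix produces one-bit measurements y = sign(Ax), and the signal's direction is estimated by back-projection followed by hard thresholding, a simple baseline for this setting. All sizes and the helper name hard_threshold are illustrative.

```python
import numpy as np

# Minimal one-bit compressive sensing sketch: Gaussian measurements,
# sign-only quantization, and a back-projection + hard-thresholding estimate.
# Sizes and helper names are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n, m, s = 256, 1024, 5            # signal length, measurements, sparsity level

x = np.zeros(n)                   # s-sparse, unit-norm ground truth
support = rng.choice(n, s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))   # Gaussian sensing matrix
y = np.sign(A @ x)                # one-bit data: only the signs are kept

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

x_hat = hard_threshold(A.T @ y, s)
x_hat /= np.linalg.norm(x_hat)    # signs carry no scale information, so only
                                  # the direction x / ||x|| can be recovered
print("correlation with true signal:", float(x_hat @ x))
```

Since the measurements carry no amplitude information, any one-bit method can at best recover the direction of the signal, which is why the estimate is normalized.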

2011 ◽  
Vol 403-408 ◽  
pp. 1937-1940
Author(s):  
Zhi Zhen Zhu ◽  
Zhi Da Zhang ◽  
Fa Lin Liu ◽  
Bin Bing Li

In conventional synthetic aperture radar (SAR) systems, the resolution of the SAR image is constrained by the Nyquist sampling rate, so higher resolution requirements increase the demands on the A/D converter and on memory capacity. Compressive sensing (CS) is a possible solution to these problems: from the CS viewpoint, sparse signals can be reconstructed from a small set of their linear measurements. In this paper, we propose a CS-based SAR imaging strategy. The raw SAR data are first processed by CS in the range direction, using a random convolution matrix as the recovery matrix; after range reconstruction, conventional azimuth compression by matched filtering is carried out. Simulation results demonstrate the feasibility of the strategy. Compared with the conventional method, the proposed strategy produces lower sidelobes in the range direction and also exhibits a certain degree of noise robustness.
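As a rough illustration of the range-direction step described above (a toy sketch under assumed sizes, not the authors' SAR processing chain), the snippet below measures a sparse range profile through a random-convolution operator, i.e. circular convolution with a random ±1 sequence followed by random subsampling, and reconstructs it with orthogonal matching pursuit; the subsequent azimuth matched filtering is omitted.

```python
import numpy as np

# Toy random-convolution measurement of a sparse range profile, recovered
# with orthogonal matching pursuit (OMP). All sizes are illustrative.
rng = np.random.default_rng(1)
n, m, s = 512, 128, 8                      # range bins, kept samples, scatterers

x = np.zeros(n)                            # sparse range profile
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

h = rng.choice([-1.0, 1.0], size=n)        # random convolution waveform
C = np.array([np.roll(h, k) for k in range(n)])  # circulant convolution matrix
rows = rng.choice(n, m, replace=False)     # random subsampling of the output
Phi = C[rows]                              # m x n measurement / recovery matrix
y = Phi @ x

def omp(Phi, y, k):
    """Plain orthogonal matching pursuit for a k-sparse solution."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, s)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```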


Author(s):  
Nan Meng ◽  
Yun-Bin Zhao

Abstract: Sparse signals can be reconstructed by algorithms that merge a traditional nonlinear optimization method with a thresholding technique. Unlike existing thresholding methods, the optimal k-thresholding technique recently proposed by Zhao (SIAM J Optim 30(1):31–55, 2020) simultaneously minimizes an error metric for the problem and thresholds the iterates generated by the classic gradient method. In this paper, we propose the Newton-type optimal k-thresholding (NTOT) algorithm, motivated by the strong performance of both Newton-type methods and the optimal k-thresholding technique for signal recovery. The guaranteed performance (including convergence) of the proposed algorithms is established under suitable choices of the algorithmic parameters and the restricted isometry property (RIP) of the sensing matrix, which has been widely used in the analysis of compressive sensing algorithms. Simulation results on synthetic signals indicate that the proposed algorithms are stable and efficient for signal recovery.
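For context, the sketch below shows the classic gradient-plus-hard-thresholding template (iterative hard thresholding) that optimal k-thresholding and NTOT refine; it is not the NTOT algorithm itself, and the step size and problem sizes are illustrative assumptions.

```python
import numpy as np

# Baseline iterative hard thresholding (IHT): gradient step on ||y - Ax||^2,
# then keep the k largest-magnitude entries. Not the NTOT algorithm; the
# step size and sizes below are illustrative assumptions.
rng = np.random.default_rng(2)
n, m, k = 200, 80, 6

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    idx = np.argsort(np.abs(v))[-k:]
    out = np.zeros_like(v)
    out[idx] = v[idx]
    return out

mu = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative step size
x = np.zeros(n)
for _ in range(300):
    x = hard_threshold(x + mu * A.T @ (y - A @ x), k)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Roughly speaking, optimal k-thresholding replaces the naive "keep the k largest entries" rule with a selection of k entries chosen to minimize the residual, and the Newton-type variant studied in the paper replaces the plain gradient direction with a Newton-type search direction.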


2014 ◽  
Vol 35 (3) ◽  
pp. 568-574 ◽  
Author(s):  
Zhi-zhen Zhu ◽  
Zhi-da Zhang ◽  
Fa-lin Liu ◽  
Bin-bing Li ◽  
Chong-bin Zhou

1989 ◽  
Vol 21 (8-9) ◽  
pp. 1057-1064 ◽  
Author(s):  
Vijay Joshi ◽  
Prasad Modak

Waste load allocation for rivers has been a topic of growing interest. Dynamic programming based algorithms are particularly attractive in this context and are widely reported in the literature. However, codes developed for dynamic programming are complex, require substantial computer resources and, importantly, do not allow user interaction. Further, there is always resistance to applying mathematical programming based algorithms in practice, so there has always been a gap between theory and practice in systems analysis for water quality management. This paper presents various heuristic algorithms to bridge this gap, with supporting comparisons against dynamic programming based algorithms. These heuristics make good use of the insight into the system's behaviour gained through experience, a process akin to the one adopted by field personnel, and can therefore be readily understood by a user familiar with the system. They also allow user preferences in decision making via on-line interaction. Experience has shown that these heuristics are well founded and compare very favourably with sophisticated dynamic programming algorithms. Two examples are included that demonstrate the success of the heuristic algorithms.


Axioms ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 41
Author(s):  
Alexander Šostak ◽  
Ingrīda Uļjane ◽  
Māris Krastiņš

Noticing certain limitations of concept lattices in the fuzzy context, especially in view of their practical applications, in this paper we propose a more general approach based on what we call graded fuzzy preconcept lattices. We believe that this approach is more adequate for dealing with fuzzy information than the one based on fuzzy concept lattices. We consider two possible gradation methods for a fuzzy preconcept lattice, an inner one called D-gradation and an outer one called M-gradation, study their properties, and illustrate them with a series of examples, in particular of a practical nature.
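For readers unfamiliar with the terminology, the following is a background sketch in standard fuzzy formal concept analysis notation (assumed here for illustration; the paper's D- and M-gradations are not reproduced).

```latex
% Standard fuzzy formal concept analysis background (assumed notation, not
% the paper's D-/M-gradations): a fuzzy context is a triple (X, Y, R) with
% R : X \times Y \to L for a complete residuated lattice L, and the
% derivation operators of fuzzy sets A \in L^X and B \in L^Y are
\[
A^{\uparrow}(y) \;=\; \bigwedge_{x \in X} \bigl(A(x) \rightarrow R(x,y)\bigr),
\qquad
B^{\downarrow}(x) \;=\; \bigwedge_{y \in Y} \bigl(B(y) \rightarrow R(x,y)\bigr).
\]
% A pair (A, B) is a fuzzy concept when A^{\uparrow} = B and B^{\downarrow} = A;
% a preconcept only requires the weaker containment
\[
A \;\le\; B^{\downarrow} \quad\text{(equivalently } B \le A^{\uparrow}\text{)},
\]
% so the preconcept lattice is larger than the concept lattice and leaves
% room for further grading of its elements.
```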


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Hussein Soffar ◽  
Mohamed F. Alsawy

Abstract: Background: Neuronavigation is a very beneficial tool in modern neurosurgical practice. However, neuronavigation is not available in most hospitals in our country, raising the question of its importance in localizing calvarial extra-axial lesions and of the extent to which it is safe to operate without it. Methods: We studied twenty patients with calvarial extra-axial lesions who underwent surgical interventions. All lesions were located preoperatively with both neuronavigation and the usual linear measurements. The two methods were compared regarding the time required to localize the tumor and the accuracy with which each anticipated the actual center of the tumor. Results: The mean error in distance between the planned and actual center of the tumor was 6.50 ± 1.762 mm with the conventional method, whereas it was 3.85 ± 1.309 mm with the IGS method. Much more time was consumed by the neuronavigation method, including booting, registration, and positioning; a statistically significant difference was found between the mean times of the conventional and IGS methods (2.05 ± 0.826 and 24.90 ± 1.334, respectively), P-value < 0.001. Conclusion: In a setting of limited resources, the linear measurement localization method seems to have acceptable accuracy in localizing calvarial extra-axial lesions, and it saves more time than the neuronavigation method.


Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. O91-O104 ◽  
Author(s):  
Georgios Pilikos ◽  
A. C. Faul

Extracting the maximum possible information from the available measurements is a challenging task but is required when sensing seismic signals in inaccessible locations. Compressive sensing (CS) is a framework that allows reconstruction of sparse signals from fewer measurements than conventional sampling rates require. In seismic CS, the use of sparse transforms has had some success; however, defining fixed basis functions is not trivial given the plethora of possibilities. Furthermore, the assumption that every instance of a seismic signal is sparse in any acquisition domain under the same transformation is limiting. We use beta process factor analysis (BPFA) to learn sparse transforms for seismic signals in the time slice and shot record domains from available data, and we use them as dictionaries for CS and denoising. Algorithms that use predefined basis functions are compared against BPFA, with BPFA obtaining state-of-the-art reconstructions, illustrating the importance of decomposing seismic signals into learned features.
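As a rough illustration of the learned-dictionary denoising idea (BPFA itself is a nonparametric Bayesian factor model and is not part of standard libraries), the sketch below substitutes ordinary mini-batch dictionary learning with OMP sparse coding on patches of a synthetic section; the data, sizes, and parameters are illustrative assumptions, not the paper's method or results.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

# Learn a patch dictionary from a noisy synthetic "section" and denoise it by
# sparse coding; a stand-in for BPFA-style learned dictionaries.
rng = np.random.default_rng(3)

# Synthetic section: a few dipping events plus noise, standing in for data.
t = np.linspace(0, 1, 128)
section = np.zeros((128, 64))
for trace in range(64):
    for t0, slope in [(0.2, 0.002), (0.5, -0.003), (0.7, 0.001)]:
        section[:, trace] += np.exp(-((t - (t0 + slope * trace)) ** 2) / 1e-4)
noisy = section + 0.2 * rng.standard_normal(section.shape)

patches = extract_patches_2d(noisy, (8, 8))
X = patches.reshape(patches.shape[0], -1)
mean = X.mean(axis=1, keepdims=True)       # remove per-patch DC component

dico = MiniBatchDictionaryLearning(
    n_components=64,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=3,
    random_state=0,
)
codes = dico.fit(X - mean).transform(X - mean)
denoised_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)

print("noisy RMSE:   ", float(np.sqrt(np.mean((noisy - section) ** 2))))
print("denoised RMSE:", float(np.sqrt(np.mean((denoised - section) ** 2))))
```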


2019 ◽  
Vol 35 (2) ◽  
pp. 1045-1051 ◽  
Author(s):  
Clotaire Michel ◽  
Blaise Duvernay ◽  
Ehrfried Kölz ◽  
Navid Jamali ◽  
Pierino Lestuzzi

The framework of Galanis et al. (2018) for evaluating the benefit of seismic upgrading is compared with the one present in the Swiss seismic code for existing buildings, in force since 2004 and updated in 2017. To illustrate the comparison, the example building of Galanis et al. (2018) in Zurich is analyzed following the Swiss code. It is shown that the concept of Degree of Seismic Upgrade is not relevant for practical applications. More generally, the approach of Galanis et al. (2018) would be better suited to a risk-based framework (like the Swiss code) than to the performance-based framework they followed. For existing buildings, we argue that it is appropriate to define the retrofitting strategy based on the absolute level of risk, whereas targeting the safety level of the design code is rarely cost-efficient.


Author(s):  
Ljubiša Stanković ◽  
Miloš Daković ◽  
Isidora Stanković
