On the Trade-Off Between Bit Depth and Number of Samples for a Basic Approach to Structured Signal Recovery From $b$-Bit Quantized Linear Measurements

2018 ◽  
Vol 64 (6) ◽  
pp. 4159-4178 ◽  
Author(s):  
Martin Slawski ◽  
Ping Li
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Abdellatif Moudafi

The focus of this paper is on the Q-Lasso introduced by Alghamdi et al. (2013), which extends the Lasso of Tibshirani (1996). The closed convex subset Q of a Euclidean m-space, for m∈ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Building on recent work by Wang (2013), we are interested in two new penalty methods for the Q-Lasso relying on two types of difference-of-convex-functions (DC) programming, in which the DC objective functions are the difference of the ℓ1 and ℓσq norms and the difference of the ℓ1 and ℓr norms with r>1. By means of a generalized q-term shrinkage operator that exploits the special structure of the ℓσq norm, we design a proximal gradient algorithm for the DC ℓ1−ℓσq model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC ℓ1−ℓr model. Convergence results for both new algorithms are presented as well. We emphasize that extensive simulation results in the case Q={b} show that these two new algorithms offer improved signal-recovery performance and require reduced computational effort relative to state-of-the-art ℓ1 and ℓp (p∈(0,1)) models; see Wang (2013). We also devise two DC algorithms in the spirit of a paper that investigates an exact DC representation of the cardinality constraint and likewise uses the largest-q norm ℓσq; the numerical results reported there show the efficiency of the DC algorithm in comparison with methods using other penalty terms in the context of quadratic programming; see Jun-ya et al. (2017).
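The abstract does not spell out the generalized q-term shrinkage operator; the following is a minimal sketch of one natural reading of an ℓ1−ℓσq shrinkage step, where ℓσq (the largest-q norm) cancels the ℓ1 penalty on the q largest-magnitude entries, so only the remaining entries are soft-thresholded. The function name and this exact behavior are illustrative assumptions, not the paper's operator.

```python
import numpy as np

def q_term_shrinkage(x, lam, q):
    """Soft-threshold all but the q largest-magnitude entries of x.

    Illustrates the l1 - l_sigma_q idea: the top-q entries carry no
    net penalty (l1 and largest-q norm cancel), the rest are shrunk
    toward zero by lam.
    """
    x = np.asarray(x, dtype=float)
    # Ordinary soft-thresholding of every entry.
    out = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    # Indices of the q largest entries in magnitude: leave them unshrunk.
    top = np.argsort(np.abs(x))[-q:]
    out[top] = x[top]
    return out
```

For example, with q = 1 and lam = 0.4, the entry of largest magnitude passes through unchanged while all others are shrunk by 0.4.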


Electrician ◽  
2019 ◽  
Vol 13 (3) ◽  
Author(s):  
Umi Murdika ◽  
Lukmanul Hakim

Abstract — Compressive Sensing (CS) is a method that is widely applied in signal processing; its key strength is the ability to reconstruct a signal from limited input. This paper applies the Compressive Sensing method to the processing of discrete-time digital signals. The advantage of the CS method is that it estimates the original signal from a small number of incoherent linear measurements by exploiting the signal's sparsity. The CS approach models the signal as a linear combination of basis functions with a sparse coefficient vector. Signal recovery is performed by minimizing the ℓ1-norm of the resulting system of equations. This paper shows that, with this method applied to signal processing, a limited number of measurement signals suffices to recover a signal close to the original, with only a small difference between the recovered and the original signal.
Keywords — Compressive Sensing, L1-norm, discrete time signals, recovery signal, sparse signal.
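The ℓ1-minimization recovery step described above can be sketched with a standard iterative shrinkage-thresholding (ISTA) scheme for the Lasso formulation of basis pursuit. This is a generic illustration, not the paper's implementation; the matrix sizes, regularization weight, and iteration count below are arbitrary assumptions.

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=2000):
    """ISTA for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the quadratic term
        z = x - g / L                      # gradient step
        # Proximal step: soft-thresholding at lam / L.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

A typical usage: draw a random Gaussian measurement matrix with fewer rows than columns, measure a sparse signal, and recover it; with enough incoherent measurements relative to the sparsity level, the recovered signal is close to the original, matching the paper's observation.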


2019 ◽  
Vol 27 (1) ◽  
pp. 79-106
Author(s):  
Jan Kuske ◽  
Stefania Petra

Abstract The recovery of structured signals from a few linear measurements is a central topic in both compressed sensing (CS) and discrete tomography. In CS the signal structure is described by means of a low-complexity model, e.g., co-/sparsity. CS theory shows that any signal/image can be undersampled at a rate dependent on its intrinsic complexity. Moreover, in such undersampling regimes, the signal can be recovered by sparsity-promoting convex regularization such as ℓ1- or total variation (TV-) minimization. Precise relations between many low-complexity measures and the sufficient number of random measurements are known for many sparsity-promoting norms. However, a precise estimate of the undersampling rate for the TV seminorm is still lacking. We address this issue by: a) providing dual certificates testing uniqueness of a given cosparse signal with bounded signal values, b) approximating the undersampling rates via the statistical dimension of the TV descent cone and c) showing empirically that the provided rates also hold for tomographic measurements.
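As a small illustration of the co-/sparsity model underlying TV (generic, not code from the paper): in one dimension, the TV seminorm is the ℓ1 norm of the finite differences of the signal, and the cosparsity is the number of those differences that vanish, i.e., a piecewise-constant signal with few jumps is highly cosparse with respect to the difference operator.

```python
import numpy as np

def tv_seminorm(x):
    """1-D total variation: l1 norm of the forward differences of x."""
    return np.abs(np.diff(x)).sum()

def cosparsity(x, tol=1e-12):
    """Cosparsity w.r.t. the finite-difference operator D:
    the number of (near-)zero entries in Dx, i.e., the number of
    positions where the signal does not jump."""
    return int((np.abs(np.diff(x)) <= tol).sum())
```

For a piecewise-constant signal such as [0, 0, 2, 2, 2, 5], the TV seminorm sums the jump heights (2 + 3 = 5) while the cosparsity counts the three flat transitions.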


1982 ◽  
Vol 14 (2) ◽  
pp. 109-113 ◽  
Author(s):  
Suleyman Tufekci

2012 ◽  
Vol 11 (3) ◽  
pp. 118-126 ◽  
Author(s):  
Olive Emil Wetter ◽  
Jürgen Wegge ◽  
Klaus Jonas ◽  
Klaus-Helmut Schmidt

In most work contexts, several performance goals coexist, and conflicts and trade-offs between them can occur. Our paper is the first to contrast a dual goal for speed and accuracy with a single goal for speed on the same task. The Sternberg paradigm (Experiment 1, n = 57) and the d2 test (Experiment 2, n = 19) were used as performance tasks. Speed measures and error rates revealed in both experiments that dual as well as single goals increase performance by enhancing memory scanning. However, the single speed goal triggered a speed-accuracy trade-off, favoring speed over accuracy, whereas this was not the case with the dual goal. In difficult trials, dual goals slowed down scanning processes again so that errors could be prevented. This new finding is particularly relevant for security domains, where both aspects have to be managed simultaneously.

