Bayesian Compressive Sensing Based Countermeasure Scheme Against the Interrupted Sampling Repeater Jamming

Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3279 ◽  
Author(s):  
Huan ◽  
Dai ◽  
Luo ◽  
Ai

The interrupted sampling repeater jamming (ISRJ) is considered an efficient deception jamming method against coherent radar detection. However, current countermeasure methods against ISRJ may fail to detect weak echoes, particularly when the transmitting power of the jammer is relatively high. In this paper, we propose a novel countermeasure scheme against ISRJ based on Bayesian compressive sensing (BCS), in which a stable target signal can be reconstructed over a relatively large range of signal-to-noise ratio (SNR) for both single-target and multi-target scenarios. By deriving the ISRJ jamming strategy, only the unjammed discontinuous time segments are extracted to build a sparse target model for the reconstruction algorithm. An efficient alternating iteration is applied to solve the maximum a posteriori (MAP) estimate of the sparse target model. Simulation results demonstrate the robustness of the proposed scheme at low SNR or a large jammer ratio. Moreover, compared with the traditional FFT and the greedy sparsity adaptive matching pursuit (SAMP) algorithm, the proposed algorithm significantly improves both the grating-lobe level and the target detection/false-detection probability.
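A minimal sketch of the segment-extraction and sparse-reconstruction idea, assuming a toy Fourier-style range dictionary and a basic sparse Bayesian learning loop in place of the paper's exact alternating MAP iteration; the jammer duty cycle, noise level and dictionary are illustrative assumptions.

```python
import numpy as np

# Keep only the unjammed time segments of the received pulse, build a sparse
# range model y = Phi x, and recover x with a simple sparse Bayesian learning
# (evidence-maximisation) loop. Not the authors' exact algorithm.
rng = np.random.default_rng(0)
N, K = 256, 3                                            # fast-time samples, targets
t = np.arange(N)
A = np.exp(2j * np.pi * np.outer(t, np.arange(N)) / N)   # toy range dictionary

x_true = np.zeros(N, dtype=complex)
x_true[rng.choice(N, K, replace=False)] = 1.0
jam_mask = (t // 32) % 2 == 1                            # ISRJ: alternate slices retransmitted
clean = ~jam_mask                                        # unjammed discontinuous segments

noise = 0.05 * (rng.standard_normal(clean.sum()) + 1j * rng.standard_normal(clean.sum()))
y = (A @ x_true)[clean] + noise
Phi = A[clean]

# Alternate between the Gaussian posterior of x and per-coefficient precisions.
alpha = np.ones(N)                                       # prior precisions of x
sigma2 = 0.05 ** 2                                       # assumed noise power
for _ in range(30):
    Sigma = np.linalg.inv(np.diag(alpha) + Phi.conj().T @ Phi / sigma2)
    mu = Sigma @ (Phi.conj().T @ y) / sigma2             # MAP / posterior mean of x
    alpha = 1.0 / (np.abs(mu) ** 2 + np.real(np.diag(Sigma)) + 1e-12)

print("strongest range cells:", np.sort(np.argsort(np.abs(mu))[-K:]))
print("true range cells:     ", np.sort(np.nonzero(x_true)[0]))
```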

2018 ◽  
Vol 173 ◽  
pp. 03073
Author(s):  
Liu Yang ◽  
Ren Qinghua ◽  
Xu Bingzheng ◽  
Li Xiazhao

To address the problem that wideband compressive sensing reconstruction algorithms cannot accurately recover the signal under blind sparsity in the low-SNR environment of a transform domain communication system, this paper uses the band occupancy rate to roughly estimate the sparsity level and, at the same time, uses a residual-ratio threshold as the iteration termination condition to reduce the influence of system noise. On this basis, an ICoSaMP (Improved Compressive Sampling Matching Pursuit) algorithm is proposed. Simulation results show that, compared with the CoSaMP algorithm, ICoSaMP increases the probability of reconstruction at the same SNR and the same sparsity level, and reduces the mean square error under blind sparsity.
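A sketch of the two modifications described above grafted onto plain CoSaMP: a coarse sparsity estimate derived from an assumed band-occupancy rate, and termination once the residual ratio stops improving. The occupancy value, threshold and measurement setup are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def icosamp(Phi, y, occupancy=0.1, ratio_tol=1e-2, max_iter=50):
    M, N = Phi.shape
    K = max(1, int(round(occupancy * N)))          # rough sparsity from band occupancy
    x = np.zeros(N)
    residual = y.copy()
    prev_ratio = np.inf
    for _ in range(max_iter):
        proxy = Phi.T @ residual
        candidates = np.union1d(np.argsort(np.abs(proxy))[-2 * K:], np.nonzero(x)[0])
        coef, *_ = np.linalg.lstsq(Phi[:, candidates], y, rcond=None)
        full = np.zeros(N)
        full[candidates] = coef
        keep = np.argsort(np.abs(full))[-K:]       # prune back to K entries
        x = np.zeros(N)
        x[keep] = full[keep]
        residual = y - Phi @ x
        ratio = np.linalg.norm(residual) / np.linalg.norm(y)
        if abs(prev_ratio - ratio) < ratio_tol:    # residual-ratio stopping rule
            break
        prev_ratio = ratio
    return x

# Toy usage: recover a sparse spectrum from compressive measurements.
rng = np.random.default_rng(1)
N, M = 256, 96
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N); x_true[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)
x_hat = icosamp(Phi, Phi @ x_true + 0.01 * rng.standard_normal(M), occupancy=8 / N)
print("missed support entries:", np.setdiff1d(np.nonzero(x_true)[0], np.nonzero(x_hat)[0]))
```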


1997 ◽  
Vol 503 ◽  
Author(s):  
B. L. Evans ◽  
J. B. Martin ◽  
L. W. Burggraf

ABSTRACT The viability of a Compton scattering tomography system for nondestructively inspecting thin, low-Z samples for corrosion is examined. This technique differs from conventional x-ray backscatter NDI in that it does not rely on narrow collimation of the source and detectors to examine small volumes in the sample. Instead, photons of a single energy are backscattered from the sample, their scattered energy spectra are measured at multiple detector locations, and these spectra are then used to reconstruct an image of the object. This multiplexed Compton scatter tomography technique interrogates multiple volume elements simultaneously. Thin samples less than 1 cm thick and made of low-Z materials are best imaged with gamma rays at or below 100 keV. At these energies, Compton line broadening becomes an important resolution limitation. An analytical model has been developed to simulate the signals collected in a demonstration system consisting of an array of planar high-purity germanium detectors. A technique for deconvolving the effects of Compton broadening and detector energy resolution from signals with additive noise is also presented. A filtered backprojection image reconstruction algorithm, similar to that used in conventional transmission computed tomography, is developed. A simulation of a 360-degree inspection gives distortion-free results. In a simulation of a single-sided inspection, a 5 mm × 5 mm corrosion flaw with 50% density is readily identified in a 1-cm-thick aluminum phantom when the signal-to-noise ratio in the data exceeds 28.
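The abstract mentions deconvolving Compton line broadening and detector energy resolution from noisy spectra. One standard way to frame that step is Wiener deconvolution with a known blur kernel; the Gaussian kernel width, line position and noise-to-signal ratio below are illustrative assumptions, not the paper's values or its actual method.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, noise_to_signal=1e-2):
    """Remove a known 1-D blur `kernel` from `measured` (an energy spectrum)."""
    H = np.fft.rfft(kernel, n=measured.size)
    Y = np.fft.rfft(measured)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.fft.irfft(G * Y, n=measured.size)

energy = np.linspace(0.0, 100.0, 512)                     # keV grid
true_line = np.exp(-0.5 * ((energy - 60.0) / 0.4) ** 2)   # narrow scatter line
blur = np.exp(-0.5 * ((energy - 50.0) / 3.0) ** 2)        # broadening + detector resolution
blur /= blur.sum()

rng = np.random.default_rng(2)
measured = np.convolve(true_line, blur, mode="same") + 0.01 * rng.standard_normal(512)
restored = wiener_deconvolve(measured, np.fft.ifftshift(blur))
print("half-maximum width (bins) before/after:",
      int((measured > measured.max() / 2).sum()), int((restored > restored.max() / 2).sum()))
```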


2021 ◽  
Vol 17 (1-2) ◽  
pp. 3-14
Author(s):  
Stathis C. Stiros ◽  
F. Moschas ◽  
P. Triantafyllidis

GNSS technology (known especially for GPS satellites) for the measurement of deflections has proved very efficient and useful in bridge structural monitoring, even for short stiff bridges, especially after the advent of 100 Hz GNSS sensors. Mode computation from dynamic deflections has been proposed as one of the applications of this technology. Apart from formal modal analyses with GNSS input, and from spectral analysis of controlled free attenuating oscillations, it has been argued that simple spectra of deflections can define more than one modal frequency. To test this scenario, we analyzed 21 controlled excitation events from a bridge monitoring survey, focusing on lateral and vertical deflections recorded both by GNSS and by an accelerometer. These events contain a transient and a following oscillation, and they are preceded and followed by intervals of quiescence and ambient vibrations. Spectra for each event, for the lateral and the vertical axis of the bridge, and for each instrument (GNSS, accelerometer) were computed, normalized to their maximum value, and plotted one over the other to produce a single composite spectrum for each of the four sets. In each of these four sets, the true value of the modal frequency, derived from free attenuating oscillations, was also marked. It was found that for high-SNR (signal-to-noise ratio) deflections, spectral peaks in both acceleration and displacement spectra differ by up to 0.3 Hz from the true value. For low SNR, deflection spectra do not match the true frequency, but acceleration spectra provide a low-precision estimate of it. This is because various excitation effects (traffic, wind, etc.) contribute numerous peaks over a wide range of frequencies. Reliable estimates of modal frequencies can hence be derived from deflection spectra only if excitation frequencies (mostly traffic and wind) can be filtered out along with most measurement noise, on the basis of additional data.
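A sketch of the composite-spectrum procedure described above: compute the amplitude spectrum of each excitation event, normalise it to its maximum, and overlay the results so that peaks shared across events can be compared against the modal frequency known from free attenuating oscillations. The sampling rate, the assumed modal frequency and the synthetic event records are placeholders.

```python
import numpy as np

fs = 100.0                                         # 100 Hz GNSS deflection series (assumed)
rng = np.random.default_rng(3)
t = np.arange(2048) / fs
true_mode_hz = 2.5                                 # placeholder modal frequency
events = [np.exp(-0.3 * t) * np.sin(2 * np.pi * true_mode_hz * t)
          + 0.5 * rng.standard_normal(t.size) for _ in range(21)]

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
stack = []
for record in events:
    spectrum = np.abs(np.fft.rfft(record * np.hanning(record.size)))
    stack.append(spectrum / spectrum.max())        # normalise each event to its maximum
composite = np.max(stack, axis=0)                  # envelope of the 21 overlaid spectra

print("dominant composite peak: %.2f Hz (placeholder true mode %.2f Hz)"
      % (freqs[composite.argmax()], true_mode_hz))
```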


2021 ◽  
Vol 11 (4) ◽  
pp. 1435
Author(s):  
Xue Bi ◽  
Lu Leng ◽  
Cheonshik Kim ◽  
Xinwen Liu ◽  
Yajun Du ◽  
...  

Image reconstruction based on sparse constraints is an important research topic in compressed sensing. Sparsity adaptive matching pursuit (SAMP) is a greedy pursuit reconstruction algorithm that reconstructs signals without prior information on the sparsity level and potentially offers better reconstruction performance than other greedy pursuit algorithms. However, SAMP remains sensitive to the choice of step size at high sub-sampling ratios. To solve this problem, this paper proposes a constrained backtracking matching pursuit (CBMP) algorithm for image reconstruction. The composite strategy, which combines two kinds of constraints, effectively controls the increment of the estimated sparsity level at different stages and accurately estimates the true support set of images. Based on an analysis of the relationship between the signal and the measurements, an energy criterion is proposed as one constraint, and the four-to-one rule is improved as an extra constraint. Comprehensive experimental results demonstrate that the proposed CBMP yields better performance and greater stability than other greedy pursuit algorithms for image reconstruction.
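For context, a minimal sketch of the baseline SAMP stage logic, whose fixed step size is exactly the sensitivity that CBMP targets. This is not the proposed CBMP (its energy criterion and improved four-to-one rule are not reproduced here), and the stopping tolerance and test problem are illustrative assumptions.

```python
import numpy as np

def samp(Phi, y, step=4, tol=1e-6, max_stage=50):
    """Plain sparsity adaptive matching pursuit with a fixed stage step size."""
    M, N = Phi.shape
    L = step                                          # current estimated sparsity level
    support = np.array([], dtype=int)
    residual = y.copy()
    for _ in range(max_stage):
        proxy = np.abs(Phi.T @ residual)
        candidates = np.union1d(support, np.argsort(proxy)[-L:])
        coef, *_ = np.linalg.lstsq(Phi[:, candidates], y, rcond=None)
        best = candidates[np.argsort(np.abs(coef))[-L:]]     # keep the L largest entries
        coef_best, *_ = np.linalg.lstsq(Phi[:, best], y, rcond=None)
        new_residual = y - Phi[:, best] @ coef_best
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            L += step                                 # stage switch: grow the sparsity estimate
        else:
            support, residual = best, new_residual
            if np.linalg.norm(residual) < tol:
                break
    x = np.zeros(N)
    if support.size:
        sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x[support] = sol
    return x

rng = np.random.default_rng(4)
Phi = rng.standard_normal((64, 256)) / 8.0
x_true = np.zeros(256); x_true[rng.choice(256, 6, replace=False)] = 1.0
x_hat = samp(Phi, Phi @ x_true)
print(np.nonzero(np.abs(x_hat) > 1e-6)[0])            # should match the true support here
```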


2021 ◽  
pp. 197140092110087
Author(s):  
Andrea De Vito ◽  
Cesare Maino ◽  
Sophie Lombardi ◽  
Maria Ragusi ◽  
Cammillo Talei Franzesi ◽  
...  

Background and purpose To evaluate the added value of a model-based iterative reconstruction algorithm in the assessment of acute traumatic brain lesions on emergency non-enhanced computed tomography, in comparison with a standard hybrid iterative reconstruction approach. Materials and methods We retrospectively evaluated a total of 350 patients who underwent a 256-row non-enhanced computed tomography scan at the emergency department for brain trauma. Images were reconstructed with both the hybrid and the model-based iterative algorithm. Two radiologists, blinded to clinical data, recorded the presence, nature, number, and location of acute findings. Subjective image quality was assessed using a 4-point scale. Objective image quality was determined by computing the signal-to-noise ratio and contrast-to-noise ratio. The agreement between the two readers was evaluated using k-statistics. Results Subjective image analysis using model-based iterative reconstruction gave a higher detection rate of acute trauma-related lesions in comparison to hybrid iterative reconstruction (extradural haematomas 116 vs. 68, subdural haemorrhages 162 vs. 98, subarachnoid haemorrhages 118 vs. 78, parenchymal haemorrhages 94 vs. 64, contusive lesions 36 vs. 28, diffuse axonal injuries 75 vs. 31; all P<0.001). Inter-observer agreement was moderate to excellent for all injuries (extradural haematomas k=0.79, subdural haemorrhages k=0.82, subarachnoid haemorrhages k=0.91, parenchymal haemorrhages k=0.98, contusive lesions k=0.88, diffuse axonal injuries k=0.70). Quantitatively, the mean standard deviation of the thalamus on model-based iterative reconstruction images was lower than on hybrid iterative images (2.12 ± 0.92 vs. 3.52 ± 1.10; P=0.030), while the contrast-to-noise ratio and signal-to-noise ratio were significantly higher (contrast-to-noise ratio 3.06 ± 0.55 vs. 1.55 ± 0.68, signal-to-noise ratio 14.51 ± 1.78 vs. 8.62 ± 1.88; P<0.0001). Median subjective image quality scores for model-based iterative reconstruction were also significantly higher (P=0.003). Conclusion Model-based iterative reconstruction, offering higher image quality at a thinner slice thickness, allowed the identification of a higher number of acute traumatic lesions than hybrid iterative reconstruction, with a significant reduction of noise.
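The objective image-quality metrics used above can be written in a few lines: SNR of a homogeneous region (e.g. the thalamus) and CNR against an adjacent background region. The ROI samples below are placeholders, and the exact CNR definition (which noise term goes in the denominator) varies between studies.

```python
import numpy as np

def snr(roi_hu):
    """Signal-to-noise ratio of a homogeneous ROI: mean over standard deviation."""
    return roi_hu.mean() / roi_hu.std(ddof=1)

def cnr(roi_hu, background_hu):
    """Contrast-to-noise ratio against a background ROI (background noise in denominator)."""
    return abs(roi_hu.mean() - background_hu.mean()) / background_hu.std(ddof=1)

rng = np.random.default_rng(5)
thalamus = rng.normal(31.0, 2.1, size=400)        # placeholder HU samples
white_matter = rng.normal(27.0, 2.3, size=400)    # placeholder background HU samples
print(f"SNR = {snr(thalamus):.1f}, CNR = {cnr(thalamus, white_matter):.2f}")
```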


2014 ◽  
Vol 7 (5) ◽  
pp. 1901-1918 ◽  
Author(s):  
J. Ray ◽  
V. Yadav ◽  
A. M. Michalak ◽  
B. van Bloemen Waanders ◽  
S. A. McKenna

Abstract. The characterization of fossil-fuel CO2 (ffCO2) emissions is paramount to carbon cycle studies, but the use of atmospheric inverse modeling approaches for this purpose has been limited by the highly heterogeneous and non-Gaussian spatiotemporal variability of emissions. Here we explore the feasibility of capturing this variability using a low-dimensional parameterization that can be implemented within the context of atmospheric CO2 inverse problems aimed at constraining regional-scale emissions. We construct a multiresolution (i.e., wavelet-based) spatial parameterization for ffCO2 emissions using the Vulcan inventory, and examine whether such a parameterization can capture a realistic representation of the expected spatial variability of actual emissions. We then explore whether sub-selecting wavelets using two easily available proxies of human activity (images of lights at night and maps of built-up areas) yields a low-dimensional alternative. We finally implement this low-dimensional parameterization within an idealized inversion, where a sparse reconstruction algorithm, an extension of stagewise orthogonal matching pursuit (StOMP), is used to identify the wavelet coefficients. We find that (i) the spatial variability of fossil-fuel emissions can indeed be represented using a low-dimensional wavelet-based parameterization, (ii) images of lights at night can be used as a proxy for sub-selecting wavelets for such an analysis, and (iii) implementing this parameterization within the described inversion framework makes it possible to quantify fossil-fuel emissions at regional scales if fossil-fuel-only CO2 observations are available.
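A sketch of the low-dimensional parameterization idea: decompose a 2-D emission field in an orthogonal wavelet basis and keep only the wavelets whose support overlaps a proxy of human activity (here a thresholded "nightlights" mask). The field, the proxy and the wavelet family are illustrative assumptions, not the Vulcan-based setup or the StOMP inversion used in the paper.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(6)
emissions = rng.random((64, 64)) ** 4              # placeholder heterogeneous emission field
nightlights = (emissions > 0.3).astype(float)      # placeholder human-activity proxy

wavelet, level = "haar", 3
arr, slices = pywt.coeffs_to_array(pywt.wavedec2(emissions, wavelet, level=level))
proxy_arr, _ = pywt.coeffs_to_array(pywt.wavedec2(nightlights, wavelet, level=level))

keep = np.abs(proxy_arr) > 1e-9                    # wavelets "seen" by the activity proxy
arr[~keep] = 0.0                                   # low-dimensional parameterization

approx = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                       wavelet)
err = np.linalg.norm(approx - emissions) / np.linalg.norm(emissions)
print(f"kept {int(keep.sum())}/{keep.size} coefficients, relative error {err:.2f}")
```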


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Dennis Kupitz ◽  
Heiko Wissel ◽  
Jan Wuestemann ◽  
Stephanie Bluemel ◽  
Maciej Pech ◽  
...  

Abstract Background The introduction of hybrid SPECT/CT devices enables quantitative imaging in SPECT, providing a methodological setup for quantitation using SPECT tracers comparable to PET/CT. We evaluated a specific quantitative reconstruction algorithm for SPECT data using a 99mTc-filled NEMA phantom. Quantitative and qualitative image parameters were evaluated for different parametrizations of the acquisition and reconstruction protocol to identify an optimized quantitative protocol. Results The reconstructed activity concentration (ACrec) and the signal-to-noise ratio (SNR) of all examined protocols (n = 16) were significantly affected by the parametrization of the weighting factor k used in scatter correction, the total number of iterations and the sphere volume (all, p < 0.0001). The two examined SPECT acquisition protocols (with 60 or 120 projections) had a minor impact on the ACrec and no significant impact on the SNR. In comparison to the known AC, the use of default scatter correction (k = 0.47) or object-specific scatter correction (k = 0.18) resulted in an underestimation of ACrec in the largest sphere volume (26.5 ml) by − 13.9 kBq/ml (− 16.3%) and − 7.1 kBq/ml (− 8.4%), respectively. An increase in total iterations leads to an increase in estimated AC and a decrease in SNR. The mean difference between ACrec and known AC decreased with an increasing number of total iterations (e.g., for 20 iterations (2 iterations/10 subsets) = − 14.6 kBq/ml (− 17.1%), 240 iterations (24i/10s) = − 8.0 kBq/ml (− 9.4%), p < 0.0001). In parallel, the mean SNR decreased significantly from 2i/10s to 24i/10s by 76% (p < 0.0001). Conclusion Quantitative SPECT imaging is feasible with the used reconstruction algorithm and hybrid SPECT/CT, and its consistent implementation in diagnostics may provide perspectives for quantification in routine clinical practice (e.g., assessment of bone metabolism). When combining quantitative analysis and diagnostic imaging, we recommend using two different reconstruction protocols with task-specific optimized setups (quantitative vs. qualitative reconstruction). Furthermore, individual scatter correction significantly improves both quantitative and qualitative results.
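A sketch of the phantom evaluation described above: for a sphere VOI in a reconstructed, calibration-scaled SPECT volume, compute the recovered activity concentration (ACrec), its deviation from the known filled AC, and an SNR against a background VOI. The volume, masks and known AC are placeholders, not the NEMA phantom data of the study.

```python
import numpy as np

def evaluate_sphere(volume_kbq_ml, sphere_mask, background_mask, known_ac_kbq_ml):
    """Recovered AC, absolute and relative deviation from the known AC, and SNR."""
    ac_rec = volume_kbq_ml[sphere_mask].mean()
    snr = ac_rec / volume_kbq_ml[background_mask].std(ddof=1)
    diff = ac_rec - known_ac_kbq_ml
    return ac_rec, diff, 100.0 * diff / known_ac_kbq_ml, snr

rng = np.random.default_rng(7)
vol = rng.normal(10.0, 2.0, size=(64, 64, 64))            # placeholder background ~10 kBq/ml
sphere = np.zeros(vol.shape, dtype=bool); sphere[28:36, 28:36, 28:36] = True
vol[sphere] = rng.normal(71.0, 6.0, size=sphere.sum())    # partially recovered sphere signal
bkg = np.zeros(vol.shape, dtype=bool); bkg[4:16, 4:16, 4:16] = True

ac_rec, diff, pct, snr = evaluate_sphere(vol, sphere, bkg, known_ac_kbq_ml=85.0)
print(f"ACrec = {ac_rec:.1f} kBq/ml ({pct:+.1f}% vs known), SNR = {snr:.1f}")
```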


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4623
Author(s):  
Sinead Barton ◽  
Salaheddin Alakkari ◽  
Kevin O’Dwyer ◽  
Tomas Ward ◽  
Bryan Hennelly

Raman spectroscopy is a powerful diagnostic tool in biomedical science, whereby different disease groups can be classified based on subtle differences in the cell or tissue spectra. A key component in the classification of Raman spectra is the application of multivariate statistical models. However, Raman scattering is a weak process, resulting in a trade-off between acquisition times and signal-to-noise ratios, which has limited its more widespread adoption as a clinical tool. Typically, denoising is applied to the Raman spectrum from a biological sample to improve the signal-to-noise ratio before application of statistical modeling. A popular method for this is Savitzky–Golay filtering. Such an algorithm is difficult to tune so that it strikes a balance between denoising and excessive smoothing of spectral peaks, the characteristics of which are critically important for classification purposes. In this paper, we demonstrate how convolutional neural networks may be enhanced with a non-standard loss function in order to improve the overall signal-to-noise ratio of spectra while limiting corruption of the spectral peaks. Simulated Raman spectra and experimental data are used to train and evaluate the performance of the algorithm in terms of signal-to-noise ratio and peak fidelity. The proposed method is demonstrated to effectively smooth noise while preserving spectral features in low-intensity spectra, which is advantageous when compared with Savitzky–Golay filtering. For low-intensity spectra, the proposed algorithm was shown to improve both local and overall signal-to-noise ratios by up to 100%, indicating that this method would be most suitable for low-light or high-throughput applications.
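A sketch of the kind of composite loss described above: a mean-squared-error term for overall denoising plus a term that penalises distortion of sharp spectral features (here via first differences). The derivative term and its weight are assumptions about the "non-standard loss function", not the authors' exact formulation; the toy 1-D CNN and random spectra are placeholders.

```python
import torch

def peak_preserving_loss(denoised, clean, alpha=0.5):
    """MSE plus a first-difference term that discourages smoothing away peak slopes."""
    mse = torch.mean((denoised - clean) ** 2)
    d_hat = denoised[..., 1:] - denoised[..., :-1]
    d_ref = clean[..., 1:] - clean[..., :-1]
    return mse + alpha * torch.mean((d_hat - d_ref) ** 2)

# Toy 1-D CNN denoiser over batches of spectra (batch, channel, wavenumber).
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=9, padding=4), torch.nn.ReLU(),
    torch.nn.Conv1d(16, 1, kernel_size=9, padding=4),
)
noisy = torch.randn(8, 1, 1024)     # placeholder noisy Raman spectra
clean = torch.randn(8, 1, 1024)     # placeholder ground-truth spectra

loss = peak_preserving_loss(model(noisy), clean)
loss.backward()                     # ready to plug into any optimiser step
print(float(loss))
```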


2018 ◽  
Vol 10 (5-6) ◽  
pp. 578-586 ◽  
Author(s):  
Simon Senega ◽  
Ali Nassar ◽  
Stefan Lindenmeier

Abstract For a fast scan-phase satellite radio antenna diversity system, a noise correction method is presented that significantly improves audio availability at low signal-to-noise ratio (SNR) conditions. An error analysis of the level and phase detection within the diversity system in the presence of noise leads to a correction method based on a priori knowledge of the system's noise floor. This method is described and applied in a hardware example of a satellite digital audio radio services antenna diversity circuit for fast fading conditions. Test drives performed in real fading scenarios are described and the results are analyzed statistically. Simulations of the scan-phase antenna diversity system show higher signal amplitudes and availabilities. Measurement results are presented for dislocated antennas as well as for a diversity antenna set at a single mounting position. The diversity system with noise correction is compared with the same system without noise correction and with a single antenna system. Using this new method in fast multipath fading driving scenarios underneath dense foliage, with a low SNR of the antenna signals, the diversity system achieves a reduction in audio mute time of one order of magnitude compared with single antenna systems.
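A sketch of a noise-floor correction for the level detector of a scan-phase diversity system: the detector reads signal-plus-noise power, so an a priori estimate of the receiver noise floor is subtracted before antenna signals are weighted and combined. The dB values are illustrative, not measurements from the paper's hardware, and the paper's phase correction is not reproduced here.

```python
import numpy as np

def corrected_level_dbm(measured_power_dbm, noise_floor_dbm):
    """Estimate the signal-only level from a signal+noise power reading."""
    p_meas = 10.0 ** (measured_power_dbm / 10.0)
    p_noise = 10.0 ** (noise_floor_dbm / 10.0)
    p_signal = max(p_meas - p_noise, 1e-15)        # guard against negative estimates
    return 10.0 * np.log10(p_signal)

# The bias of the uncorrected reading is largest at low SNR (assumed -100 dBm floor):
for reading in (-95.0, -98.0, -99.5):
    print(f"{reading} dBm measured -> {corrected_level_dbm(reading, -100.0):.1f} dBm signal estimate")
```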

