An Extended Reweighted ℓ1 Minimization Algorithm for Image Restoration

Mathematics
2021
Vol 9 (24)
pp. 3224
Author(s):  
Sining Huang ◽  
Yupeng Chen ◽  
Tiantian Qiao

This paper proposes an effective extended reweighted ℓ1 minimization algorithm (ERMA) to solve the basis pursuit problem min{‖u‖₁ : Au = f} over u ∈ ℝⁿ in compressed sensing, where A ∈ ℝ^(m×n) and m ≪ n. The fast algorithm is based on linearized Bregman iteration with a soft thresholding operator and generalized inverse iteration. It also incorporates the iterative reweighting strategy used to solve the problem min{‖u‖_p^p : Au = f}, with weights ωᵢ(u, p) = (ε + uᵢ²)^(p/2 − 1). Numerical experiments show that this ℓ1 minimization consistently performs better than other methods; in particular, when p = 0 the signal restored by the algorithm has the highest signal-to-noise ratio. Additionally, ill-conditioning of the matrix A has no effect on the workload or computation time of this approach.
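
Under the abstract's definitions, a minimal sketch of the reweighting idea might look as follows; the step size, stopping rule, and parameter defaults are illustrative assumptions, not the paper's tuned ERMA.

```python
import numpy as np

def soft_threshold(x, mu):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def reweighted_bregman(A, f, p=0.0, eps=1e-1, mu=1.0, delta=None, iters=500):
    """Sketch of reweighted l1 minimization for min ||u||_p^p s.t. Au = f.

    The weights w_i = (eps + u_i^2)^(p/2 - 1) follow the abstract; the
    Bregman step size `delta` and the stopping rule are illustrative.
    """
    m, n = A.shape
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative gradient step
    u = np.zeros(n)
    v = np.zeros(n)  # Bregman auxiliary variable
    for _ in range(iters):
        w = (eps + u ** 2) ** (p / 2.0 - 1.0)   # reweighting from the abstract
        v = v + delta * A.T @ (f - A @ u)       # linearized Bregman update
        u = soft_threshold(v, mu * w)           # weighted shrinkage
        if np.linalg.norm(A @ u - f) <= 1e-6 * np.linalg.norm(f):
            break
    return u
```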

2021
Vol 11 (4)
pp. 1591
Author(s):  
Ruixia Liu ◽  
Minglei Shu ◽  
Changfang Chen

The electrocardiogram (ECG) is widely used for the diagnosis of heart disease. However, ECG signals are easily contaminated by various noises. This paper presents efficient denoising and compressed sensing (CS) schemes for ECG signals based on basis pursuit (BP). The signal denoising and reconstruction process combines low-pass filtering with the alternating direction method of multipliers (ADMM). ADMM introduces a dual variable, adds a quadratic penalty term, and alternately optimizes the primal and dual variables, which decouples the constrained problem into simpler subproblems. The algorithm is able to remove both baseline wander and Gaussian white noise. Its effectiveness is validated on records from the MIT-BIH arrhythmia database. The simulations show that the proposed ADMM-based method performs better in ECG denoising; furthermore, it preserves the details of the ECG signal in reconstruction and achieves a higher signal-to-noise ratio (SNR) and a smaller mean square error (MSE).
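
The abstract does not reproduce the update equations; the sketch below shows the standard ADMM splitting for basis pursuit (min ‖x‖₁ s.t. Ax = b) on which such a scheme is built. The projection-based x-update and the penalty parameter rho are textbook choices, not necessarily the paper's exact pipeline (which also low-pass filters the raw ECG).

```python
import numpy as np

def admm_basis_pursuit(A, b, rho=1.0, iters=200):
    """Sketch of ADMM for basis pursuit: min ||x||_1 s.t. Ax = b."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)
    P = np.eye(n) - A.T @ AAt_inv @ A      # projector onto the null space of A
    q = A.T @ AAt_inv @ b                  # a particular solution of Ax = b
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    for _ in range(iters):
        x = P @ (z - u) + q                # project z - u onto {x : Ax = b}
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - 1.0 / rho, 0.0)  # shrinkage
        u = u + x - z                      # dual update on the splitting constraint
    return z
```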


2020
Vol 11 (1)
pp. 39
Author(s):  
Eric Järpe ◽  
Mattias Weckstén

A new method for musical steganography in the MIDI format is presented. The MIDI standard is a user-friendly music technology protocol frequently deployed by composers of all levels of ambition. To the authors' knowledge, there is no fully implemented, rigorously specified, publicly available method for MIDI steganography. The goal of this study is to investigate how a novel MIDI steganography algorithm can be implemented by manipulating the velocity attribute, subject to restrictions on capacity and security. Many of today's MIDI steganography methods, described less rigorously in the literature, fail to be resilient to steganalysis. Side effects of many current methods are traces that could catch the eye of a scrutinizing steganalyst: artefacts in the MIDI code that would not occur in ordinarily generated MIDI music, such as MIDI file size inflation, radical changes in the mean absolute error or peak signal-to-noise ratio of certain kinds of MIDI events, or even audible effects in the stego MIDI file. Resilience to steganalysis is an imperative property of a steganography method. By restricting the carrier MIDI files to classical organ and harpsichord pieces, the problem of velocities following the mood of the music can be avoided. The proposed method, called Velody 2, is found to be on par with or better than cutting-edge alternative methods regarding capacity and inflation while possessing better resilience against steganalysis. An audibility test was conducted to confirm that there are no audible traces in the stego MIDI files.
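
Velody 2 itself is not specified in this abstract; purely to illustrate the carrier attribute involved, here is a naive (and deliberately steganalysis-weak) sketch that hides a bit string in note-on velocities using the mido library. The LSB scheme and the helper name are assumptions for illustration only.

```python
import mido  # pip install mido

def embed_bits(in_path, out_path, bits):
    """Naive illustration only: hide bits in the LSB of note-on velocities.

    This is NOT Velody 2; it merely shows the velocity attribute that the
    paper's method manipulates in a far more careful way.
    """
    mid = mido.MidiFile(in_path)
    it = iter(bits)
    for track in mid.tracks:
        for msg in track:
            # Skip velocity 0 and 1: a note_on with velocity 0 acts as a
            # note-off in MIDI, and clearing the LSB of 1 would create one.
            if msg.type == 'note_on' and msg.velocity > 1:
                try:
                    bit = next(it)
                except StopIteration:
                    mid.save(out_path)
                    return
                msg.velocity = (msg.velocity & ~1) | bit  # overwrite the LSB
    mid.save(out_path)
```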


1995
Vol 85 (1)
pp. 308-319
Author(s):  
Jin Wang ◽  
Ta-Liang Teng

An artificial neural network-based pattern classification system is applied to seismic event detection. We have designed two types of Artificial Neural Detector (AND) for real-time earthquake detection. Type A (AND-A) uses the recursive STA/LTA time series as input data, and type B (AND-B) uses moving-window spectrograms to detect earthquake signals. The two ANDs are trained under supervised learning on a set of seismic recordings, and the trained ANDs are then applied to another set of recordings for testing. Results show that the accuracy of the artificial neural network-based seismic detectors is better than that of conventional algorithms based solely on an STA/LTA threshold. This is especially true for signals with a low signal-to-noise ratio or spike-like noise.
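
As a reference point for AND-A's input, a recursive STA/LTA characteristic function can be computed as below; the window lengths are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def recursive_sta_lta(x, fs, sta_win=0.5, lta_win=10.0):
    """Recursive STA/LTA ratio of a seismogram x sampled at fs Hz.

    Short- and long-term averages of the instantaneous energy are updated
    with exponential smoothing; window lengths (seconds) are illustrative.
    """
    ka = 1.0 / (sta_win * fs)   # STA smoothing constant
    kb = 1.0 / (lta_win * fs)   # LTA smoothing constant
    sta = lta = 1e-10
    ratio = np.zeros(len(x))
    for i, sample in enumerate(x):
        e = sample * sample          # instantaneous energy
        sta += ka * (e - sta)        # short-term average (recursive)
        lta += kb * (e - lta)        # long-term average (recursive)
        ratio[i] = sta / lta
    return ratio

# A conventional detector declares an event when the ratio exceeds a fixed
# threshold; AND-A instead feeds this time series to a neural classifier.
```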


2018
Vol 616
pp. A82
Author(s):  
B. Proxauf ◽  
R. da Silva ◽  
V. V. Kovtyukh ◽  
G. Bono ◽  
L. Inno ◽  
...  

We gathered more than 1130 high-resolution optical spectra of more than 250 Galactic classical Cepheids. The spectra were collected with the optical spectrographs UVES at the VLT, HARPS at the ESO 3.6 m telescope, FEROS at the 2.2 m MPG/ESO telescope, and STELLA. To improve the effective temperature estimates, we present more than 150 new line depth ratio (LDR) calibrations that, together with similar calibrations already available in the literature, allowed us to cover a broad range in wavelength (5348 ≤ λ ≤ 8427 Å) and in effective temperature (3500 ≤ Teff ≤ 7700 K). This gives us the unique opportunity to cover both the hottest and coolest phases along the Cepheid pulsation cycle and to limit the intrinsic error on individual measurements to the level of ~100 K. Thanks to the high signal-to-noise ratio of the individual spectra, we identified and measured hundreds of neutral and ionized lines of heavy elements, which in turn allows us to trace the variation of both surface gravity and microturbulent velocity along the pulsation cycle. The accuracy of the physical parameters and the number of Fe I (more than one hundred) and Fe II (more than ten) lines measured allowed us to estimate mean iron abundances with a precision better than 0.1 dex. We focus on 14 calibrating Cepheids for which the current spectra cover either the entire pulsation cycle or a significant portion of it. The current estimates of the variation of the physical parameters along the pulsation cycle and of the iron abundances agree very well with similar estimates available in the literature. Independent homogeneous estimates of both physical parameters and metal abundances, based on different approaches that can constrain possible systematics, are highly encouraged.
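
As an illustration of how an LDR thermometer works, the sketch below measures two line depths in a continuum-normalized spectrum and applies a linear calibration; the line pair, window width, and coefficients are placeholders, not the paper's calibrations.

```python
import numpy as np

def line_depth(wave, flux, line_center, half_window=0.15):
    """Depth of an absorption line in a continuum-normalized spectrum:
    1 minus the minimum flux within +/- half_window Angstroms of center."""
    sel = np.abs(wave - line_center) <= half_window
    return 1.0 - flux[sel].min()

def teff_from_ldr(wave, flux, line_a, line_b, coeffs=(4500.0, 2000.0)):
    """Illustrative LDR thermometer: Teff = c0 + c1 * (depth_a / depth_b).

    Real calibrations (the paper provides >150) fit such relations, often
    with higher-order terms, against stars of known temperature; the line
    centers and coefficients here are placeholders.
    """
    ldr = line_depth(wave, flux, line_a) / line_depth(wave, flux, line_b)
    c0, c1 = coeffs
    return c0 + c1 * ldr
```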


2017
Vol 8 (4)
pp. 58-83
Author(s):  
Abdul Kayom Md Khairuzzaman ◽  
Saurabh Chaudhury

Multilevel thresholding is a popular image segmentation technique. However, the computational complexity of multilevel thresholding increases rapidly with the number of thresholds, so metaheuristic algorithms are applied to reduce it. A new multilevel thresholding method based on the Moth-Flame Optimization (MFO) algorithm is proposed in this paper. The goodness of the thresholds is evaluated using Kapur's entropy or Otsu's between-class variance function. The proposed method is tested on a set of benchmark images, and its performance is compared with PSO (Particle Swarm Optimization)- and BFO (Bacterial Foraging Optimization)-based methods. The results are analyzed objectively using the fitness function values and the Peak Signal to Noise Ratio (PSNR). The MFO-based multilevel thresholding method is found to perform better than the PSO- and BFO-based methods.
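
The optimizer maximizes a thresholding criterion over candidate threshold sets; as an example, Kapur's entropy can be computed as below. The moth-flame search loop itself is omitted, and the function signature is an assumption for illustration.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's entropy of a gray-level histogram split at the given thresholds.

    `hist` is a 1-D array of bin counts (e.g., 256 gray levels); `thresholds`
    is a list of bin indices. Higher is better: the optimizer maximizes this.
    """
    p = hist / hist.sum()
    edges = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()              # probability mass of this class
        if w <= 0:
            return -np.inf              # degenerate split: reject it
        q = p[lo:hi] / w                # within-class distribution
        q = q[q > 0]
        total += -(q * np.log(q)).sum() # per-class Shannon entropy
    return total
```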


2021
Vol 253
pp. 11012
Author(s):  
H. Imam

The increased particle flux (pile-up) at the HL-LHC, with luminosities of up to L = 7.5 × 10³⁴ cm⁻² s⁻¹, will have a significant impact on event reconstruction in the ATLAS detector and on the performance of the trigger. The forward and end-cap regions, where the inner tracker has poorer longitudinal track impact parameter resolution and the liquid argon calorimeter has coarser granularity, will be significantly affected. A High Granularity Timing Detector (HGTD) is proposed to be installed in front of the LAr end-cap calorimeters to mitigate the pile-up effect and to measure the luminosity. It will cover the pseudorapidity range from 2.4 to 4.0. Two double-sided silicon sensor layers will provide accurate timing information for minimum-ionizing particles, with a resolution better than 30 ps per track (before irradiation), allowing each particle to be assigned to the correct vertex. The readout cells are about 1.3 mm × 1.3 mm in size, leading to a highly granular detector with 3 million channels. The technology of low-gain avalanche detectors (LGAD), with sufficient gain, was chosen to achieve the required high signal-to-noise ratio. A dedicated ASIC is under development, with some prototypes already submitted and evaluated. The requirements and general specifications of the HGTD will be presented and discussed. R&D campaigns on the LGADs are being carried out to study the sensors, the related ASICs, and the radiation hardness. Both laboratory and test beam results will be presented.
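
As a toy illustration of why per-track timing helps with pile-up, the sketch below assigns each track to the vertex closest in time, within a window set by the quoted ~30 ps resolution; the function and its cuts are illustrative assumptions, not the ATLAS reconstruction.

```python
import numpy as np

def assign_by_time(track_times, vertex_times, sigma_t=30e-12, n_sigma=3.0):
    """Match each track to the vertex nearest in time, or to -1 (pile-up
    reject) if no vertex lies within n_sigma * sigma_t.

    sigma_t = 30 ps is the per-track resolution quoted for the HGTD; the
    real reconstruction also uses spatial (z0) compatibility.
    """
    vertex_times = np.asarray(vertex_times)
    assignments = []
    for t in track_times:
        dt = np.abs(vertex_times - t)
        j = int(np.argmin(dt))                      # nearest vertex in time
        assignments.append(j if dt[j] <= n_sigma * sigma_t else -1)
    return assignments
```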


2018
Vol 115 (5)
pp. 927-932
Author(s):  
Fuchen Liu ◽  
David Choi ◽  
Lu Xie ◽  
Kathryn Roeder

Community detection is challenging when the network structure is estimated with uncertainty. Dynamic networks present additional challenges but also add information across time periods. We propose a global community detection method, persistent communities by eigenvector smoothing (PisCES), that combines information across a series of networks, longitudinally, to strengthen the inference for each period. Our method is derived from evolutionary spectral clustering and degree correction methods. Data-driven solutions to the problem of tuning parameter selection are provided. In simulations we find that PisCES performs better than competing methods designed for a low signal-to-noise ratio. Recently obtained gene expression data from rhesus monkey brains provide samples from finely partitioned brain regions over a broad time span including pre- and postnatal periods. Of interest is how gene communities develop over space and time; however, once the data are divided into homogeneous spatial and temporal periods, sample sizes are very small, making inference quite challenging. Applying PisCES to the medial prefrontal cortex of rhesus monkey brains from near conception to adulthood reveals dense communities that persist, merge, and diverge over time, and others that are loosely organized and short-lived, illustrating how dynamic community detection can yield interesting insights into processes such as brain development.
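
PisCES's exact updates are given in the paper; the sketch below is only a simplified rendering of the eigenvector-smoothing idea, in which each period's leading eigenspace is nudged toward those of its temporal neighbors. The smoothing weight alpha, the fixed sweep count, and the assumption of symmetric adjacency matrices are illustrative choices.

```python
import numpy as np

def smoothed_spectral_embeddings(adj_list, k, alpha=0.1, sweeps=20):
    """Simplified eigenvector smoothing in the spirit of PisCES (not the
    published algorithm): iterate over periods, re-eigendecomposing each
    adjacency matrix augmented by its neighbors' current eigenspaces.
    """
    T = len(adj_list)

    def top_k(M):
        vals, vecs = np.linalg.eigh(M)   # assumes symmetric (undirected) M
        return vecs[:, -k:]              # k leading eigenvectors

    V = [top_k(A.astype(float)) for A in adj_list]
    for _ in range(sweeps):
        for t in range(T):
            M = adj_list[t].astype(float)
            if t > 0:
                M += alpha * V[t - 1] @ V[t - 1].T   # pull toward previous period
            if t < T - 1:
                M += alpha * V[t + 1] @ V[t + 1].T   # pull toward next period
            V[t] = top_k(M)
    return V  # cluster the rows of each V[t] (e.g., k-means) for communities
```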


2018
Vol 32 (16)
pp. 1850169
Author(s):  
Bingchang Zhou ◽  
Qianqian Qi

We investigate the phenomenon of stochastic resonance (SR) in parallel integrate-and-fire neuronal arrays with a threshold, driven by additive noise or signal-dependent noise (SDN) and a noisy input signal. SR occurs in this system. Whether the system is subject to additive noise or SDN, the input noise [Formula: see text] weakens the performance of SR, while the array size N and the signal parameter [Formula: see text] promote it. The signal parameter [Formula: see text] promotes the performance of SR for additive noise, but for SDN the peak values of the output signal-to-noise ratio [Formula: see text] first decrease and then increase as [Formula: see text] increases. Moreover, as [Formula: see text] tends to infinity, the curve of [Formula: see text] first increases and then decreases for SDN, whereas for additive noise it increases to reach a plateau. Comparing the system's performance under additive noise with that under SDN, we also find that for limited array size N, the information transmission of a periodic signal is significantly better with SDN than with additive noise.
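
A toy version of such an experiment, for the additive-noise case only, can be simulated as below; all parameter values and the periodogram-based SNR estimator are illustrative assumptions.

```python
import numpy as np

def array_snr(N=64, noise_std=0.5, f0=0.01, amp=0.3, T=100_000, theta=1.0):
    """Toy threshold-array SR experiment: N parallel units fire when a weak
    sinusoid plus independent additive noise crosses the threshold theta;
    the output SNR is read off the pooled response's periodogram at f0.
    """
    t = np.arange(T)
    s = amp * np.sin(2 * np.pi * f0 * t)                 # subthreshold signal
    spikes = ((s + noise_std * np.random.randn(N, T)) > theta).mean(axis=0)
    psd = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2
    freqs = np.fft.rfftfreq(T)                           # cycles per sample
    k = np.argmin(np.abs(freqs - f0))                    # signal bin
    background = np.median(np.delete(psd, k))            # noise-floor estimate
    return psd[k] / background

# Sweeping noise_std typically yields the nonmonotonic SNR curve that
# characterizes stochastic resonance: too little noise, no threshold
# crossings; too much, the periodic signal is swamped.
```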

