Comparative analysis between non-linear wavelet-based image denoising techniques

2018 ◽  
Vol 5 ◽  
pp. 58-67
Author(s):  
Milan Chikanbanjar

Digital images are a major medium for transmitting visual information, but noise corrupts the image, so the received image must be processed before it can be used in an application. Image denoising manipulates the data to remove noise and produce a good-quality image that retains detail. Quantitative measures, chiefly MSE (Mean Square Error), PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index), are used to show the improvement in the quality of the restored image achieved by the various thresholding techniques. Here, non-linear wavelet-transform denoising of natural images is studied, analyzed and compared using thresholding techniques such as soft, hard, semi-soft, LevelShrink, SUREShrink, VisuShrink and BayesShrink. On most of the tests, the PSNR and SSIM values for the LevelShrink Hard thresholding method are higher than those of the other thresholding methods. For instance, the PSNR values of the Lena image for the VisuShrink Hard, VisuShrink Soft, VisuShrink Semi-Soft, LevelShrink Hard, LevelShrink Soft, LevelShrink Semi-Soft, SUREShrink and BayesShrink thresholding methods at a noise variance of 10 are 23.82, 16.51, 23.25, 24.48, 23.25, 20.67, 23.42 and 23.14 respectively, and the corresponding SSIM values are 0.28, 0.28, 0.28, 0.29, 0.22, 0.25 and 0.16, again highest for LevelShrink Hard. Thus, it can be stated that the LevelShrink Hard thresholding method performs better on most of the tests.
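For reference, below is a minimal sketch of hard and soft wavelet-threshold denoising evaluated with PSNR and SSIM, assuming the PyWavelets and scikit-image packages; the db4 wavelet, the VisuShrink threshold λ = σ√(2 ln N) and the test image are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
import pywt
from skimage import data, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
clean = img_as_float(data.camera())
noisy = clean + rng.normal(0.0, 0.1, clean.shape)  # additive Gaussian noise

def denoise(img, mode="hard"):
    # Multi-level 2-D DWT, threshold the detail sub-bands, then reconstruct.
    coeffs = pywt.wavedec2(img, "db4", level=3)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate from finest HH
    lam = sigma * np.sqrt(2.0 * np.log(img.size))        # VisuShrink threshold
    out = [coeffs[0]] + [
        tuple(pywt.threshold(d, lam, mode=mode) for d in level)
        for level in coeffs[1:]
    ]
    return pywt.waverec2(out, "db4")

for mode in ("hard", "soft"):
    den = denoise(noisy, mode)
    print(mode,
          "PSNR=%.2f" % peak_signal_noise_ratio(clean, den, data_range=1.0),
          "SSIM=%.3f" % structural_similarity(clean, den, data_range=1.0))
```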

Author(s):  
Jelena Vlaović ◽  
Drago Žagar ◽  
Snježana Rimac-Drlje ◽  
Mario Vranješ

With the development of Video on Demand applications enabled by the availability of high-speed Internet access, adaptive streaming algorithms have been developing and improving. The focus is on improving the user's Quality of Experience (QoE) and taking it into account as one of the parameters for the adaptation algorithm. Users often experience changing network conditions, so the goal is to ensure stable video playback with a satisfying QoE level. Although subjective Video Quality Assessment (VQA) methods provide more accurate results regarding the user's QoE, objective VQA methods cost less and are less time-consuming. In this article, nine objective VQA methods are compared on a large set of video sequences with various spatial and temporal activities: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), MultiScale Structural Similarity Index (MS-SSIM), Video Quality Metric (VQM), Mean Sum of Differences (DELTA), Mean Sum of Absolute Differences (MSAD), Mean Squared Error (MSE), Netflix Video Multimethod Assessment Fusion (Netflix VMAF) and Visual Signal-to-Noise Ratio (VSNR). The video sequences used for testing were encoded according to H.264/AVC at twelve different target coding bitrates and three different spatial resolutions (190 sequences in total). In addition to objective quality assessment, subjective quality assessment was performed for these sequences. All results acquired by the objective VQA methods were compared with subjective Mean Opinion Score (MOS) results using the Pearson Linear Correlation Coefficient (PLCC). Measurement results obtained on this large set of video sequences with different spatial resolutions show that VQA methods such as SSIM and VQM correlate better with MOS results than PSNR, MS-SSIM, VSNR, DELTA, MSE, VMAF and MSAD do. However, the PLCC results for SSIM and VQM (0.7799 and 0.7734, respectively) are too low for these methods to replace subjective testing in streaming services. These results suggest that more efficient VQA methods should be developed for use in streaming testing procedures as well as to support the video segmentation process. Furthermore, comparing results across spatial resolutions shows that at lower target coding bitrates, video sequences encoded at a lower spatial resolution achieve higher quality than sequences encoded at a higher spatial resolution at the same target bitrate, particularly for video sequences with higher spatial and temporal information.
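As an illustration of the correlation analysis, the sketch below computes the PLCC between a set of objective scores and MOS values, assuming SciPy; the arrays are illustrative stand-ins for per-sequence results, not data from the article.

```python
import numpy as np
from scipy.stats import pearsonr

mos = np.array([4.2, 3.8, 2.9, 2.1, 1.5])               # subjective MOS per sequence
ssim_scores = np.array([0.97, 0.94, 0.88, 0.80, 0.71])   # objective scores per sequence

# Pearson Linear Correlation Coefficient between objective and subjective results.
plcc, p_value = pearsonr(ssim_scores, mos)
print(f"PLCC = {plcc:.4f} (p = {p_value:.3g})")
```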


2021 ◽  
Vol 36 (1) ◽  
pp. 642-649
Author(s):  
G. Sharvani Reddy ◽  
R. Nanmaran ◽  
Gokul Paramasivam

Aim: The image is a powerful tool for conveying information, but a captured image can be degraded by blur and environmental noise. Image restoration is an image-processing technique by which a degraded image is recovered to the closest possible approximation of the original. Materials and Methods: In this research the Lucy-Richardson algorithm is used to restore blurred and noisy images in MATLAB, and the proposed work is compared with the Wiener filter; the sample size for each group is 30. Results: Performance was compared using three parameters: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM) and Normalized Correlation (NC); high values of all three indicate better restoration. Lucy-Richardson provides a mean PSNR of 10.4086 dB, mean SSIM of 0.4173 and mean NC of 0.7433, while the Wiener filter provides a mean PSNR of 6.3979 dB, SSIM of 0.3016 and NC of 0.3276. Conclusion: Based on the experimental results and statistical analysis using an independent-samples t-test, image restoration with the Lucy-Richardson algorithm performs significantly better than the Wiener filter in terms of both PSNR (P<0.001) and SSIM (P<0.001).
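A minimal sketch contrasting the two restoration approaches, assuming scikit-image and SciPy; the Gaussian PSF, the noise level and the iteration count are illustrative assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.restoration import richardson_lucy, wiener

rng = np.random.default_rng(1)
clean = img_as_float(data.camera())

# Build a small Gaussian point-spread function and degrade the image with it.
x = np.arange(-3, 4)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 4.0)
psf /= psf.sum()
blurred = convolve2d(clean, psf, mode="same", boundary="symm")
degraded = np.clip(blurred + rng.normal(0, 0.01, clean.shape), 0, 1)

for name, restored in [
    ("Lucy-Richardson", richardson_lucy(degraded, psf, 30)),  # 30 iterations
    ("Wiener", wiener(degraded, psf, balance=0.1)),
]:
    restored = np.clip(restored, 0, 1)
    print(name,
          "PSNR=%.2f dB" % peak_signal_noise_ratio(clean, restored, data_range=1.0),
          "SSIM=%.3f" % structural_similarity(clean, restored, data_range=1.0))
```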


2020 ◽  
Vol 25 (2) ◽  
pp. 86-97
Author(s):  
Sandy Suryo Prayogo ◽  
Tubagus Maulana Kusuma

DVB is the most widely used digital television transmission standard today. The most important aspect of a transmission process is the picture quality of the video received after transmission. Many factors can affect the quality of a picture; one of them is the frame structure of the video. This paper tests the sensitivity of MPEG-4 video with respect to its frame structure over DVB-T transmission. The tests were performed by simulation in MATLAB and Simulink, with ffmpeg used to prepare the video formats and settings to be simulated. The video variables that were varied are the bitrate and the group-of-pictures (GOP) size, while the transmission variable is the signal-to-noise ratio (SNR) of the AWGN channel between the transmitter (Tx) and the receiver (Rx). The experiments yield the average picture quality of the video, measured with the structural similarity index (SSIM) method; the bit error rate (BER) of the DVB-T bitstream was also measured. The experiments show how sensitive the video is to bitrate and GOP size over DVB-T transmission, with the conclusion that the larger the bitrate, the worse the picture quality, and the smaller the GOP size, the better the quality. This research is expected to be extended with deep learning to find the right frame structure for particular conditions of the digital television transmission process.
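A minimal sketch of the kind of measurement described above, averaging the SSIM over the frames of a received video against the reference; it assumes OpenCV and scikit-image, and the grayscale conversion and file names are illustrative, not the paper's exact pipeline.

```python
import cv2
from skimage.metrics import structural_similarity

def mean_video_ssim(ref_path: str, rx_path: str) -> float:
    # Walk both videos frame by frame and average the per-frame SSIM.
    ref, rx = cv2.VideoCapture(ref_path), cv2.VideoCapture(rx_path)
    scores = []
    while True:
        ok_a, frame_a = ref.read()
        ok_b, frame_b = rx.read()
        if not (ok_a and ok_b):
            break  # stop at the end of the shorter stream
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        scores.append(structural_similarity(gray_a, gray_b, data_range=255))
    ref.release(); rx.release()
    return sum(scores) / len(scores)

# e.g. mean_video_ssim("reference.avi", "received.avi")
```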


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5540
Author(s):  
Nayeem Hasan ◽  
Md Saiful Islam ◽  
Wenyu Chen ◽  
Muhammad Ashad Kabir ◽  
Saad Al-Ahmadi

This paper proposes an encryption-based image watermarking scheme using a combination of second-level discrete wavelet transform (2DWT) and discrete cosine transform (DCT) with an auto-extraction feature. The 2DWT was selected based on an analysis of the trade-off between watermark imperceptibility and embedding capacity at various levels of decomposition. A DCT is applied to the selected area to gather the image coefficients into a single vector using a zig-zag scan. The same random bit sequence is used both as the watermark and as the seed for selecting the embedding-zone coefficients. The quality of the reconstructed image was measured in terms of bit correction rate, peak signal-to-noise ratio (PSNR) and similarity index. Experimental results demonstrated that the proposed scheme is highly robust under different types of image-processing attacks. Several image attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were examined on watermarked images, and the results of our proposed method outstripped existing methods, especially in terms of the bit correction ratio (100%), which is a measure of bit restoration. The results were also highly satisfactory in terms of the quality of the reconstructed image, which demonstrated high imperceptibility in terms of peak signal-to-noise ratio (PSNR ≥ 40 dB) and structural similarity (SSIM ≥ 0.9) under different image attacks.
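A loose sketch of the general DWT-plus-DCT embedding idea, assuming PyWavelets, SciPy and scikit-image; the sub-band choice, plain flattening instead of a true zig-zag scan, the additive embedding rule with strength `alpha`, and the non-blind extraction are all simplifications, not the authors' exact scheme.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn
from skimage import data, img_as_float

rng = np.random.default_rng(42)          # the seed doubles as the shared key
image = img_as_float(data.camera())
watermark = rng.integers(0, 2, 128)      # 128-bit random watermark

# Two-level DWT; embed in the level-2 approximation sub-band via a 2-D DCT.
cA2, det2, det1 = pywt.wavedec2(image, "haar", level=2)
spectrum = dctn(cA2, norm="ortho")

alpha = 0.02                                      # embedding strength
flat = spectrum.flatten()                         # stand-in for a zig-zag scan
idx = np.arange(100, 100 + watermark.size)        # skip the lowest frequencies
flat[idx] += alpha * (2.0 * watermark - 1.0)      # +alpha for 1, -alpha for 0

wm_cA2 = idctn(flat.reshape(spectrum.shape), norm="ortho")
watermarked = pywt.waverec2([wm_cA2, det2, det1], "haar")

# Extraction (non-blind here for brevity): regenerate the positions from the
# seed and read the sign of the coefficient change.
spec2 = dctn(pywt.wavedec2(watermarked, "haar", level=2)[0], norm="ortho")
bits = (spec2.flatten()[idx] - spectrum.flatten()[idx] > 0).astype(int)
print("bit correction rate:", np.mean(bits == watermark))
```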


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1636
Author(s):  
Noé Ortega-Sánchez ◽  
Diego Oliva ◽  
Erik Cuevas ◽  
Marco Pérez-Cisneros ◽  
Angel A. Juan

The techniques of halftoning are widely used in marketing because they reduce the cost of printing while maintaining the quality of graphics. Halftoning converts a digital image into a binary image made up of dots. The output of halftoning contains less visual information; a possible benefit is the reduction of ink when graphics are printed. The human eye is not able to detect the absence of information, so the printed image still has good quality. The most used method for halftoning is Floyd-Steinberg error diffusion, which defines a specific kernel for the halftoning conversion. However, most proposed halftoning techniques use predefined kernels that do not permit adaptation to different images. This article introduces the use of the harmony search algorithm (HSA) for halftoning. The HSA is a popular evolutionary algorithm inspired by musical improvisation, whose operators permit an efficient exploration of the search space. The HSA is applied to find the best configuration of the halftoning kernel, with the structural similarity index (SSIM) proposed as the objective function. A set of rules is also introduced to reduce the regular patterns that could be created by non-appropriate kernels. The SSIM is used because it is a perception model, used as a metric, that permits comparing images and interpreting the differences between them numerically. The aim of combining the HSA with the SSIM for halftoning is to produce an adaptive method that estimates the best kernel for each image based on its intrinsic attributes. The graphical quality of the proposed algorithm has been compared with classical halftoning methodologies. Experimental results and comparisons provide evidence of the quality of the images obtained by the proposed optimization-based approach; in this context, classical algorithms produce lower graphical quality than our proposal. The results have been validated by a statistical analysis based on independent experiments over the set of benchmark images, using the mean and standard deviation.
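For context, the sketch below implements kernel-based error diffusion with the classic Floyd-Steinberg weights (7, 3, 5, 1)/16 and scores the halftone against the source with SSIM; an optimizer such as the HSA would search over these four weights, and the test image is an illustrative choice.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import structural_similarity

def error_diffuse(img, w=(7/16, 3/16, 5/16, 1/16)):
    # w = kernel weights for the (right, lower-left, lower, lower-right) neighbours.
    out = img.astype(float).copy()
    h, wd = out.shape
    for y in range(h):
        for x in range(wd):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0    # binarize: dot or no dot
            out[y, x] = new
            err = old - new                      # diffuse the quantization error
            if x + 1 < wd:               out[y, x + 1]     += err * w[0]
            if y + 1 < h and x > 0:      out[y + 1, x - 1] += err * w[1]
            if y + 1 < h:                out[y + 1, x]     += err * w[2]
            if y + 1 < h and x + 1 < wd: out[y + 1, x + 1] += err * w[3]
    return out

gray = img_as_float(data.camera())
halftone = error_diffuse(gray)
print("SSIM:", structural_similarity(gray, halftone, data_range=1.0))
```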


2014 ◽  
Vol 46 (1) ◽  
pp. 53-74 ◽  
Author(s):  
Colin Robertson ◽  
Jed A. Long ◽  
Farouk S. Nathoo ◽  
Trisalyn A. Nelson ◽  
Cameron C. F. Plouffe

Author(s):  
Indrarini Dyah Irawati ◽  
Sugondo Hadiyoso ◽  
Gelar Budiman ◽  
Asep Mulyana

Compressed sampling in the application of magnetic resonance imaging compression requires high accuracy when reconstructing from a small number of samples. Sparsity in magnetic resonance images is a fundamental requirement of compressed sampling. In this paper, we propose a lifting wavelet transform sparsity technique that takes the wavelet coefficients of the low-pass sub-band, which contains the meaningful information. The proposed method is useful for compressing data with a high compression ratio at the sender while still maintaining high accuracy at the receiver. These wavelet coefficient values are arranged to form a sparse vector. We explore the performance of the proposed method at several levels of lifting wavelet transform decomposition, namely levels 2, 3, 4, 5, and 6. The second requirement of compressed sampling is the acquisition technique: the sparse vectors are sampled using a normally distributed random measurement matrix, normalized to the average energy of the image pixel block. The last compressed sampling requirement is a reconstruction algorithm. In this study, we analyze three reconstruction algorithms, namely L1-magic, iteratively reweighted least squares, and orthogonal matching pursuit, based on the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) metrics. Experimental results show that magnetic resonance images can be reconstructed with higher SSIM and PSNR using the lifting wavelet transform sparsity technique at a minimum decomposition level of 4. The proposed lifting wavelet transform with the L1-magic reconstruction algorithm has the best performance of the three over the measurement-rate range of 10 to 70. This method also outperforms the techniques in previous studies.
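A minimal sketch of the compressed-sampling pipeline described above (sparse vector, Gaussian measurement matrix, OMP reconstruction), assuming NumPy and scikit-learn; the synthetic 1-D sparse vector stands in for the low-pass wavelet coefficients, and the sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 12                 # signal length, measurements, sparsity

s = np.zeros(n)                       # stand-in for low-pass wavelet coefficients
s[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

phi = rng.normal(0, 1, (m, n)) / np.sqrt(m)   # Gaussian random measurement matrix
y = phi @ s                                    # compressed measurements

# Reconstruct the sparse vector with orthogonal matching pursuit.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(phi, y)
s_hat = omp.coef_
print("reconstruction SNR: %.1f dB"
      % (10 * np.log10(np.sum(s**2) / np.sum((s - s_hat)**2))))
```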


2020 ◽  
Vol 9 (4) ◽  
pp. 1461-1467
Author(s):  
Indrarini Dyah Irawati ◽  
Sugondo Hadiyoso ◽  
Yuli Sun Hariyani

In this study, we propose compressive sampling for MRI reconstruction based on sparse representation using multi-level wavelet transformation, comparing the performance of wavelet decomposition levels 1, 2, 3, and 4. A Gaussian random process is used to generate the measurement matrix. The algorithm used to reconstruct the image is . The experimental results show that multi-level wavelets can achieve a higher compression ratio but require a longer processing time. MRI reconstruction results evaluated with the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) show that the higher the wavelet decomposition level, the lower both values become.
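As a rough illustration of why a deeper decomposition enables a higher compression ratio, the sketch below measures how concentrated the wavelet representation is at levels 1 through 4, assuming PyWavelets; the energy-based sparsity proxy and the test image are illustrative, not the study's metric.

```python
import numpy as np
import pywt
from skimage import data, img_as_float

image = img_as_float(data.camera())    # stand-in for an MRI slice
for level in (1, 2, 3, 4):
    arr, _ = pywt.coeffs_to_array(pywt.wavedec2(image, "db2", level=level))
    energy = np.sort(np.abs(arr).ravel())[::-1] ** 2
    # Fraction of coefficients holding 99% of the energy: fewer coefficients
    # means a sparser representation and a higher achievable compression ratio.
    frac = np.searchsorted(np.cumsum(energy), 0.99 * energy.sum()) / arr.size
    print(f"level {level}: 99% of energy in {frac:.1%} of coefficients")
```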


2020 ◽  
Vol 24 (1) ◽  
Author(s):  
Srikant Kumar Beura ◽  
Amol Arjun Jawale ◽  
Bishnulatpam Pushpa Devi ◽  
Prabir Saha

Inexact computing is an attractive concept for digital signal processing in the submicron regime. This paper proposes a 2-bit inexact adder cell, which is further scaled to 4-bit and 8-bit inexact adders, and error metrics are evaluated mathematically for these adder cells. The approximate design is obtained by simplifying the Karnaugh maps, which leads to a substantial reduction in propagation delay as well as energy consumption. The proposed design has been verified with Cadence Spectre, and performance parameters (such as delay and power consumption) have been evaluated in the CMOS gpdk45 nm technology. Furthermore, the proposed design has been applied to an image denoising application, where image quality metrics such as Peak Signal-to-Noise Ratio (PSNR), Normalized Correlation Coefficient (NCC) and Structural Similarity Index (SSIM) have been analyzed in MATLAB, showing a substantial improvement over its counterparts.
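A minimal sketch of how error metrics for an inexact adder can be evaluated exhaustively; the adder below ORs its lowest two bits instead of adding them (a generic lower-part-OR approximation, not the paper's K-map-derived cell).

```python
import itertools

def approx_add(a: int, b: int, width: int = 8, inexact_bits: int = 2) -> int:
    # Lower part: OR replaces addition, so no carry is generated or propagated.
    mask = (1 << inexact_bits) - 1
    low = (a & mask) | (b & mask)
    # Upper part: exact addition of the remaining bits.
    high = ((a >> inexact_bits) + (b >> inexact_bits)) << inexact_bits
    return (high | low) & ((1 << (width + 1)) - 1)

width = 8
errors = [abs((a + b) - approx_add(a, b, width))
          for a, b in itertools.product(range(1 << width), repeat=2)]
total = len(errors)
print("error rate:", sum(e > 0 for e in errors) / total)   # ER
print("mean error distance:", sum(errors) / total)         # MED
print("worst-case error distance:", max(errors))           # WCED
```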

