No-Reference Video Quality Metrics Using Wavelet Transform with Several Noises

Author(s):  
Salah Sleibi Al-Rawi

This paper presents a blind assessment of video quality, called the no-reference video quality metric. The proposed scheme uses video watermarking based on 8x8-block Haar and Daubechies (D4) wavelet transforms in the RGB domain. Several noise types were tested: salt-and-pepper, Gaussian blur, Gaussian noise, and ripple distortion. In addition, JPEG compression was evaluated within the proposed method. The scheme is designed to be robust against swapping, frame dropping, statistical analysis, and averaging attacks. The experimental results of the proposed system are acceptable, and the Daubechies (D4) filter gives better results than the Haar filter. Perceived video quality is measured through the Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) metrics. Overall, Daubechies (D4) gives the best results compared with the Haar filter and the Discrete Cosine Transform (DCT).
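
As a rough illustration of the two quality measures reported above, the following Python sketch computes RMSE and PSNR between an original frame and a salt-and-pepper-corrupted copy; the test frame, noise density, and 8-bit peak value are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): RMSE and PSNR between an original
# and a degraded video frame, the two metrics used in the paper.
import numpy as np

def rmse(original: np.ndarray, degraded: np.ndarray) -> float:
    """Root mean square error between two frames of identical shape."""
    diff = original.astype(np.float64) - degraded.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming 8-bit pixel data."""
    err = rmse(original, degraded)
    return float("inf") if err == 0 else 20.0 * np.log10(peak / err)

# Example: compare a frame against a salt-and-pepper corrupted copy.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
noisy = frame.copy()
mask = rng.random(frame.shape[:2]) < 0.05                 # corrupt 5% of the pixels
noisy[mask] = rng.choice([0, 255], size=mask.sum())[:, None]
print(f"RMSE = {rmse(frame, noisy):.2f}, PSNR = {psnr(frame, noisy):.2f} dB")
```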

This paper presents a thorough comparison of the performance and usefulness of multi-resolution de-noising techniques. Multi-resolution image de-noising techniques overcome the limitations of Fourier, spatial, and purely frequency-based techniques, as they provide information about the 2-dimensional (2-D) signal at different levels and scales, which is desirable for image de-noising. Four multi-resolution de-noising techniques, namely the Contourlet Transform (CT), the Non-Subsampled Contourlet Transform (NSCT), the Stationary Wavelet Transform (SWT), and the Discrete Wavelet Transform (DWT), have been selected for de-noising camera images. The performance of the different de-noising techniques is then compared for different noise variances and thresholding techniques using well-defined metrics, namely the Peak Signal-to-Noise Ratio (PSNR) and the Root Mean Square Error (RMSE). The analysis shows that the shift-invariant NSCT technique outperforms the CT, SWT and DWT based de-noising techniques in both qualitative and quantitative evaluation.
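
A minimal PyWavelets sketch of the DWT and SWT parts of this comparison (CT and NSCT are not available in PyWavelets and are omitted here); the test image, noise level, threshold, and wavelet are illustrative assumptions.

```python
# Hedged sketch: image denoising with the decimated DWT and its shift-invariant
# SWT counterpart, scored by RMSE and PSNR against the clean image.
import numpy as np
import pywt

def soft_denoise_dwt(noisy, wavelet="db4", level=2, thr=20.0):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    den = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(den, wavelet)

def soft_denoise_swt(noisy, wavelet="db4", level=2, thr=20.0):
    coeffs = pywt.swt2(noisy, wavelet, level=level)       # undecimated, shift-invariant
    den = [(cA, tuple(pywt.threshold(c, thr, mode="soft") for c in d))
           for cA, d in coeffs]
    return pywt.iswt2(den, wavelet)

rng = np.random.default_rng(1)
clean = np.kron(rng.random((16, 16)) * 255, np.ones((8, 8)))   # 128x128 blocky test image
noisy = clean + rng.normal(0, 25, clean.shape)                  # additive Gaussian noise

for name, den in [("DWT", soft_denoise_dwt(noisy)), ("SWT", soft_denoise_swt(noisy))]:
    rmse = np.sqrt(np.mean((clean - den) ** 2))
    print(f"{name}: RMSE = {rmse:.2f}, PSNR = {20 * np.log10(255.0 / rmse):.2f} dB")
```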


This research presents a distinctive image steganography technique based on the Fractional Random Wavelet Transform (FRWT). The difference between the ordinary wavelet transform and the FRWT is that the latter retains all the benefits and features of the wavelet transform while adding randomness and a fractional order parameter. As a consequence of the fractional order and the randomness, the algorithm gains robustness and additional layers of security for steganography. Applying the algorithm yields a stego image that carries both the cover image and the concealed image. Despite the superposition of the two images, no perceptible degradation in image quality is observed. Through this steganographic process, we aim to improve both security and capacity. After running the algorithm, metrics such as the Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR) are calculated. The proposed algorithm increases robustness and imperceptibility and, in comparison with previously existing algorithms, can also handle modifications such as scaling, translation, and rotation. The results demonstrate that the suggested algorithm is indeed effective.
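
The FRWT itself is not provided by common libraries, so the sketch below uses an ordinary DWT (PyWavelets) as a stand-in simply to illustrate the embed-in-transform-domain and measure-MSE-and-PSNR workflow described above; the embedding strength, wavelet, and image sizes are assumptions of this sketch, not the paper's.

```python
# Illustrative sketch only: additive embedding of a secret image into the
# diagonal detail band of a cover image, with non-blind extraction.
import numpy as np
import pywt

def embed(cover, secret, wavelet="haar", alpha=0.05):
    """Hide `secret` (same shape as the detail band) inside the cover image."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), wavelet)
    stego_cD = cD + alpha * secret                        # weighted additive embedding
    return pywt.idwt2((cA, (cH, cV, stego_cD)), wavelet)

def extract(stego, cover, wavelet="haar", alpha=0.05):
    """Recover the hidden image given the original cover (non-blind extraction)."""
    _, (_, _, cD_stego) = pywt.dwt2(stego.astype(float), wavelet)
    _, (_, _, cD_cover) = pywt.dwt2(cover.astype(float), wavelet)
    return (cD_stego - cD_cover) / alpha

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, (128, 128)).astype(float)
secret = rng.integers(0, 256, (64, 64)).astype(float)    # half-size: fits the detail band

stego = embed(cover, secret)
mse = np.mean((cover - stego) ** 2)
print(f"MSE = {mse:.3f}, PSNR = {10 * np.log10(255.0 ** 2 / mse):.2f} dB")
```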


2019, Vol 81 (6)
Author(s):
A. Nazifah Abdullah
S. H. K. Hamadi
M. Isa
B. Ismail
A. N. Nanyan
...

Partial discharge (PD) measurement is essential for detecting and diagnosing the existence of PD. However, this measurement suffers from noise disturbance in industrial environments. Thus, a PD analysis system using a discrete wavelet transform (DWT) denoising technique implemented in Laboratory Virtual Instrument Engineering Workbench (LabVIEW) software is proposed to separate noise from the measured PD signal. In this work, the performance of the denoising process is analyzed based on the calculated mean square error (MSE) and signal-to-noise ratio (SNR). Results are obtained for the Haar, Daubechies, Coiflets, Symlets, and Biorthogonal mother wavelet families at different decomposition levels. From the SNR results, all types of mother wavelet are suitable for the denoising technique, since their SNR values are large and positive. Further study therefore focused on the MSE and found that the db14, coif3, sym5 and bior5.5 wavelets, which give the lowest MSE values, are good choices for the denoising technique. Of these, the bior5.5 wavelet is proposed as the optimum mother wavelet because it most consistently produces the minimum MSE, followed by db14.
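
A 1-D Python sketch (not the LabVIEW implementation) of the comparison described above: each candidate mother wavelet denoises a simulated PD-like pulse train and is scored by MSE and SNR; the signal model, threshold rule, and decomposition level are illustrative assumptions.

```python
# Sketch: DWT denoising of a simulated PD-like signal for several mother wavelets.
import numpy as np
import pywt

def dwt_denoise(signal, wavelet, level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 4096)
clean = np.exp(-((t - 0.3) ** 2) / 1e-5) - 0.6 * np.exp(-((t - 0.7) ** 2) / 1e-5)  # two PD-like pulses
noisy = clean + rng.normal(0, 0.1, t.size)

for w in ["haar", "db14", "coif3", "sym5", "bior5.5"]:
    den = dwt_denoise(noisy, w)
    mse = np.mean((clean - den) ** 2)
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - den) ** 2))
    print(f"{w:8s}  MSE = {mse:.5f}  SNR = {snr:.2f} dB")
```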


2012, Vol 226-228, pp. 335-339
Author(s):
Xiang Bi An
Lei Chen
Cheng Fa Chen
Yun Chuan Bai
Chao Yang

When wavelet transforms are used to denoise the running-in vibration signal of a remanufactured engine, different wavelet bases produce noticeably different denoising results, so the choice of wavelet basis affects the outcome of the de-noising process. Based on an analysis of the characteristics of commonly used wavelet bases, this paper compares the impact of the wavelet basis on signal denoising. Using the SNR (Signal-to-Noise Ratio) and RMSE (Root Mean Square Error) as criteria, and taking into account the characteristics of the running-in vibration signal of the remanufactured engine, the wavelet basis coif4 is selected as the optimal basis. It gives a comparatively better denoising effect and improves the SNR and resolution, so it can be used in actual practice.
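
A hedged sketch of the selection procedure: candidate wavelet bases, including coif4, denoise a synthetic vibration-like signal and are ranked by SNR and RMSE; the signal model, candidate list, and thresholding rule are assumptions of this sketch, not the paper's.

```python
# Sketch: rank candidate wavelet bases for a vibration-like signal by SNR and RMSE.
import numpy as np
import pywt

def denoise(sig, wavelet, level=5):
    c = pywt.wavedec(sig, wavelet, level=level)
    thr = np.median(np.abs(c[-1])) / 0.6745 * np.sqrt(2 * np.log(sig.size))
    c[1:] = [pywt.threshold(d, thr, mode="soft") for d in c[1:]]
    return pywt.waverec(c, wavelet)[: sig.size]

rng = np.random.default_rng(4)
t = np.arange(8192) / 8192.0
clean = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 180 * t)   # vibration stand-in
noisy = clean + rng.normal(0, 0.3, t.size)

scores = {}
for w in ["db4", "sym8", "coif4", "bior3.5"]:
    den = denoise(noisy, w)
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - den) ** 2))
    rmse = np.sqrt(np.mean((clean - den) ** 2))
    scores[w] = snr
    print(f"{w:8s}  SNR = {snr:5.2f} dB   RMSE = {rmse:.4f}")

print("selected basis:", max(scores, key=scores.get))      # highest SNR wins
```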


Author(s):  
A.V. Akhmametieva
A.A. Baraniuk

Copyright protection of digital content is a pressing problem of the 21st century. Misuse of multimedia content is very common, and the number of such cases grows every day. One form of copyright protection is the embedding of a digital watermark (DW) into the content. In this paper a new method of embedding a digital watermark into an image is proposed, using the discrete cosine transform, the lifting wavelet transform (LWT) with the Daubechies-8 mother wavelet, and singular value decomposition. Embedding is performed into the first singular value of the low-frequency wavelet subband. As the digital watermark, a grayscale image normalized to the range from zero to ten is used to provide a high peak signal-to-noise ratio (PSNR). The developed method was analyzed by testing the embedding and detection procedure for resistance to various types of attacks, namely noise overlays (Gaussian and impulse noise, salt-and-pepper), unsharp and median filtering, and compression (with quality factors for the full container from 60 to 100). The testing established that the method is quite resistant to all of the attacks except unsharp filtering, for which the resulting performance is not satisfactory. The method shows good peak signal-to-noise ratio results: the average PSNR value is 50.5 dB, with a high similarity between the embedded and the extracted DW, from 77% to 97.6%, when the full container is saved in a lossless format, and up to 53.05 dB and 91.96% when the image is saved in JPEG format.
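
A hedged sketch of the LL-band SVD embedding idea: an ordinary DWT with the Daubechies-8 wavelet stands in for the paper's lifting implementation, the DCT stage is omitted, and each block of the LL band carries one watermark pixel in its first singular value; the block size, embedding strength, and image sizes are assumptions of this sketch.

```python
# Sketch: embed a small grayscale watermark (values 0..10) into the first singular
# value of each block of the low-frequency (LL) wavelet subband.
import numpy as np
import pywt

def embed(cover, wm, wavelet="db8", block=4, alpha=1.0):
    cA, details = pywt.dwt2(cover.astype(float), wavelet)
    h, w = wm.shape
    for i in range(h):
        for j in range(w):
            blk = cA[i*block:(i+1)*block, j*block:(j+1)*block]
            U, S, Vt = np.linalg.svd(blk, full_matrices=False)
            S[0] += alpha * wm[i, j]                      # modify the first singular value
            cA[i*block:(i+1)*block, j*block:(j+1)*block] = (U * S) @ Vt
    return pywt.idwt2((cA, details), wavelet)

rng = np.random.default_rng(5)
cover = rng.integers(0, 256, (256, 256)).astype(float)
wm = rng.random((16, 16)) * 10.0                          # grayscale watermark scaled to 0..10

marked = embed(cover, wm)
mse = np.mean((cover - marked) ** 2)
print(f"PSNR after embedding = {10 * np.log10(255.0 ** 2 / mse):.2f} dB")
```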


Author(s):  
Sasirekha K.
Thangavel K.

For a long time, image enhancement techniques have been widely used to improve image quality in many image processing applications. Recently, deep learning models have been applied to image enhancement problems with great success. In the biometric domain, fingerprints and faces play a vital role in authenticating a person correctly; hence, enhancing these images significantly improves the recognition rate. In this chapter, the undecimated wavelet transform (UDWT) and a deep autoencoder are hybridized to enhance image quality. Initially, the images are decomposed with a Daubechies wavelet filter. Then, a deep autoencoder is trained to minimize the error between the reconstructed and actual input. Experiments have been conducted on real-time fingerprint and face images collected from 150 subjects, each with 10 orientations. The signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), mean square error (MSE), and root mean square error (RMSE) have been computed and compared. It was observed that the proposed model produces biometric images of high quality.
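
A rough sketch of the hybrid idea (not the chapter's network): the image is decomposed with an undecimated (stationary) Daubechies wavelet transform via PyWavelets, and a small dense autoencoder (PyTorch) is trained to reconstruct clean approximation coefficients from noisy ones; the layer sizes, patching, optimiser, and epoch count are illustrative assumptions.

```python
# Hedged sketch of a UDWT + autoencoder hybrid on a synthetic stand-in image.
import numpy as np
import pywt
import torch
from torch import nn

rng = np.random.default_rng(6)
clean = np.kron(rng.random((16, 16)), np.ones((8, 8))) * 255    # 128x128 stand-in image
noisy = clean + rng.normal(0, 15, clean.shape)

# Undecimated (stationary) decomposition with a Daubechies filter: subbands keep full size.
(cA_c, _), = pywt.swt2(clean, "db2", level=1)
(cA_n, _), = pywt.swt2(noisy, "db2", level=1)

def patches(a, p=8):                                            # non-overlapping p x p patches
    h, w = a.shape
    return a.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p)

x = torch.tensor(patches(cA_n), dtype=torch.float32) / 255.0    # noisy approximation patches
y = torch.tensor(patches(cA_c), dtype=torch.float32) / 255.0    # clean targets

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU(),
                      nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 64))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                                          # minimise reconstruction error

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final reconstruction MSE:", loss.item())
```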


Author(s):  
Pushpa Koranga
Garima Singh
Dikendra Verma
Shshank Chaube
Anuj Kumar
...  

Images often contain noise arising from several factors, such as device faults or environmental conditions. Noise is essentially undesired information that degrades the quality of the picture. Image denoising methods are therefore used to remove noise from the degraded image and thereby improve its quality. In this paper, image denoising is performed by wavelet transform using the Visu (universal) thresholding technique for different wavelet families. The PSNR (Peak Signal-to-Noise Ratio) and RMSE (Root Mean Square Error) values are also calculated for the different wavelet families.
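
A short PyWavelets sketch of Visu (universal-threshold) denoising across a few wavelet families, reporting RMSE and PSNR; the test image, noise level, and family list are illustrative assumptions.

```python
# Sketch: VisuShrink-style denoising (universal threshold, soft thresholding)
# for several wavelet families, scored by RMSE and PSNR.
import numpy as np
import pywt

def visu_denoise(noisy, wavelet, level=2):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # noise estimate from the HH band
    thr = sigma * np.sqrt(2 * np.log(noisy.size))             # Visu (universal) threshold
    den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in d)
                         for d in coeffs[1:]]
    return pywt.waverec2(den, wavelet)

rng = np.random.default_rng(7)
clean = np.kron(rng.random((32, 32)) * 255, np.ones((8, 8)))   # 256x256 test image
noisy = clean + rng.normal(0, 20, clean.shape)

for family in ["haar", "db4", "sym4", "coif2", "bior2.2"]:
    out = visu_denoise(noisy, family)
    rmse = np.sqrt(np.mean((clean - out) ** 2))
    print(f"{family:8s}  RMSE = {rmse:6.2f}  PSNR = {20 * np.log10(255.0 / rmse):.2f} dB")
```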


2021, pp. 198-206
Author(s):
Sami Hasan
Shereen S. Jumaa

The main aims of using edge detection techniques in image processing are to reduce the number of features and to find the edges of image content. In this paper, classical methods (Canny, Sobel, Roberts, and Prewitt) are compared with a fuzzy logic technique for detecting the edges of different samples of image content and patterns. These methods are tested on images corrupted with different types of noise, such as Gaussian and salt-and-pepper noise. The performance indices are the mean square error and the peak signal-to-noise ratio (MSE and PSNR). The experimental results show that the proposed fuzzy rules and membership function provide better results for both noisy and noise-free images.
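
A small OpenCV sketch of the classical baselines only (the fuzzy-rule detector is not reproduced): Canny and Sobel edge maps of a noise-corrupted image are scored by MSE and PSNR against the clean image's edge map; the synthetic image, noise level, and thresholds are assumptions of this sketch.

```python
# Sketch: classical edge detection on a noisy image, evaluated with MSE and PSNR.
import cv2
import numpy as np

rng = np.random.default_rng(8)
clean = np.zeros((128, 128), dtype=np.uint8)
cv2.rectangle(clean, (32, 32), (96, 96), 255, -1)               # simple synthetic content
noisy = np.clip(clean.astype(float) + rng.normal(0, 20, clean.shape), 0, 255).astype(np.uint8)

reference = cv2.Canny(clean, 100, 200)                          # edge map of the clean image

def score(edges):
    mse = np.mean((reference.astype(float) - edges.astype(float)) ** 2)
    return mse, 10 * np.log10(255.0 ** 2 / mse)

canny_edges = cv2.Canny(noisy, 100, 200)
grad = cv2.Sobel(noisy, cv2.CV_64F, 1, 0, ksize=3) ** 2 + cv2.Sobel(noisy, cv2.CV_64F, 0, 1, ksize=3) ** 2
sobel_edges = (np.sqrt(grad) > 100).astype(np.uint8) * 255      # crude magnitude threshold

for name, e in [("Canny", canny_edges), ("Sobel", sobel_edges)]:
    mse, psnr = score(e)
    print(f"{name}: MSE = {mse:.1f}, PSNR = {psnr:.2f} dB")
```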


2020, Vol 55 (1)
Author(s):
Nassir H. Salman
S. Rafea

Image compression is a type of data compression applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms can take advantage of visual sensitivity and the statistical properties of image data to deliver superior results compared with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded with a lossless entropy-coding algorithm, arithmetic coding. Pixel values that occur more frequently are coded with fewer bits than less frequent values, via sub-intervals within the range 0 to 1. Finally, the stream of compressed tables is reassembled for decompression (image restoration). The results showed a compression gain of 10-12% and less time consumption when this coding is applied to each block rather than to the entire image. To improve the compression ratio, a second approach based on the YCbCr colour model was used. In this approach, images were decomposed into four sub-bands (low-low, high-low, low-high, and high-high) by the discrete wavelet transform. Then, the low-low sub-band was further decomposed into low- and high-frequency components via a second discrete wavelet transform. Next, these components were quantized using scalar quantization and then scanned in a zigzag order. The resulting compression ratios are 15.1 to 27.5 for magnetic resonance images, with varying peak signal-to-noise ratio and mean square error; 25 to 43 for X-ray images; 32 to 46 for computed tomography scan images; and 19 to 36 for magnetic resonance brain images. The second approach thus provides an improved compression scheme compared with the first in terms of compression ratio, peak signal-to-noise ratio, and mean square error.
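
A hedged sketch of the transform, quantization, and zigzag-scan steps of the second approach (the arithmetic-coding back end and the YCbCr conversion are not reproduced); the quantization step size and the test image are illustrative assumptions.

```python
# Sketch: one-level DWT, scalar quantization of the LL band, and a zigzag read-out
# of one block, as would precede entropy coding.
import numpy as np
import pywt

def zigzag(block):
    """Return the elements of a square block in JPEG-style zigzag order."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])

rng = np.random.default_rng(9)
image = np.kron(rng.random((32, 32)) * 255, np.ones((8, 8)))     # 256x256 stand-in image

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")                      # one-level DWT
step = 16.0
q = np.round(cA / step).astype(np.int32)                         # scalar quantization of LL

print(zigzag(q[:8, :8]))                                         # first 8x8 block in zigzag order
# The quantized, zigzag-ordered symbols would then be fed to the arithmetic coder;
# decompression reverses the steps: dequantize (q * step) and apply the inverse DWT.
```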


Author(s):  
Meryem Felja
Asmae Bencheqroune
Mohammed Karim
Ghita Bennis

The electroencephalogram (EEG) is an electrical signal reflecting the neuronal activity of the brain and is used for the diagnosis of certain cerebral pathologies. However, it becomes more difficult to identify and analyze when it is corrupted by artifacts of non-cerebral origin, such as eye movements and cardiac activity; it is therefore essential to remove these parasitic signals. The literature offers various techniques for removing artifacts. This paper proposes and discusses a new EEG de-noising technique based on a combination of wavelet transforms and conventional filters. The results of the proposed method are evaluated using three common criteria: the signal-to-noise ratio (SNR), the mean square error (MSE) and the cross-correlation function (CCF). The experimental results demonstrate that the proposed approach can be an effective tool for removing artifacts without suppressing any signal components.
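
An illustrative Python sketch of the general wavelet-plus-conventional-filter combination (not the paper's exact pipeline): wavelet soft-thresholding followed by a zero-phase Butterworth low-pass filter on a synthetic EEG trace contaminated by a muscle-like high-frequency burst, scored with the paper's three criteria; the sampling rate, cut-off frequency, wavelet, and signal model are assumptions of this sketch.

```python
# Sketch: two-stage EEG cleaning (wavelet thresholding + conventional low-pass filter),
# evaluated by SNR, MSE and the cross-correlation coefficient.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

fs = 256.0                                                  # assumed sampling rate (Hz)
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(10)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t)            # clean stand-in EEG
artifact = 2.0 * np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 4.0) ** 2) / 0.1)  # muscle-like burst
noisy = eeg + artifact + rng.normal(0, 0.2, t.size)

# Stage 1: wavelet soft-thresholding of the detail coefficients.
coeffs = pywt.wavedec(noisy, "db4", level=6)
thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(noisy.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
stage1 = pywt.waverec(coeffs, "db4")[: noisy.size]

# Stage 2: conventional zero-phase low-pass Butterworth filter to suppress
# the remaining high-frequency (muscle-band) activity.
b, a = butter(4, 40.0 / (fs / 2), btype="low")
cleaned = filtfilt(b, a, stage1)

# The paper's three evaluation criteria.
mse = np.mean((eeg - cleaned) ** 2)
snr = 10 * np.log10(np.sum(eeg ** 2) / np.sum((eeg - cleaned) ** 2))
ccf = np.corrcoef(eeg, cleaned)[0, 1]
print(f"SNR = {snr:.2f} dB, MSE = {mse:.4f}, CCF = {ccf:.3f}")
```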

