Using Visual Metrics to Analyze Lossy Compression of Noisy Images

2021 ◽  
pp. 83-91
Author(s):  
Богдан Віталійович Коваленко ◽  
Володимир Васильович Лукін

The subject of the article is the effectiveness of lossy compression of noisy images with the BPG encoder, using visual metrics as the quality criterion. The aim is to confirm the existence of an operating point for images of varying complexity when visual quality metrics are used. The objectives of the paper are the following: to analyze a set of images of varying complexity distorted by additive white Gaussian noise with different variance values; to build and analyze the corresponding dependences of visual image quality metrics; and to provide recommendations on choosing the compression parameter in the vicinity of the operating point. The methods used are methods of mathematical statistics and methods of digital image processing. The following results were obtained. Dependences of visual quality metrics were constructed for images of varying complexity corrupted by noise with variance equal to 64, 100, and 196. These dependences show that an operating point is present for images of medium and low complexity for both the PSNR-HVS-M and MS-SSIM metrics. Recommendations are given for choosing the compression parameter based on the obtained dependences. Conclusions. The scientific novelty of the obtained results is the following: for a comparatively new compression method, Better Portable Graphics (BPG), the existence of an operating point for visual quality metrics has been demonstrated; previously, such studies were conducted only for the PSNR metric. The test images were distorted by additive white Gaussian noise and then compressed using the methods implemented in the BPG encoder. The images were compressed with different values of the Q parameter, which made it possible to estimate the compressed image quality at different compression ratios. The resulting data made it possible to visualize the dependence of the visual image quality metrics on the Q parameter. Based on the obtained dependences, it can be concluded that the operating point is present for both the PSNR-HVS-M and MS-SSIM metrics for images of medium and low complexity; it is especially noticeable at large noise variance values. As a recommendation, a formula is presented for calculating the value of the compression control parameter (for the BPG encoder, the Q parameter) for images distorted by noise with variance varying within a wide range, on the assumption that the noise variance is known a priori or estimated with high accuracy.
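
As a minimal sketch of the experimental loop described above, the fragment below adds AWGN to a test image, sweeps the BPG Q parameter, and locates the peak of the quality-versus-Q curve. It assumes the reference BPG tools (bpgenc/bpgdec) are installed and on PATH; plain PSNR from scikit-image stands in for PSNR-HVS-M / MS-SSIM, for which no standard Python implementation is assumed here, and the file names are illustrative.

```python
import os
import subprocess
import tempfile

import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio

def bpg_roundtrip(png_path: str, q: int) -> np.ndarray:
    """Compress a PNG with bpgenc at quantizer Q, decode it back, return the image."""
    with tempfile.TemporaryDirectory() as tmp:
        bpg_file = os.path.join(tmp, "img.bpg")
        out_file = os.path.join(tmp, "img.png")
        subprocess.run(["bpgenc", "-q", str(q), "-o", bpg_file, png_path], check=True)
        subprocess.run(["bpgdec", "-o", out_file, bpg_file], check=True)
        return io.imread(out_file)

clean = io.imread("test_image.png")                            # noise-free reference
noisy = clean.astype(np.float64) + np.random.normal(0.0, np.sqrt(100.0), clean.shape)
io.imsave("noisy.png", np.clip(noisy, 0, 255).astype(np.uint8))  # AWGN, variance 100

# Sweep Q and score each decoded image against the CLEAN reference;
# the operating point is the Q at which this curve peaks.
scores = {q: peak_signal_noise_ratio(clean, bpg_roundtrip("noisy.png", q))
          for q in range(20, 46)}
q_op = max(scores, key=scores.get)
print(f"operating point: Q = {q_op}, PSNR = {scores[q_op]:.2f} dB")
```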

Author(s):  
Gareth D Hastings ◽  
Raymond A Applegate ◽  
Alexander W Schill ◽  
Chuan Hu ◽  
Daniel R Coates ◽  
...  

2020 ◽  
Vol 20 (7) ◽  
pp. 20 ◽  
Author(s):  
Gareth D. Hastings ◽  
Jason D. Marsack ◽  
Larry N. Thibos ◽  
Raymond A. Applegate

2021 ◽  
Author(s):  
Nima Nikvand

In this thesis, the problem of data denoising is studied, and two new denoising approaches are proposed. Using statistical properties of the additive noise, the methods provide adaptive, data-dependent soft thresholding techniques to remove the additive noise. The proposed methods, Point-wise Noise Invalidating Soft Thresholding (PNIST) and Accumulative Noise Invalidation Soft Thresholding (ANIST), are based on Noise Invalidation. The invalidation exploits basic properties of the additive noise in order to remove the noise effects as much as possible. There are similarities and differences between ANIST and PNIST: while PNIST performs better in the case of additive white Gaussian noise, ANIST can be used with both Gaussian and non-Gaussian additive noise. As part of the data denoising technique, a new noise variance estimator is also proposed. The thresholds proposed by the NIST approaches are comparable to those of shrinkage methods, and our simulation results indicate that the new methods can outperform existing approaches in various applications. We also explore image denoising as one of the main applications of data denoising and extend the proposed approaches to two-dimensional data. Simulations show that the proposed methods outperform common shrinkage methods and are comparable to the well-known BayesShrink method in terms of mean square error and visual quality.
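
For orientation, here is a minimal sketch of the soft-thresholding operation that PNIST/ANIST build on, not the noise-invalidation thresholds themselves (those are defined in the thesis). For illustration it uses the classic universal threshold sigma*sqrt(2*ln(n)) with a MAD-based noise estimate, via PyWavelets.

```python
import numpy as np
import pywt

def soft_threshold_denoise(signal: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    coeffs = pywt.wavedec(signal, wavelet)
    # Robust noise std estimate from the finest detail coefficients (MAD / 0.6745)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: signal.size]

t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2.0 * np.pi * 5.0 * t)
noisy = clean + np.random.normal(0.0, 0.3, t.size)   # additive white Gaussian noise
print("MSE:", np.mean((soft_threshold_denoise(noisy) - clean) ** 2))
```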


2012 ◽  
Vol 2 (2) ◽  
pp. 53-58
Author(s):  
Shaikh Enayet Ullah ◽  
Md. Golam Rashed ◽  
Most. Farjana Sharmin

In this paper, we present a comprehensive BER simulation study of a quasi-orthogonal space-time block encoded (QO-STBC) multiple-input single-output (MISO) system. The communication system under investigation incorporates four digital modulations (QPSK, QAM, 16PSK, and 16QAM) over additive white Gaussian noise (AWGN) and Rayleigh fading channels, with three transmit antennas and one receive antenna. In its FEC channel coding section, three schemes, namely cyclic, Reed-Solomon, and ½-rate convolutional encoding, are used. With low-complexity ML-decoding-based channel estimation and RSA cryptographic encoding/decoding algorithms, the simulations of encrypted text message transmission show that the system with QAM digital modulation and ½-rate convolutional encoding is highly effective at combating the inherent interference of Rayleigh fading and AWGN channels. The study also shows that the retrieval performance of the communication system degrades as the signal-to-noise ratio (SNR) decreases and as the modulation order increases.
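
The full QO-STBC/MISO chain with FEC and RSA is beyond a short sketch; the fragment below shows only the core of such a BER simulation for the simplest link in the study, Gray-mapped QPSK over AWGN with ML (minimum-distance) detection. All parameters are illustrative.

```python
import numpy as np

def qpsk_ber(snr_db: float, n_bits: int = 200_000) -> float:
    """Simulated BER of Gray-mapped QPSK over AWGN at the given Es/N0 (dB)."""
    bits = np.random.randint(0, 2, n_bits)
    # Map bit pairs to unit-energy symbols: 0 -> +1, 1 -> -1 on each axis
    symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2.0)
    noise_std = np.sqrt(10.0 ** (-snr_db / 10.0) / 2.0)  # per-dimension std, Es = 1
    rx = symbols + noise_std * (np.random.randn(symbols.size)
                                + 1j * np.random.randn(symbols.size))
    rx_bits = np.empty(n_bits, dtype=int)                # ML detection = sign decisions
    rx_bits[0::2] = (rx.real < 0).astype(int)
    rx_bits[1::2] = (rx.imag < 0).astype(int)
    return float(np.mean(rx_bits != bits))

for snr in (0, 4, 8):
    print(f"Es/N0 = {snr} dB -> BER = {qpsk_ber(snr):.4f}")
```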


Author(s):  
Amin Zribi ◽  
Sonia Zaibi ◽  
Ramesh Pyndiah ◽  
Ammar Bouallègue

Motivated by recent results in Joint Source/Channel (JSC) coding and decoding, this paper addresses the problem of soft-input decoding of Arithmetic Codes (AC). A new length-constrained scheme for JSC decoding of these codes is proposed based on the Maximum A Posteriori (MAP) sequence estimation criterion. The new decoder, called the Chase-like arithmetic decoder, is assumed to know the lengths of the source symbol sequence and of the compressed bit-stream. First, Packet Error Rates (PER) for transmission over an Additive White Gaussian Noise (AWGN) channel are investigated. Compared to classical arithmetic decoding, the Chase-like decoder shows significant improvements. Results are provided for Chase-like decoding applied to image compression and transmission over an AWGN channel; both lossy and lossless image compression schemes were studied. As a final application, the serial concatenation of an AC with a convolutional code is considered. Iterative decoding between the two decoders shows substantial performance improvement over the iterations.
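
As a rough illustration of the Chase principle underlying the decoder: flip the p least reliable bits of the hard decision to generate test patterns, discard candidates that violate the known source/bit-stream lengths, and keep the survivor with the best correlation metric (MAP-equivalent for equiprobable sequences on an AWGN channel). `arithmetic_decode` below is a hypothetical placeholder for the length-constrained AC decoder, which is not reproduced here.

```python
import itertools

import numpy as np

def chase_decode(soft, p, arithmetic_decode):
    """Return the best length-valid candidate among 2**p Chase test patterns."""
    soft = np.asarray(soft, dtype=float)
    hard = (soft < 0).astype(int)                # BPSK hard decision: +1 -> bit 0
    weak = np.argsort(np.abs(soft))[:p]          # p least reliable positions
    best, best_metric = None, -np.inf
    for flips in itertools.product((0, 1), repeat=p):
        cand = hard.copy()
        cand[weak] ^= np.array(flips)
        if arithmetic_decode(cand) is None:      # violates known lengths: discard
            continue
        metric = np.sum(soft * (1 - 2 * cand))   # correlation with soft values
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```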


2020 ◽  
Vol 2020 (10) ◽  
pp. 137-1-137-6 ◽  
Author(s):  
Mykola Ponomarenko ◽  
Oleg Ieremeiev ◽  
Vladimir Lukin ◽  
Karen Egiazarian

The traditional approach to collecting mean opinion score (MOS) values for the evaluation of full-reference image quality metrics has two serious drawbacks. The first drawback is the nonlinearity of MOS, only partially compensated by the use of rank-order correlation coefficients in further analysis. The second drawback is the limitation on the number of distortion types and distortion levels in an image database imposed by the maximum time allowed for an experiment. One of the largest databases used for this purpose, TID2013, has almost reached these limits, which makes extending TID2013 within this approach practically unfeasible. In this paper, a novel methodology for collecting MOS values is proposed, with the possibility of increasing the size of a database without bound by adding new types of distortions. In the proposed methodology, MOS values are collected for pairs of distortions, one of them being a signal-dependent Gaussian noise. A technique for effective linearization and normalization of MOS is described. Extensive experiments on the linearization of MOS values to extend the TID2013 database are carried out.
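
A small sketch of the reference distortion used in the pairwise methodology: signal-dependent Gaussian noise whose per-pixel variance grows with intensity. The linear variance model and the coefficient k below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def add_signal_dependent_noise(img: np.ndarray, k: float = 0.5) -> np.ndarray:
    """Add zero-mean Gaussian noise whose variance is k times the pixel intensity."""
    img = img.astype(np.float64)
    sigma = np.sqrt(k * img)                     # per-pixel standard deviation
    noisy = img + sigma * np.random.randn(*img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```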


Author(s):  
Андрей Сергеевич Рубель ◽  
Владимир Васильевич Лукин

Images are subject to noise during acquisition, transmission, and processing. Image denoising is highly desirable, not only to provide better visual quality, but also to improve the performance of subsequent operations such as compression, segmentation, classification, object detection, and recognition. Over the past decades, a large number of image denoising algorithms have been developed, ranging from simple linear methods to complex methods based on similar-block search and deep convolutional neural networks. However, most existing denoising techniques tend to oversmooth image edges, fine details, and textures. Thus, there are cases when noise reduction leads to a loss of image features and filtering does not produce better visual quality. Accordingly, it is very important to evaluate the denoising result and hence to decide whether denoising is expedient. Although image denoising has been one of the most active research areas, only a little work has been dedicated to visual quality evaluation for denoised images. There are many approaches and metrics to characterize image quality, but the adequacy of these metrics is questionable. Existing image quality metrics, especially no-reference ones, have not been thoroughly studied for image denoising. When visual quality metrics are used, it is usually supposed that the larger the improvement in a given metric, the better the visual quality of the denoised image. However, there are situations when denoising does not result in visual quality enhancement, especially for texture images. Thus, it would be desirable to predict the human subjective evaluation of a denoised image; this information clarifies when denoising is expedient. The purpose of this paper is to analyze denoising expedience using no-reference (NR) image quality metrics. In addition, this work considers possible ways to predict the human subjective evaluation of denoised images based on several input parameters. In more detail, two denoising techniques, namely the standard sliding-window DCT filter and the BM3D filter, are considered. Using a specialized database of test images, SubjectiveIQA, a performance evaluation of existing state-of-the-art objective no-reference quality metrics for denoised images is carried out.
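
A minimal sketch of the sliding-window DCT filter mentioned above, in its common formulation: hard-threshold the coefficients of each overlapping 8x8 block at roughly 2.6*sigma and average the overlapping reconstructions. BM3D, the NR metrics, and the SubjectiveIQA evaluation are not reproduced; the threshold factor is a conventional choice, not taken from this paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(noisy: np.ndarray, sigma: float, bs: int = 8) -> np.ndarray:
    """Sliding-window DCT filter with hard thresholding and overlap averaging."""
    noisy = noisy.astype(np.float64)
    h, w = noisy.shape
    acc = np.zeros_like(noisy)
    cnt = np.zeros_like(noisy)
    thr = 2.6 * sigma
    for i in range(h - bs + 1):
        for j in range(w - bs + 1):
            block = dctn(noisy[i:i + bs, j:j + bs], norm="ortho")
            dc = block[0, 0]                     # keep the DC (mean) coefficient
            block[np.abs(block) < thr] = 0.0     # hard thresholding
            block[0, 0] = dc
            acc[i:i + bs, j:j + bs] += idctn(block, norm="ortho")
            cnt[i:i + bs, j:j + bs] += 1.0
    return acc / cnt
```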


