Image Compression Approach Using Segmentation and Total Variation Regularization

2021 ◽  
Vol 15 ◽  
pp. 43-47
Author(s):  
Ahmad Shahin ◽  
Walid Moudani ◽  
Fadi Chakik

In this paper we present a hybrid model for image compression based on segmentation and total variation regularization. The main motivation behind our approach is to offer a decoded image with immediate access to objects/features of interest. We target a high-quality decoded image that is useful on smart devices, for analysis purposes, and for multimedia content-based description standards. The image is approximated as a set of uniform regions: the technique assigns well-defined members to homogeneous regions to achieve image segmentation, with adaptive fuzzy c-means (AFcM) clustering guiding the grouping of image data. A second coding stage applies entropy coding to remove the remaining redundancy across the whole image. In the decompression phase the reverse process is applied, and the decoded image suffers from missing details due to the coarse segmentation. For this reason, we apply total variation (TV) regularization, specifically the Rudin-Osher-Fatemi (ROF) model, to enhance the quality of the decoded image. Our experimental results show that ROF can increase the PSNR and hence offer better quality for a set of benchmark grayscale images.
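The ROF post-processing step can be illustrated with a minimal sketch: an explicit gradient-descent solver for a smoothed ROF functional. This is a simplification of solvers used in practice, and the step size, smoothing constant, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def tv_denoise(img, lam=0.2, step=0.15, n_iter=200, eps=1e-6):
    """Gradient-descent sketch of the ROF model:
    minimize 0.5 * ||u - img||^2 + lam * TV(u),
    with TV smoothed by eps so the gradient is well defined."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences (periodic boundary for simplicity)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / norm, uy / norm        # normalized gradient field
        # divergence via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descend on data term minus lam times divergence (TV gradient)
        u -= step * ((u - img) - lam * div)
    return u
```

Applied to a coarsely segmented decode, the data term keeps the result close to the input while the TV term suppresses blocky oscillations without blurring edges.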

Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. R311-R320 ◽  
Author(s):  
Ali Gholami ◽  
Ehsan Zabihi Naeini

Given the ill-conditioned nature of Dix inversion, the resulting interval-velocity field is often unrealistic, noisy, and highly dependent on the quality of the provided root-mean-square (RMS) velocities. Because classic least-squares regularization techniques, e.g., various forms of Tikhonov regularization, offer only suboptimal stability, we formulated Dix inversion as a new constrained optimization problem. This allows prior knowledge to be incorporated as soft and/or hard bounds for the optimization, effectively treating it as a denoising problem. The problem is solved by bound-constrained total variation (TV) regularization. TV regularization has the advantage of recovering discontinuities in the model, but it often comes with large memory and compute requirements. We therefore developed a simple, memory-efficient algorithm based on an iterative refinement strategy. The quality of the new algorithm is cross-examined against strategies currently used in practice; overall, the proposed method outperforms classic Dix inversion on both synthetic and real data examples.
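For context, the classic (unregularized) Dix conversion that this work stabilizes computes interval velocities as v_int,n^2 = (t_n v_n^2 - t_{n-1} v_{n-1}^2) / (t_n - t_{n-1}); its direct differencing of noisy RMS velocities is exactly what makes the problem ill-conditioned. A minimal sketch (function name and units are illustrative):

```python
import numpy as np

def dix_interval_velocity(t, v_rms):
    """Classic Dix conversion from RMS/stacking velocities to
    interval velocities.
    t     : two-way traveltimes to layer bottoms (s), increasing
    v_rms : RMS velocities at those times (m/s)"""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    # Dix formula: differences of t * v_rms^2 over layer thickness in time
    num = t[1:] * v[1:]**2 - t[:-1] * v[:-1]**2
    return np.sqrt(num / (t[1:] - t[:-1]))
```

Because the formula differentiates the data, small errors in `v_rms` are amplified in the output, which motivates the bound-constrained TV formulation described above.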


2021 ◽  
Vol 13 (13) ◽  
pp. 2514
Author(s):  
Qianwei Dai ◽  
Hao Zhang ◽  
Bin Zhang

The chaos oscillation particle swarm optimization (COPSO) algorithm is prone to being trapped in local optima when dealing with certain complex models in ground-penetrating radar (GPR) data inversion, because it inherently suffers from premature convergence, high computational costs, and extremely slow convergence, especially in the middle and later stages of iterative inversion. Considering that bilateral connections between different particle positions can improve both search efficiency and convergence performance, we first develop a fast single-trace-based approach to construct an initial model for 2-D PSO inversion and then propose a TV-regularization-based improved PSO (TVIPSO) algorithm that employs total variation (TV) regularization as a constraint to adaptively update particle positions. By adding new velocity-variation and optimal-step-size matrices, the search range of the random particles in the solution space can be significantly reduced, avoiding blindness in the search process. By introducing constraint-oriented regularization that allows the optimization search to move out of inaccurate regions, premature convergence and blurring can be mitigated, further guaranteeing inversion accuracy and efficiency. We report three inversion experiments, involving multilayered and fluctuating-terrain models and a typical complicated inner-interface model, to demonstrate the performance of the proposed algorithm. For the fluctuating-terrain model, the fitness error (MAE) of the TVIPSO algorithm is reduced from 2.3715 to 1.0921 compared with the COPSO algorithm, while for the complicated inner-interface model the fitness error (MARE) is reduced from 1.9539 to 1.5674.
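For orientation, here is a minimal sketch of standard particle swarm optimization, the baseline that COPSO and TVIPSO modify; it is not the paper's variant, and the inertia and acceleration parameters are conventional defaults, not the paper's values:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Standard PSO: each particle's velocity blends its momentum (w),
    a pull toward its personal best (c1), and a pull toward the
    swarm's global best (c2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()       # global best
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()
```

In GPR inversion, `f` would be the data-misfit between observed and forward-modeled radargrams; TVIPSO additionally constrains the position updates with a TV term, as described above.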


2020 ◽  
Author(s):  
Lizhen Deng ◽  
Zhetao Zhou ◽  
Guoxia Xu ◽  
Hu Zhu ◽  
Bing-Kun Bao

Abstract Recently, many super-resolution algorithms have been proposed to recover high-resolution images, improving visualization and aiding image analysis. Among them, total variation (TV) regularization methods have proven effective at retaining image edge information. However, these TV methods do not consider the temporal correlation between images. Our algorithm designs a new TV regularization (TV2++) that exploits the time-dimension information of the images, further improving the utilization of useful information in them. In addition, combining global low-rank regularization with TV regularization further enhances super-resolution recovery, and we extend the exponential-type penalty (ETP) function on the singular values of a matrix to strengthen low-rank matrix recovery. A novel image super-resolution algorithm based on the ETP norm and TV2++ regularization is proposed, and the alternating direction method of multipliers (ADMM) is applied to solve the optimization problems efficiently. Numerous experimental results show that the proposed algorithm is superior to competing algorithms.
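The low-rank step inside ADMM solvers of this kind is a shrinkage applied to singular values. The standard nuclear-norm version, singular value soft-thresholding, is sketched below for illustration; the paper replaces this plain shrinkage with its exponential-type penalty:

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the proximal operator of the
    nuclear norm. Each singular value is shrunk toward zero by tau,
    which drives the result toward low rank while keeping the
    dominant structure of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink, never below zero
    return U @ np.diag(s_shrunk) @ Vt
```

Inside an ADMM iteration, this operator handles the low-rank subproblem while a separate proximal step handles the TV term; a nonconvex penalty such as ETP changes only the shrinkage rule applied to `s`.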




Geophysics ◽  
2011 ◽  
Vol 76 (1) ◽  
pp. I13-I20 ◽  
Author(s):  
Williams A. Lima ◽  
Cristiano M. Martins ◽  
João B. Silva ◽  
Valeria C. Barbosa

We applied the mathematical basis of total variation (TV) regularization to analyze the physicogeologic meaning of the TV method and compared it with previous gravity inversion methods (weighted smoothness and entropic regularization) for estimating discontinuous basements. Presenting a mathematical review of these methods, we show that minimizing the TV stabilizing function favors discontinuous solutions because a smooth solution, to honor the data, must oscillate, and these oscillations increase the value of the TV stabilizing function. The three methods are applied to synthetic data produced by a simulated 2D graben bordered by step faults. TV regularization and weighted smoothness are also applied to the real anomaly of Steptoe Valley, Nevada, U.S.A. In all applications the three methods perform similarly. TV regularization, however, has the advantage over weighted smoothness of requiring no a priori information about the maximum depth of the basin, and compared with entropic regularization it is much simpler to use because it generally requires tuning just one regularization parameter.




2015 ◽  
Vol 16 (1) ◽  
pp. 83
Author(s):  
Ansam Ennaciri ◽  
Mohammed Erritali ◽  
Mustapha Mabrouki ◽  
Jamaa Bengourram

The objective of this paper is to study the main characteristics of wavelets that affect image compression using the discrete wavelet transform, leading to image data compression that preserves the essential quality of the original image. This implies a good compromise between the compression ratio and the PSNR (Peak Signal-to-Noise Ratio).
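The compress-by-thresholding idea behind wavelet compression can be sketched with a one-level 2-D Haar transform (the simplest wavelet) together with the PSNR quality measure; this is an illustrative toy, not the paper's transform or settings:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar DWT (image sides must be even).
    Returns approximation (ll) and detail (lh, hl, hh) subbands."""
    a = (img[:, ::2] + img[:, 1::2]) / 2.0   # horizontal average
    d = (img[:, ::2] - img[:, 1::2]) / 2.0   # horizontal detail
    ll = (a[::2] + a[1::2]) / 2.0
    lh = (a[::2] - a[1::2]) / 2.0
    hl = (d[::2] + d[1::2]) / 2.0
    hh = (d[::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0] * 2, ll.shape[1]), dtype=float)
    d = np.empty_like(a)
    a[::2], a[1::2] = ll + lh, ll - lh
    d[::2], d[1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0], a.shape[1] * 2), dtype=float)
    out[:, ::2], out[:, 1::2] = a + d, a - d
    return out

def psnr(ref, rec, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((ref - rec) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

Compression comes from quantizing or zeroing small detail coefficients (lh, hl, hh) before entropy coding; the trade-off the paper studies is how much detail can be discarded before PSNR drops noticeably.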


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahmood Al-khassaweneh ◽  
Omar AlShorman

In the big data era, image compression is of significant importance. In particular, compression of large images is required for everyday tasks, including electronic data communications and internet transactions. Two measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and a Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value representing the average of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while enhancing the compression factor. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are comparable in quality and performance with existing methods.
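The second-stage RLE idea can be sketched in its plain form; the paper uses a Modified RLE, so this basic version is for illustration only:

```python
def rle_encode(seq):
    """Basic run-length encoding: collapse each run of repeated
    values into a (value, run_length) pair. Lossless, and most
    effective on data with long constant runs, such as the
    block-averaged output of the first compression stage."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(pairs):
    """Exact inverse of rle_encode."""
    return [v for v, n in pairs for _ in range(n)]
```

Because decoding reproduces the input exactly, this stage raises the compression factor without adding any distortion, which is the property the abstract relies on.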


2013 ◽  
Vol 2013 ◽  
pp. 1-18 ◽  
Author(s):  
Paul Rodríguez

Total Variation (TV) regularization has evolved from a denoising method for images corrupted with Gaussian noise into a more general technique for inverse problems such as deblurring, blind deconvolution, and inpainting, encompassing the impulse, Poisson, speckle, and mixed noise models. This paper summarizes the most relevant TV numerical algorithms for solving the restoration problem for grayscale/color images corrupted with several noise models, namely Gaussian, salt-and-pepper, Poisson, and speckle (Gamma) noise, as well as mixed-noise scenarios such as the mixed Gaussian-impulse model. We also describe the maximum a posteriori (MAP) estimator for each model and summarize the general optimization procedures typically used to solve the TV problem.

