Feature Keypoint-Based Image Compression Technique Using a Well-Posed Nonlinear Fourth-Order PDE-Based Model

Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 930
Author(s):  
Tudor Barbu

A digital image compression framework based on nonlinear partial differential equations (PDEs) is proposed in this research article. First, a feature keypoint-based sparsification algorithm is proposed for the image coding stage. Interest keypoints corresponding to various scale-invariant image feature descriptors, such as SIFT, SURF, MSER, ORB, and BRIEF, are extracted, and the points in their neighborhoods are then used as sparse pixels and coded with a lossless encoding scheme. An effective nonlinear fourth-order PDE-based scattered data interpolation scheme is proposed for the decompression task. A rigorous mathematical investigation of the PDE model is also performed, and its well-posedness is demonstrated. The model is then solved numerically by a consistent finite-difference approximation algorithm, which is successfully applied in the image compression and decompression experiments discussed in this work.
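As an illustration of the sparsification stage, the sketch below detects keypoints with a single detector (ORB, via OpenCV) and retains only the pixels in a small neighborhood of each keypoint. The detector choice, neighborhood radius, and file name are illustrative assumptions, and the paper's lossless coding of the retained pixels is not reproduced here.

```python
# Hedged sketch of keypoint-based sparsification (assumptions: ORB detector,
# 3x3 neighborhoods, placeholder file name; the paper combines several
# detectors and adds a lossless coder for the retained pixels).
import cv2
import numpy as np

def sparsify(image, radius=1):
    """Keep only pixels in the neighborhoods of detected keypoints."""
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(image, None)
    mask = np.zeros(image.shape, dtype=bool)
    h, w = image.shape
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        mask[y0:y1, x0:x1] = True
    # Sparse representation: coordinates and values of the retained pixels.
    ys, xs = np.nonzero(mask)
    return xs, ys, image[ys, xs]

img = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
xs, ys, vals = sparsify(img)
print(f"kept {len(vals)} of {img.size} pixels")
```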

Author(s):  
Kandarpa Kumar Sarma

The explosive growth in data exchange has necessitated the development of new methods of image compression, including learning-based techniques. Learning-based systems aid the proper compression and retrieval of image segments. Learning systems such as Artificial Neural Networks (ANNs) have established their efficiency and reliability in achieving image compression. In this work, two ANN-based approaches to digital image compression are proposed: one using a Feed-Forward (FF) network and another based on a Self-Organizing Feature Map (SOFM). The image to be compressed is first decomposed into smaller blocks, which are passed to the FFANN and SOFM networks for codebook generation. The compressed images are reconstructed using a composite block formed by an FFANN and a Discrete Cosine Transform (DCT) based compression-decompression system. Mean Square Error (MSE), Compression Ratio (CR), and Peak Signal-to-Noise Ratio (PSNR) are used to evaluate the performance of the system.
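A minimal sketch of the SOFM codebook-generation idea is given below, assuming 4 × 4 blocks, a 64-codeword one-dimensional map, a simple decaying learning schedule, and a random test image; none of these settings are taken from the paper.

```python
# Hedged sketch: train a 1-D self-organizing feature map (SOFM) on 4x4
# image blocks; the trained weights serve as a VQ-style codebook.
import numpy as np

def to_blocks(image, bs=4):
    """Split a grayscale image into flattened bs x bs blocks."""
    h, w = image.shape
    image = image[:h - h % bs, :w - w % bs]
    return (image.reshape(h // bs, bs, -1, bs)
            .swapaxes(1, 2).reshape(-1, bs * bs).astype(float))

def train_sofm(blocks, n_codes=64, epochs=10, lr0=0.5, sigma0=8.0):
    rng = np.random.default_rng(0)
    codebook = rng.uniform(blocks.min(), blocks.max(),
                           (n_codes, blocks.shape[1]))
    idx = np.arange(n_codes)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                  # decaying rate
        sigma = max(1.0, sigma0 * (1.0 - epoch / epochs))  # shrinking radius
        for x in blocks[rng.permutation(len(blocks))]:
            winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
            h = np.exp(-(idx - winner) ** 2 / (2.0 * sigma ** 2))
            codebook += lr * h[:, None] * (x - codebook)   # pull neighbors
    return codebook

img = np.random.default_rng(1).integers(0, 256, (64, 64))  # toy image
blocks = to_blocks(img)
cb = train_sofm(blocks)
codes = np.argmin(((blocks[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
print(codes.shape)  # one codebook index per 4x4 block
```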


Author(s):  
SAEMA ENJELA ◽  
A.G. ANANTH

Fractal coding is a novel method of image compression, proposed by Barnsley and implemented by Jacquin, that offers many advantages. Fractal image coding achieves a high compression ratio but is a lossy compression scheme. The encoding procedure divides the image into range blocks and domain blocks and then matches each range block with a domain block. The image is encoded by partitioning the domain pool and applying affine transformations to achieve fractal compression; it is reconstructed using iterated functions and inverse transforms. However, the encoding time of the traditional fractal compression technique is too long for real-time image compression, so it cannot be widely used. Based on the theory of fractal image compression, this paper presents an algorithm improved from the aspect of image segmentation. In the present work, the fractal coding techniques are applied to the compression of satellite imagery. Peak Signal-to-Noise Ratio (PSNR) values are determined for two images, a Satellite Rural image and a Satellite Urban image. The MATLAB simulation results for the reconstructed images show that the achievable PSNR values are approximately 33 dB for the Satellite Rural image and 42 dB for the Satellite Urban image.
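The core encoding step, matching a range block against contracted domain blocks through an affine (scale and offset) fit, can be sketched as follows; the 4 × 4/8 × 8 block sizes, the exhaustive search, and the omission of the eight block isometries are simplifying assumptions.

```python
# Illustrative sketch of the core fractal-coding step: fit a contractive
# affine map (scale s, offset o) from a downsampled domain block to a
# range block, and search the domain pool for the best match.
import numpy as np

def shrink(domain):
    """Average 2x2 pixels to reduce an 8x8 domain block to range size 4x4."""
    return domain.reshape(4, 2, 4, 2).mean(axis=(1, 3))

def affine_fit(d, r):
    """Least-squares s, o minimizing ||s*d + o - r||^2."""
    dm, rm = d.mean(), r.mean()
    var = ((d - dm) ** 2).sum()
    s = ((d - dm) * (r - rm)).sum() / var if var > 0 else 0.0
    s = np.clip(s, -1.0, 1.0)  # keep the map contractive
    return s, rm - s * dm

def best_domain(range_block, image, step=8):
    """Exhaustively search 8x8 domain blocks for the best affine match."""
    best = (None, 0.0, 0.0, np.inf)
    h, w = image.shape
    for y in range(0, h - 8 + 1, step):
        for x in range(0, w - 8 + 1, step):
            d = shrink(image[y:y + 8, x:x + 8].astype(float))
            s, o = affine_fit(d, range_block)
            err = ((s * d + o - range_block) ** 2).sum()
            if err < best[3]:
                best = ((y, x), s, o, err)
    return best

gen = np.random.default_rng(0)
image = gen.uniform(0, 255, (64, 64))
r = image[0:4, 0:4].astype(float)           # one range block
pos, s, o, err = best_domain(r, image)
print(pos, round(s, 3), round(o, 1), round(err, 1))
```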


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahmood Al-khassaweneh ◽  
Omar AlShorman

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including electronic data communications and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and a Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value representing the average of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while enhancing the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
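The second stage can be illustrated with plain run-length encoding; the paper's Modified RLE presumably differs in detail, so the sketch below shows only the standard scheme.

```python
# Standard run-length encoding sketch (not the paper's Modified RLE).
import numpy as np

def rle_encode(arr):
    """Encode a 1-D array as (value, run_length) pairs."""
    arr = np.asarray(arr).ravel()
    # Indices where the value changes mark run boundaries.
    change = np.flatnonzero(np.diff(arr)) + 1
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [arr.size])))
    return list(zip(arr[starts].tolist(), lengths.tolist()))

def rle_decode(pairs):
    """Expand (value, run_length) pairs back into a 1-D array."""
    return np.concatenate([np.full(n, v) for v, n in pairs])

data = np.array([5, 5, 5, 0, 0, 7, 7, 7, 7])
pairs = rle_encode(data)          # [(5, 3), (0, 2), (7, 4)]
assert np.array_equal(rle_decode(pairs), data)
```

Runs of identical values, such as the single averaged values that replace high-energy blocks, compress well under this scheme, which is why pairing it with the Frei-Chen stage raises the overall compression factor without adding distortion.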


2011 ◽  
Vol 19 (1) ◽  
Author(s):  
Y. Hu ◽  
J. Chuang ◽  
C. Lo ◽  
C. Li

In this paper, a novel greyscale image coding technique based on vector quantization (VQ) is proposed. In VQ, the reconstructed image quality is restricted by the codebook used in the encoding/decoding procedures. To provide better image quality with a fixed-size codebook, a codebook expansion technique is introduced in the proposed scheme. In addition, a block prediction technique and a relative addressing technique are employed to cut down the storage cost of the compressed codes. The results show that the proposed technique adaptively provides better image quality at low bit rates than VQ.
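The basic VQ encoding step, assigning each image block the index of its nearest codeword, can be sketched as follows; the random codebook, block size, and codebook size are illustrative, and the paper's codebook expansion, block prediction, and relative addressing steps are not reproduced.

```python
# Minimal VQ encoder sketch: each 4x4 block is replaced by the index of
# its nearest codeword (smallest squared Euclidean distance).
import numpy as np

def vq_encode(image, codebook, bs=4):
    h, w = image.shape
    blocks = (image[:h - h % bs, :w - w % bs]
              .reshape(h // bs, bs, -1, bs).swapaxes(1, 2)
              .reshape(-1, bs * bs).astype(float))
    # Squared distance from every block to every codeword.
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
codebook = rng.uniform(0, 255, (256, 16))  # 256 codewords of 4x4 blocks
indices = vq_encode(img, codebook)         # one 8-bit index per block
print(indices.shape)                       # (256,) for a 64x64 image
```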


Author(s):  
Chanintorn Jittawiriyanukoon ◽  
Vilasinee Srisarkun

A fundamental factor in digital image compression is the conversion process. The intention of this process is to capture the shape content of an image and to convert the digital image to a grayscale configuration on which the encoding of the compression technique can operate. This article investigates compression algorithms for images with artistic effects. A key concern in image compression is how to effectively preserve the original quality of images. Image compression condenses images by reducing their redundant data so that they can be transmitted cost-effectively. The common techniques include the discrete cosine transform (DCT), fast Fourier transform (FFT), and shifted FFT (SFFT). Experimental results report the compression ratios achieved for the original RGB images and their grayscale counterparts, together with a comparison of the techniques. The algorithm that best improves shape comprehension for images with graphic effects is the SFFT technique.
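A minimal sketch of FFT-based compression on a grayscale image is shown below: transform, keep only the largest-magnitude coefficients, and invert. The retained fraction is an arbitrary choice, and the shifted-FFT (SFFT) variant is not reproduced; this shows the plain-FFT baseline only.

```python
# Hedged FFT compression sketch: zero all but the top `keep` fraction of
# Fourier coefficients, then reconstruct with the inverse transform.
import numpy as np

def fft_compress(gray, keep=0.05):
    coeffs = np.fft.fft2(gray)
    mags = np.abs(coeffs).ravel()
    threshold = np.sort(mags)[int((1 - keep) * mags.size)]
    compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0)
    return np.fft.ifft2(compressed).real

rng = np.random.default_rng(0)
gray = rng.uniform(0, 255, (128, 128))     # toy grayscale image
recon = fft_compress(gray, keep=0.05)      # 5% of coefficients retained
print(float(np.abs(gray - recon).mean()))
```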


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Farhan Hussain ◽  
Jechang Jeong

A compression technique for still digital images is proposed with deep neural networks (DNNs) employing rectified linear units (ReLUs). We exploit the capability of DNNs to find a reasonable estimate of the underlying compression/decompression relationships. We aim for an image compression DNN with better generalization, reduced training time, and support for real-time operation. The use of ReLUs, which map more plausibly to biological neurons, makes the training of our DNN significantly faster, shortens the encoding/decoding time, and improves its generalization ability. The ReLUs establish efficient gradient propagation, induce sparsity in the proposed network, and are computationally efficient, making the network suitable for real-time compression systems. Experiments performed on standard real-world images show that using ReLUs instead of logistic sigmoid units speeds up the training of the DNN by converging markedly faster. The evaluation of the objective and subjective quality of the reconstructed images also shows that our DNN achieves better generalization, as most of the test images were never seen by the network during training.
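As a toy illustration of the idea, the following sketch trains a single-hidden-layer ReLU autoencoder on random 8 × 8 patches with plain gradient descent; the sizes, data, and learning rate are illustrative, and the paper's DNN is deeper and trained on real images.

```python
# Toy ReLU autoencoder: compress 64-dim patches into a 16-dim code and
# reconstruct them, trained with full-batch gradient descent on MSE.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (1000, 64))     # 1000 flattened 8x8 patches
n_in, n_hid = 64, 16                      # 4:1 compression in the code

W1 = rng.standard_normal((n_in, n_hid)) * 0.1
b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, n_in)) * 0.1
b2 = np.zeros(n_in)

lr = 0.01
for step in range(2000):
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)            # ReLU: sparse, cheap gradients
    out = h @ W2 + b2                     # linear reconstruction layer
    err = out - X                         # dMSE/dout (up to a constant)
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h_pre > 0)       # gradient gated by the ReLU mask
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float(((out - X) ** 2).mean()))
```

The `(h_pre > 0)` mask is where the ReLU's training advantage shows up: the gradient passes through active units unattenuated instead of being squashed as with a logistic sigmoid.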


Author(s):  
Rehna. V. J ◽  
Jeyakumar. M. K

Computer technology these days is most focused on storage space and speed. Considerable advancements in this direction can be achieved through digital image compression techniques. In this paper, we present a well-studied singular value decomposition (SVD) based JPEG image compression technique. Singular value decomposition is a way of factorizing a matrix into a series of linear approximations that expose its underlying structure. SVD is extraordinarily useful and has many applications, such as data analysis, signal processing, pattern recognition, object detection, and weather prediction. An attempt is made to use this factorization to perform a second round of compression on JPEG images to optimize storage space. Compression is further enhanced by the removal of singularity after the initial compression performed using SVD. MATLAB R2010a with the Image Processing Toolbox is used as the development tool for implementing the algorithm.
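The SVD step itself can be sketched in a few lines: keeping the k largest singular values and the corresponding singular vectors gives the best rank-k approximation of the image matrix. The choice of k is illustrative, and the paper's JPEG pre-compression and singularity-removal stages are omitted.

```python
# Rank-k SVD compression sketch for a grayscale image.
import numpy as np

def svd_compress(gray, k=32):
    """Best rank-k approximation: keep the k largest singular values."""
    U, s, Vt = np.linalg.svd(gray, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
gray = rng.uniform(0, 255, (256, 256))    # toy grayscale image
approx = svd_compress(gray, k=32)
# Storage: k*(m + n + 1) values instead of m*n for an m x n image.
print(np.linalg.norm(gray - approx) / np.linalg.norm(gray))
```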


2019 ◽  
Vol 32 (2) ◽  
pp. 145
Author(s):  
Aqeel K. Kadhim ◽  
Abo Bakir S. Merchany ◽  
Ameen Babakir

In uncompressed form, digital images require a very large amount of storage capacity and, as a consequence, a large communication bandwidth for data transmission over a network. Image compression techniques not only minimize the image storage space but also preserve image quality. This paper presents an image compression technique that uses a distinct image coding scheme based on the wavelet transform, combining two effective compression algorithms for further compression. The EZW and SPIHT algorithms are significant techniques available for lossy image compression. EZW coding is a worthwhile, simple, and efficient algorithm. SPIHT is a most powerful image compression technique based on the concept of coding sets of wavelet coefficients as zerotrees. The proposed compression algorithm, which combines these dual image compression techniques (DICT), exploits the excellent features of each method, producing a promising technique for still image compression that minimizes the number of bits required to represent the input image, to the degree allowed without significant impact on the quality of the reconstructed image. The experimental results show that DICT improves image compression efficiency by 8 to 24% and yields high-performance metric values.
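A hedged sketch of the wavelet stage alone, decomposing, hard-thresholding the detail coefficients, and reconstructing, is shown below using the PyWavelets package; the EZW and SPIHT zerotree coders themselves are far more involved and are not reproduced.

```python
# Wavelet-domain sparsification sketch (PyWavelets): the wavelet, level,
# and threshold are illustrative choices, not the paper's settings.
import numpy as np
import pywt

def wavelet_threshold(gray, wavelet="haar", level=3, thresh=10.0):
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    new_coeffs = [coeffs[0]]                   # keep the approximation band
    for detail in coeffs[1:]:
        # Hard-threshold the horizontal, vertical, and diagonal sub-bands.
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="hard")
                                for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)

rng = np.random.default_rng(0)
gray = rng.uniform(0, 255, (128, 128))        # toy grayscale image
recon = wavelet_threshold(gray)
print(float(np.abs(gray - recon).mean()))
```

Zerotree coders such as EZW and SPIHT exploit exactly the sparsity this thresholding exposes: insignificant coefficients cluster across scales and can be coded jointly with very few bits.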

