Comparative Performance Analysis of Optimization Techniques on Vector Quantization for Image Compression

Author(s):  
Karri Chiranjeevi ◽  
Umaranjan Jena ◽  
Sonali Dash

Linde-Buzo-Gray (LBG) vector quantization (VQ) typically generates a local codebook after many runs on different sets of training images for image compression, whereas the key goal of VQ is to generate a global codebook. In this paper, we present a comparative performance analysis of different optimization techniques. The Firefly algorithm (FA) and Cuckoo Search (CS) generate near-global codebooks, but FA fails when no brighter fireflies are available, and the convergence time of CS is very high. A Hybrid Cuckoo Search (HCS) algorithm was developed and tested on four benchmark functions; it optimizes the LBG codebook with a shorter convergence time by using a Lévy flight based on McCulloch's algorithm and variants of the search parameters. In practice, we observed that the peak signal-to-noise ratio (PSNR) of the Bat algorithm (BA) is better than that of LBG, FA, CS, and HCS for codebook sizes between 8 and 256. The convergence time of BA is 2.4452, 2.734, and 1.5126 times faster than that of HCS, CS, and FA, respectively.
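For concreteness, the following is a minimal Python sketch of the LBG (generalized Lloyd) iteration that all of the above metaheuristics start from; the block size, random initialization, and stopping rule are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    def lbg_codebook(vectors, codebook_size, epsilon=1e-3, max_iters=50):
        """Generalized Lloyd / LBG iteration: alternate nearest-codeword
        assignment and centroid update until distortion stabilizes."""
        vectors = np.asarray(vectors, dtype=float)  # e.g. flattened 4x4 blocks
        rng = np.random.default_rng(0)
        # Initialize the codebook from randomly chosen training vectors.
        codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
        prev_distortion = np.inf
        for _ in range(max_iters):
            # Squared Euclidean distance of every vector to every codeword.
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            distortion = d[np.arange(len(vectors)), labels].mean()
            # Move each codeword to the centroid of its assigned cell.
            for k in range(codebook_size):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
            if abs(prev_distortion - distortion) <= epsilon * max(distortion, 1e-12):
                break  # distortion has stopped improving
            prev_distortion = distortion
        return codebook

Because the centroid update only refines the initial random codebook, the result is a local optimum; the metaheuristics compared above (FA, CS, HCS, BA) search over many such codebooks to approach a global one.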

Author(s):  
T. Satish Kumar ◽  
S. Jothilakshmi ◽  
Batholomew C. James ◽  
M. Prakash ◽  
N. Arulkumar ◽  
...  

In the present digital era, with the exploitation of medical technologies and the massive generation of medical data through different imaging modalities, the adequate storage, management, and transmission of biomedical images necessitate image compression techniques. Vector quantization (VQ) is an effective image compression approach, and the most widely employed VQ technique is Linde–Buzo–Gray (LBG), which generates locally optimal codebooks for image compression. Codebook construction is treated as an optimization problem and solved using metaheuristic optimization techniques. In this view, this paper designs an effective biomedical image compression technique for the cloud computing (CC) environment using a Harris Hawks Optimization (HHO)-based LBG technique. The HHO-LBG algorithm achieves a smooth transition between exploration and exploitation. To investigate the performance of the HHO-LBG technique, an extensive set of simulations was carried out on benchmark biomedical images. The proposed HHO-LBG technique accomplished promising results in terms of compression performance and reconstructed image quality.
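The abstract does not give the exact fitness function, but metaheuristic LBG variants such as HHO-LBG typically score a candidate codebook by the quality of the reconstruction it produces; a hedged PSNR-based sketch of such a fitness (names are illustrative) is:

    import numpy as np

    def codebook_fitness(codebook, vectors, peak=255.0):
        """Score a candidate codebook by the PSNR of its VQ reconstruction."""
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        recon = codebook[d.argmin(axis=1)]             # nearest-codeword decode
        mse = max(np.mean((vectors - recon) ** 2), 1e-12)
        return 10.0 * np.log10(peak ** 2 / mse)        # higher PSNR = fitter

An optimizer such as HHO then perturbs candidate codebooks between its exploration and exploitation phases, keeping the candidate with the highest fitness.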


Author(s):  
Diwakar Tripathi ◽  
B. Ramachandra Reddy ◽  
Y.C.A. Padmanabha Reddy ◽  
Alok Kumar Shukla ◽  
Ravi Kant Kumar ◽  
...  

Credit scoring plays a vital role for financial institutions in estimating the risk associated with an applicant for a credit product. It is estimated from the applicant's credentials and directly affects the viability of the issuing institution. However, a credit scoring dataset may contain a large number of irrelevant features, which can lead to poorer classification performance and higher model complexity. Removing redundant and irrelevant features may therefore overcome the problems caused by a large feature set. In this work, we emphasize the role of feature selection in enhancing the predictive performance of credit scoring models. For feature selection, the Binary Bat optimization technique is utilized with a novel fitness function. The proposed approach is then combined with a Radial Basis Function Neural Network (RBFN), a Support Vector Machine (SVM), and a Random Forest (RF) for classification. The approach is validated on four benchmark credit scoring datasets obtained from the UCI repository. A comprehensive analysis of the experimental results compares the classification performance obtained with features selected by the proposed approach against various other state-of-the-art approaches for credit scoring.
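The novel fitness function itself is not described in the abstract; a common wrapper-style formulation, sketched below under that assumption, trades off cross-validated accuracy against the size of the selected feature subset (scored here with a Random Forest):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def mask_fitness(mask, X, y, alpha=0.99):
        """Score a 0/1 feature mask: favour accuracy, penalize subset size."""
        if mask.sum() == 0:                   # empty subsets are invalid
            return 0.0
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        acc = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()
        return alpha * acc + (1 - alpha) * (1 - mask.sum() / X.shape[1])

A binary bat update would then flip bits of the mask according to each bat's velocity, re-evaluating this fitness at every step.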


In recent days, the importance of image compression techniques has increased exponentially due to the generation of massive amounts of data that need to be stored or transmitted. Numerous approaches have been presented for effective image compression based on the principle of representing images in compact form by avoiding unnecessary pixels. Vector quantization (VQ) is an effective method for image compression, and the construction of the quantization table is an important task. The compression performance and the quality of the reconstructed data depend on the quantization table, which is a matrix of 64 integers. Quantization table selection is a complex combinatorial problem that can be solved by evolutionary algorithms (EAs), which have become popular for solving real-world problems in a reasonable amount of time. This chapter introduces the Firefly (FF) algorithm with Teaching-Learning-Based Optimization (TLBO), termed the FF-TLBO algorithm, for the selection of the quantization table, and the Firefly algorithm with Tumbling, termed the FF-Tumbling algorithm, for the selection of the search space. As the FF algorithm struggles when brighter fireflies are insignificant, the TLBO algorithm is integrated into it to resolve this problem, while Tumbling efficiently trains the algorithm to explore all directions of the solution space. The algorithm determines the best fitness value for every block as the local best, and the best fitness value for the entire image is taken as the global best. Once these values are found by the FF algorithm, compression is carried out by efficient image coding algorithms such as Run-Length Encoding and Huffman coding. The proposed FF-TLBO and FF-Tumbling algorithms are evaluated by comparing their results with the existing FF algorithm on the same set of benchmark images in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Signal-to-Noise Ratio (SNR). The obtained results confirm the superior performance of the FF-TLBO and FF-Tumbling algorithms over the FF algorithm and make them highly useful for real-time applications.
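Since a 64-integer quantization table is applied to 8x8 blocks of transform coefficients, a candidate table's fitness can be estimated from the round-trip error of each block. The sketch below shows that round trip for one block using SciPy's DCT routines; the scoring itself is an illustrative assumption, not the chapter's exact fitness.

    import numpy as np
    from scipy.fft import dctn, idctn

    def block_error(block, qtable):
        """Round-trip one 8x8 block through the table; return its MSE."""
        coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
        quantized = np.round(coeffs / qtable)          # the lossy step
        recon = idctn(quantized * qtable, norm="ortho") + 128.0
        return float(((block - recon) ** 2).mean())

Summing this per-block error over the image gives the MSE from which PSNR and SNR are derived, so minimizing it is equivalent to maximizing reconstructed quality.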


Author(s):  
Sanjith Sathya Joseph ◽  
R. Ganesan

Image compression is the process of reducing the size of a file without degrading the quality of the image to a level unacceptable to the human visual system. The reduction in file size allows us to store more data in less memory and speeds up transmission over low-bandwidth channels; in the case of satellite images, it also reduces the time required for the image to reach the ground station. Compression therefore plays an important role in speeding up the transmission of remote sensing images. This paper presents a coding scheme for satellite images using vector quantization (VQ), a well-known technique for signal compression and a generalization of scalar quantization. The given satellite image is compressed using the VCDemo software by creating codebooks for vector quantization, and the quality of the compressed and decompressed image is evaluated using Mean Square Error (MSE), Signal-to-Noise Ratio (SNR), and Peak Signal-to-Noise Ratio (PSNR) values.
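The three quality measures used for the comparison follow their standard definitions; a short Python sketch (array names chosen for illustration) is:

    import numpy as np

    def quality_metrics(original, reconstructed, peak=255.0):
        """MSE, PSNR (dB), and SNR (dB) between two equally sized images."""
        original = np.asarray(original, dtype=float)
        reconstructed = np.asarray(reconstructed, dtype=float)
        mse = max(np.mean((original - reconstructed) ** 2), 1e-12)
        psnr = 10.0 * np.log10(peak ** 2 / mse)        # relative to peak value
        snr = 10.0 * np.log10(np.mean(original ** 2) / mse)  # relative to signal power
        return mse, psnr, snr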

