A Hybrid Image Compression Technique Using Neural Network and Vector Quantization With DCT

Author(s):  
Mohamed El Zorkany
Author(s):  
Diyar Waysi Naaman

Image compression research has grown dramatically in response to increasing demands for image transmission in computer and mobile environments. Compression reduces the number of bits needed to represent an image digitally while preserving its quality, which is essential for efficient storage and transmission. Fractal encoding is an advanced image compression technique based on the self-similarity of image regions and on the generation of repetitive blocks through mathematical transformations. Because compressing large data volumes requires substantial computation, the main disadvantage of Fractal Image Compression (FIC) is its very high encoding time, whereas decoding is extremely fast. An artificial intelligence technique, a "back propagation" neural network, is employed to reduce the search space and the encoding time. Initially, the image is divided into fixed-size range blocks and domain blocks. For each range block, the most closely matching domain block is selected; the resulting pairs of range index and best-matched domain index form the network's training input, which narrows the set of candidate domain blocks. The trained network is then used to compress other images with far less encoding time. During the decoding phase, an arbitrary initial image converges, after a few iterations of the stored transformation parameters, to an approximation of the original. Simulation results demonstrate the quality of this FIC. This paper explores a novel neural-network FIC capable of increasing encoding speed and image quality simultaneously.
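The core of the encoding step described above is finding, for each range block, the domain block whose affine transform (contrast scaling plus brightness offset) fits it best. A minimal sketch of that matching step in Python follows; the function name and brute-force search are illustrative assumptions, not the paper's implementation (which replaces the exhaustive search with the trained network):

```python
import numpy as np

def best_domain_match(range_block, domain_blocks):
    """Find the domain block whose affine transform best fits the range block.

    For each candidate domain D, solve least squares for contrast s and
    brightness o minimizing ||s*D + o - R||^2, and return the best match.
    (Hypothetical helper; real FIC also searches over the 8 isometries.)
    """
    R = range_block.ravel().astype(float)
    best = (None, np.inf, 0.0, 0.0)
    for idx, D in enumerate(domain_blocks):
        d = D.ravel().astype(float)
        # Least-squares fit of s, o in  s*d + o ~ R
        A = np.stack([d, np.ones_like(d)], axis=1)
        (s, o), *_ = np.linalg.lstsq(A, R, rcond=None)
        err = np.sum((s * d + o - R) ** 2)
        if err < best[1]:
            best = (idx, err, s, o)
    return best  # (domain index, error, contrast s, brightness o)
```

The pairs of range index and winning domain index produced by such a search are exactly the data the abstract describes feeding into the back-propagation network.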


Author(s):  
T. Satish Kumar,
S. Jothilakshmi,
Batholomew C. James,
M. Prakash,
N. Arulkumar,
...

In the present digital era, with the exploitation of medical technologies and the massive generation of medical data through different imaging modalities, adequate storage, management, and transmission of biomedical images necessitate image compression techniques. Vector quantization (VQ) is an effective image compression approach, and the most widely employed VQ technique is Linde–Buzo–Gray (LBG), which generates locally optimal codebooks for image compression. Codebook construction is treated as an optimization problem solved using metaheuristic optimization techniques. In this view, this paper designs an effective biomedical image compression technique for the cloud computing (CC) environment using a Harris Hawks Optimization (HHO)-based LBG technique. The HHO-LBG algorithm achieves a smooth transition between exploration and exploitation. To demonstrate the improved performance of the HHO-LBG technique, an extensive set of simulations was carried out on benchmark biomedical images. The proposed HHO-LBG technique achieves promising results in terms of compression performance and reconstructed image quality.
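The baseline that HHO replaces is the classic LBG (generalized Lloyd) iteration, which alternates nearest-codeword assignment with centroid updates until the average distortion stops improving. A minimal sketch, assuming squared-Euclidean distortion and random initialization from the training vectors (the metaheuristic HHO search itself is not shown):

```python
import numpy as np

def lbg_codebook(vectors, codebook_size, epsilon=1e-4, seed=0):
    """Minimal LBG / generalized Lloyd iteration (illustrative sketch).

    vectors: (N, d) array of training vectors (e.g. flattened image blocks).
    Returns the trained codebook and the per-vector codeword labels.
    """
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size,
                                  replace=False)].astype(float)
    prev = np.inf
    while True:
        # Assign each vector to its nearest codeword (squared Euclidean).
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        distortion = d[np.arange(len(vectors)), labels].mean()
        if prev - distortion < epsilon:
            return codebook, labels
        prev = distortion
        # Move each codeword to the centroid of its assigned cell.
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(0)
```

Because this procedure only reaches a local optimum, the paper wraps the codebook search in HHO to balance global exploration with local exploitation.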


Author(s):  
Noritaka Shigei,
Hiromi Miyajima,
Michiharu Maeda,
Lixin Ma,
...

Multiple-VQ methods generate multiple independent codebooks to compress an image using a neural network algorithm. During image restoration, these methods recover low-quality images from the multiple codebooks and then combine them into a single high-quality image. However, a naive implementation of these methods increases the compressed data size excessively. This paper proposes two techniques to address this problem: "index inference" and "ranking-based index coding." Index inference and ranking-based index coding are shown to be effective for smaller and larger codebook sizes, respectively.
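The combination step described above can be sketched briefly: each codebook independently quantizes the image blocks, and the resulting low-quality reconstructions are merged, here by a pixel-wise mean. This is an illustrative assumption of the combining rule; the paper's index-compression techniques (index inference, ranking-based coding) operate on the transmitted indices and are omitted:

```python
import numpy as np

def multi_vq_restore(blocks, codebooks):
    """Multiple-VQ restoration sketch (hypothetical helper).

    blocks:    (N, d) array of image blocks to reconstruct.
    codebooks: list of (K, d) independent codebooks.
    Each codebook yields its own low-quality reconstruction; averaging
    the reconstructions produces the combined higher-quality result.
    """
    recons = []
    for cb in codebooks:
        d = ((blocks[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)           # per-block codeword indices
        recons.append(cb[idx])      # low-quality reconstruction
    return np.mean(recons, axis=0)  # combined reconstruction
```

Note how quantization errors of independent codebooks can partially cancel in the average, which is why the combined image outperforms any single reconstruction.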

