Enhanced Image Compression and Processing Scheme

Author(s):  
I. Manga ◽  
E. J. Garba ◽  
A. S. Ahmadu

Image compression refers to the process of encoding an image using fewer bits. The major aim of lossless image compression is to reduce the redundancy and irrelevance of image data for better storage and transmission. Lossy compression schemes achieve high compression ratios, but the image suffers a loss in quality. However, there are many cases where the loss of image quality or information due to compression needs to be avoided, such as medical, artistic and scientific images. Efficient lossless compression therefore becomes paramount, although lossy compressed images are usually satisfactory in diverse cases. This paper, titled Enhanced Lossless Image Compression Scheme, is aimed at providing an enhanced lossless image compression scheme based on Bose-Chaudhuri-Hocquenghem and Lempel-Ziv-Welch (BCH-LZW) coding, using a Gaussian filter for image enhancement and noise reduction. In this paper, an efficient and effective lossless image compression technique based on BCH-LZW coding to reduce redundancies in the image is presented, and image enhancement using a Gaussian filter algorithm is demonstrated. A secondary method of data collection was used to collect the data, and standard research images were used to validate the new scheme. To achieve this, an object-oriented approach using Java NetBeans was used to develop the compression scheme. From the findings, it was revealed that the average compression ratio of the enhanced lossless image compression scheme was 1.6489 and the average bits per pixel was 5.416667. Gaussian filter image enhancement was used for noise reduction, and the image was enhanced to eight times the original.
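
The abstract reports a Gaussian preprocessing step and two summary metrics, compression ratio and bits per pixel. Below is a minimal sketch of that measurement, assuming an 8-bit grayscale test image; the smoothing uses scipy.ndimage, and a generic byte-level compressor (zlib) stands in for the BCH-LZW coder, which is not reproduced here.

```python
import zlib
import numpy as np
from scipy.ndimage import gaussian_filter

def evaluate(original: np.ndarray, compressed: bytes) -> tuple[float, float]:
    """Return (compression ratio, bits per pixel) for an 8-bit grayscale image."""
    original_bits = original.size * 8
    compressed_bits = len(compressed) * 8
    return original_bits / compressed_bits, compressed_bits / original.size

# Synthetic ramp image stands in for a standard research test image.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
denoised = gaussian_filter(img, sigma=1.0)          # noise-reduction step

# zlib stands in for the BCH-LZW coder described in the paper.
payload = zlib.compress(denoised.tobytes())

cr, bpp = evaluate(denoised, payload)
print(f"compression ratio = {cr:.4f}, bits per pixel = {bpp:.4f}")
```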

In the domain of image signal processing, image compression is a significant technique, mainly intended to reduce the redundancy of image data so that image pixels can be transmitted at high quality and resolution. The standard image compression approaches, lossless and lossy, generate highly compressed images with efficient storage and transmission requirements respectively. Many image compression techniques are available, for example JPEG, DWT and DCT based compression algorithms, which provide effective results in terms of high compression ratio with clear, high-quality image transformation. However, they have high computational complexity in terms of processing, encoding, energy consumption and hardware design. Bringing out these challenges, the present paper considers the most prominent research papers and discusses FPGA architecture design and future scope in the state of the art of image compression techniques. The primary aim is to investigate the research challenges in VLSI design for image compression. The core of the study is three-fold: standard architecture designs, related work, and open research challenges in the domain of image compression.
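
For context on the transform-coding step that JPEG-style, DCT-based codecs rely on, and whose per-block arithmetic drives the hardware cost discussed above, here is a minimal sketch of 8x8 block DCT coding with a single uniform quantization step. The step size and test block are illustrative assumptions, not values from any of the surveyed designs.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block: np.ndarray, q_step: float = 16.0) -> np.ndarray:
    """Forward 2-D DCT of one 8x8 block followed by uniform quantization."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")  # level shift + transform
    return np.round(coeffs / q_step)                          # lossy quantization

def decompress_block(q_coeffs: np.ndarray, q_step: float = 16.0) -> np.ndarray:
    """Dequantize and inverse-transform one 8x8 block."""
    return idctn(q_coeffs * q_step, norm="ortho") + 128.0

block = np.random.randint(0, 256, (8, 8))        # stand-in image block
recon = decompress_block(compress_block(block))
print("max reconstruction error:", np.abs(block - recon).max())
```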


Author(s):  
Nassir H. Salman ◽  
Enas Kh. Hassan

Medical image compression is considered one of the most important research fields nowadays in biomedical applications. The majority of medical images must be compressed without loss because every pixel's information is of great value. With the widespread use of medical imaging applications in the health-care context and the increased significance of telemedicine technologies, it has become crucial to minimize both the storage and bandwidth requirements needed for archiving and transmission of medical imaging data, by employing lossless image compression algorithms. Furthermore, preserving high resolution and image quality in the processed data has become of great benefit. The proposed system introduces a lossless image compression technique based on Run Length Encoding (RLE) that encodes the original magnetic resonance imaging (MRI) image into actual values and their numbers of occurrence. The actual image data values are separated from their runs and stored in a vector array. Lempel-Ziv-Welch (LZW) coding is then applied to the values array only to provide further compression. Finally, Variable Length Coding (VLC) is applied to encode the values and runs arrays adaptively, with the precise number of bits, into a binary file. These bit streams are decoded using inverse LZW on the values array and inverse RLE to reconstruct the input image. The obtained compression gain is enhanced by 25% after applying LZW to the values array.
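
A minimal sketch of the RLE stage described above, under the assumption that the MRI image is flattened to a 1-D sequence of 8-bit intensities: the encoder splits the data into parallel values and runs arrays so that LZW can later be applied to the values array alone. The LZW and VLC stages are omitted, and the function names are illustrative, not the authors' code.

```python
import numpy as np

def rle_split(pixels: np.ndarray) -> tuple[list[int], list[int]]:
    """Run-length encode a 1-D pixel sequence into separate values and runs arrays."""
    values, runs = [], []
    for p in pixels:
        if values and values[-1] == p:
            runs[-1] += 1            # extend the current run
        else:
            values.append(int(p))    # start a new run
            runs.append(1)
    return values, runs

def rle_merge(values: list[int], runs: list[int]) -> np.ndarray:
    """Inverse RLE: expand (value, run) pairs back into the original pixel sequence."""
    return np.array(np.repeat(values, runs), dtype=np.uint8)

mri = np.random.randint(0, 8, 4096).astype(np.uint8)   # stand-in for a flattened MRI slice
values, runs = rle_split(mri)
assert np.array_equal(rle_merge(values, runs), mri)     # lossless round trip
print(len(mri), "pixels ->", len(values), "values +", len(runs), "run counts")
```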


Author(s):  
T Kavitha ◽  
K. Jayasankar

Compression techniques are adopted to solve various big data problems such as storage and transmission. The growth of the cloud computing and smartphone industries has led to the generation of huge volumes of digital data. Digital data can be in various forms such as audio, video, images and documents. These digital data are generally compressed and stored in cloud storage environments, and efficient storage and retrieval of digital data through good compression techniques reduces cost. Compression techniques comprise lossy and lossless methods. Here we consider lossless image compression, where minimizing the number of bits used for encoding improves coding efficiency and yields high compression. Fixed-length coding cannot guarantee a minimal bit length, so variable-length codes with a prefix-free property are preferred. However, the existing compression models incur high computing overhead; to address this issue, this work presents an ideal and efficient modified Huffman technique that improves the compression factor by up to 33.44% for bi-level images and 32.578% for half-tone images. The average computation time for both encoding and decoding shows an improvement of 20.73% for bi-level images and 28.71% for half-tone images. The proposed work achieves an overall 2% increase in coding efficiency and reduces memory usage by 0.435% for bi-level images and 0.19% for half-tone images. The overall results show that the proposed model can be adopted to support ubiquitous access to digital data.
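
For reference, the sketch below shows ordinary Huffman coding, the baseline family of variable-length, prefix-free codes the abstract refers to; it is not the modified Huffman technique proposed in the paper, and the test data is an illustrative stand-in.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict[int, str]:
    """Build a prefix-free code table mapping each byte value to its bit string."""
    heap = [[count, i, sym, None, None]
            for i, (sym, count) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:                       # merge the two least-frequent nodes
        left = heapq.heappop(heap)
        right = heapq.heappop(heap)
        heapq.heappush(heap, [left[0] + right[0], next_id, None, left, right])
        next_id += 1
    codes = {}
    def walk(node, prefix):
        _, _, sym, left, right = node
        if sym is not None:
            codes[sym] = prefix or "0"          # single-symbol edge case
        else:
            walk(left, prefix + "0")
            walk(right, prefix + "1")
    walk(heap[0], "")
    return codes

data = bytes([0] * 70 + [255] * 20 + [128] * 10)   # stand-in bi-level-like data
table = huffman_codes(data)
encoded_bits = sum(len(table[b]) for b in data)
print(f"{len(data) * 8} bits -> {encoded_bits} bits, table = {table}")
```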


Author(s):  
Hitesh H Vandra

Image compression is used to reduce bandwidth or storage requirements in imaging applications. There are mainly two types of image compression: lossy and lossless. Lossy image compression removes some of the source information content along with the redundancy, while in lossless image compression the original source data is reconstructed from the compressed data by restoring the removed redundancy, so the reconstructed data is an exact replica of the original source data. Many algorithms exist for lossless image compression, such as Huffman coding, Rice coding, run-length encoding, and LZW. LZW is referred to as a substitution or dictionary-based encoding algorithm. The algorithm builds a data dictionary of data occurring in an uncompressed data stream. Patterns of data (substrings) are identified in the data stream and matched to entries in the dictionary. If a substring is not present in the dictionary, a code phrase is created based on the data content of the substring and stored in the dictionary, and the phrase is then written to the compressed output stream. In this paper we examine the effect of the LZW algorithm on the PNG, JPG, GIF and BMP image formats.
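
The dictionary-building behaviour described above corresponds to textbook LZW. The following sketch is a generic byte-level LZW encoder/decoder pair, not the specific implementation evaluated in the paper; any of the listed image files could be fed to it after reading the raw bytes from disk.

```python
def lzw_encode(data: bytes) -> list[int]:
    """Emit the dictionary code of the longest matched phrase, then extend the dictionary."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                               # keep growing the current phrase
        else:
            codes.append(dictionary[w])          # output code for the matched phrase
            dictionary[wc] = len(dictionary)     # store the new phrase
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

def lzw_decode(codes: list[int]) -> bytes:
    """Rebuild the same dictionary on the fly and expand codes back into bytes."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = dictionary[code] if code in dictionary else w + w[:1]  # KwKwK case
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

sample = b"TOBEORNOTTOBEORTOBEORNOT"            # classic LZW test string
assert lzw_decode(lzw_encode(sample)) == sample
print(len(sample), "bytes ->", len(lzw_encode(sample)), "codes")
```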


1998 ◽  
Vol 120 (3) ◽  
pp. 463-470 ◽  
Author(s):  
Douglas P. Hart

With the development of Holographic PIV (HPIV) and PIV Cinematography (PIVC), the need for a computationally efficient algorithm capable of processing images at video rates has emerged. This paper presents one such algorithm, sparse array image correlation. This algorithm is based on the sparse format of image data, a format well suited to the storage of highly segmented images. It utilizes an image compression scheme that retains pixel values in high intensity-gradient areas, eliminating low-information background regions. The remaining pixels are stored in sparse format along with their relative locations encoded into 32-bit words. The result is a highly reduced image data set that retains the original correlation information of the image. Compression ratios of 30:1 using this method are typical. As a result, far fewer memory calls and data entry comparisons are required to accurately determine tracer particle movement. In addition, by utilizing an error correlation function, pixel comparisons are made through single integer calculations, eliminating time-consuming multiplication and floating-point arithmetic. Thus, this algorithm typically results in much higher correlation speeds and lower memory requirements than spectral and image-shifting correlation algorithms. This paper describes the methodology of sparse array correlation as well as the speed, accuracy, and limitations of this unique algorithm. While the study presented here focuses on the process of correlating images stored in sparse format, the details of an image compression algorithm based on intensity-gradient thresholding are presented and its effect on image correlation is discussed to elucidate the limitations and applicability of compression-based PIV processing.
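
A minimal sketch of the sparse-storage idea described above: pixels in high intensity-gradient regions are kept, the low-information background is discarded, and each surviving pixel is packed with its location into a 32-bit word. The bit layout used here (8-bit value, 12-bit row, 12-bit column) and the gradient estimate are illustrative assumptions, not the layout or thresholding scheme from the paper.

```python
import numpy as np

def sparse_compress(img: np.ndarray, grad_thresh: float) -> np.ndarray:
    """Keep pixels where the local intensity gradient exceeds a threshold,
    packing (value, row, col) into uint32 words."""
    gy, gx = np.gradient(img.astype(float))
    mask = np.hypot(gx, gy) > grad_thresh
    rows, cols = np.nonzero(mask)
    values = img[rows, cols].astype(np.uint32)
    return (values << 24) | (rows.astype(np.uint32) << 12) | cols.astype(np.uint32)

img = np.zeros((256, 256), dtype=np.uint8)
img[100:110, 100:110] = 255                     # stand-in tracer particle image
words = sparse_compress(img, grad_thresh=20.0)
print("compression ratio about", round(img.size / max(len(words), 1), 1), ": 1")
```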

