Lossless Image Compression Schemes: A Review

Author(s):  
I. Manga ◽  
E. J. Garba ◽  
A. S. Ahmadu

Data compression refers to the process of representing data using fewer bits. Data compression can be lossless or lossy, and many schemes have been developed to perform either type. Lossless data compression allows the original data to be reconstructed exactly from the compressed data, while lossy compression allows only an approximation of the original data to be reconstructed. The data to be compressed may be image, textual, audio, or video content. Considerable research is being carried out in the area of image compression. This paper surveys the literature in the field of data compression and the techniques used to compress images losslessly. In conclusion, the paper reviews schemes that compress an image using a single scheme or a combination of two or more schemes.
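As a minimal illustration of the lossless property described above (exact reconstruction of the original data), the following Python sketch implements simple run-length encoding; it is an illustrative example only, not one of the schemes surveyed in the paper.

```python
# Run-length encoding: a toy lossless scheme. The assert at the end demonstrates
# the defining property of lossless compression: exact reconstruction.
def rle_encode(data):
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def rle_decode(runs):
    return "".join(ch * count for ch, count in runs)

original = "aaaabbbcccd"
assert rle_decode(rle_encode(original)) == original  # lossless: exact reconstruction
```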

2010 ◽  
Vol 56 (4) ◽  
pp. 351-355
Author(s):  
Marcin Rodziewicz

Joint Source-Channel Coding in Dictionary Methods of Lossless Data Compression

Limitations on the memory and resources of communication systems require powerful data compression methods. Decompression of a compressed data stream is very sensitive to errors that arise during transmission over noisy channels, so error correction coding is also required. One solution to this problem is the application of joint source and channel coding. This paper describes methods of joint source-channel coding based on the popular data compression algorithms LZ'77 and LZSS. These methods are capable of introducing some error resiliency into the compressed data stream without degrading the compression ratio. We analyze joint source and channel coding algorithms based on these compression methods and present novel extensions of them. We also present simulation results showing the usefulness and achievable quality of the analyzed algorithms.
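For background, the following Python sketch shows the basic LZ'77 dictionary step that the joint source-channel schemes above build on: each output is an (offset, length, next-symbol) triple referring back into a sliding window. The error-resilient extensions themselves are not reproduced here.

```python
# Toy LZ'77 encoder: emits (offset, length, next_symbol) triples. Matches are
# searched only inside a bounded sliding window behind the current position.
def lz77_encode(data, window=255):
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while (i + length < len(data)
                   and j + length < i
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        nxt = data[i + best_len] if i + best_len < len(data) else ""
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

print(lz77_encode("abracadabra"))
# [(0, 0, 'a'), (0, 0, 'b'), (0, 0, 'r'), (3, 1, 'c'), (5, 1, 'd'), (7, 4, '')]
```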


Author(s):  
Phillip K.C. Tse

In the previous chapter, we saw that the performance of a storage system depends on the amount of data being retrieved. Multimedia objects, however, are very large. Thus, the performance of the storage system can be enhanced if the object sizes are reduced, so multimedia objects are always compressed when they are stored. In addition, the performance of most subsystems depends on the amount of data being processed. Since multimedia objects are large, their access times are long. Thus, multimedia objects are always kept in their compressed form while they are being stored, retrieved, and processed. We describe the commonly used compression techniques and compression standards in this chapter. We first describe the general compression model in the next section. Then we explain the techniques for compressing textual data. This is followed by the image compression techniques; in particular, we explain JPEG2000 compression in detail. Lastly, we explain the MPEG2 video compression standard. These compression techniques are helpful for understanding the multimedia data being stored and retrieved.


2014 ◽  
Vol 1078 ◽  
pp. 370-374
Author(s):  
Wen Jing Zhao ◽  
Ming Jun Zhao ◽  
Jian Pan

Image compression is a data compression technology applied to digital images; its purpose is to reduce redundant information in the image data and to provide a more efficient format for storing and transmitting the data. Because image data are huge while available transmission capacity is relatively low, image compression has become inevitable. The key technologies of image compression are how to transform the image data, how to quantize them, and how to entropy-code the quantized data. The two-dimensional Mallat wavelet compression algorithm is a new method of image compression and is the core technology of wavelet image compression.
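A minimal sketch of a Mallat-style two-dimensional wavelet pipeline (transform, then discard small detail coefficients) is shown below; it uses the PyWavelets library as an assumption and omits the quantization and entropy-coding stages mentioned above.

```python
import numpy as np
import pywt  # PyWavelets; used here as an assumption, not necessarily by the authors

# Mallat-style multilevel 2-D wavelet decomposition, thresholding of small
# coefficients, and reconstruction. Quantization and entropy coding of the
# surviving coefficients are omitted from this sketch.
image = np.random.rand(256, 256)                        # placeholder image data
coeffs = pywt.wavedec2(image, wavelet="haar", level=3)  # multilevel decomposition
arr, slices = pywt.coeffs_to_array(coeffs)              # flatten the coefficients
threshold = 0.05 * np.abs(arr).max()
arr[np.abs(arr) < threshold] = 0                        # drop small coefficients
coeffs_thr = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
reconstructed = pywt.waverec2(coeffs_thr, wavelet="haar")
```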


2006 ◽  
Vol 3 (4) ◽  
pp. 722-728
Author(s):  
Baghdad Science Journal

Many images require large storage space. With the continued evolution of computer storage technology, there is a pressing need to reduce the storage space required for images by compressing them effectively, for example using the wavelet transform method.


The amount of information communicated over the internet has increased rapidly over the past few years. Image compression is the preeminent way to reduce the size of an image, and JPEG is one of the best-known lossy image compression techniques. In this paper a novel JPEG compression algorithm using fuzzy-morphology techniques is proposed. The efficacy of the proposed algorithm compared to JPEG is presented with metrics such as PSNR, MSE, and number of bits transmitted. The proposed approaches reduce the number of encoded bits and, as a result, the amount of memory needed. They are best suited to images corrupted with Gaussian, speckle, Poisson, and salt-and-pepper noise. The paper also examines the effect of compression on classification performance: the performance of Artificial Neural Network, Support Vector Machine, and KNN classifiers is evaluated on the original image data, standard JPEG-compressed data, and image data compressed with the proposed method.
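The evaluation metrics mentioned above (MSE and PSNR) can be computed as in the following sketch for 8-bit images; the fuzzy-morphology compression itself is not reproduced here.

```python
import numpy as np

# Mean-squared error and peak signal-to-noise ratio for 8-bit images (peak 255).
def mse(original, compressed):
    return np.mean((original.astype(float) - compressed.astype(float)) ** 2)

def psnr(original, compressed, peak=255.0):
    m = mse(original, compressed)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)
```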


Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 159 ◽  
Author(s):  
Shinichi Yamagiwa ◽  
Eisaku Hayakawa ◽  
Koichi Marumo

Driven by strong demand for very high-speed processor I/O, the physical performance of hardware I/O has grown drastically in this decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency. Because the current I/O performance is not improving as quickly, it is time to consider another way to increase it. To overcome this challenge, we focus on lossless data compression technology to decrease the amount of data itself in the data communication path. Recent Big Data applications treat data streams that flow continuously and, because of the high speed, never allow processing to stall. Therefore, an elegant hardware-based data compression technology is needed. This paper proposes a novel lossless data compression method, called ASE coding. It encodes streaming data by applying an entropy coding approach: ASE coding instantly assigns the fewest bits to the corresponding compressed data according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, the paper presents performance evaluations showing that ASE coding adaptively shrinks streaming data and works on a small amount of hardware resources without stalling or buffering any part of the data stream.
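The following is a simplified, hypothetical sketch of the idea described above, a look-up table whose occupancy determines the code width; it is not the published ASE codec.

```python
import math

# Hypothetical table-based adaptive encoder: a hit is coded as an index whose
# bit-width grows with the number of occupied entries, so codes are shortest
# while the table is sparsely filled; a miss is an escape plus an 8-bit literal.
def table_encode(symbols, table_size=16):
    table, codes = [], []
    for s in symbols:
        if s in table:
            width = max(1, math.ceil(math.log2(len(table))))
            codes.append(("HIT", table.index(s), width))  # index sent in `width` bits
        else:
            codes.append(("MISS", s, 8))                  # escape + raw symbol
            if len(table) < table_size:
                table.append(s)                           # occupy a new entry
    return codes

print(table_encode(b"ABABABABCDCD"))
```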


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 240
Author(s):  
Shinichi Yamagiwa ◽  
Koichi Marumo ◽  
Suzukaze Kuwabara

It is getting popular to implement an environment where communications are performed remotely among IoT edge devices, such as sensory devices and the cloud servers due to applying, for example, artificial intelligence algorithms to the system. In such situations that handle big data, lossless data compression is one of the solutions to reduce the big data. In particular, the stream-based data compression technology is focused on such systems to compress infinitely continuous data stream with very small delay. However, during the continuous data compression process, it is not able to insert an exception code among the compressed data without any additional mechanisms, such as data framing and the packeting technique, as used in networking technologies. The exception code indicates configurations for the compressor/decompressor and/or its peripheral logics. Then, it is used in real time for the configuration of parameters against those components. To implement the exception code, data compression algorithm must include a mechanism to distinguish original data before compression and the exception code clearly. However, the conventional algorithms do not include such mechanism. This paper proposes novel methods to implement the exception code in data compression that uses look-up table, called the exception symbol. Additionally, we describe implementation details of the method by applying it to algorithms of stream-based data compression. Because some of the proposed mechanisms need to reserve entries in the table, we also discuss the effect against data compression performance according to experimental evaluations.
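A minimal sketch of the exception-symbol idea, reserving one look-up-table index so that in-band configuration codes can never collide with data, is given below; the names and structure are assumptions for illustration, not the paper's implementation.

```python
# Reserve one table index as the exception symbol: data symbols never map to it,
# so the decoder can tell an in-band configuration code from compressed data
# without extra framing. Names here are illustrative, not from the paper.
EXCEPTION_INDEX = 0            # reserved look-up-table entry
FIRST_DATA_INDEX = 1           # ordinary symbols are assigned indices >= 1

def emit(index, payload=None):
    if index == EXCEPTION_INDEX:
        return ("EXCEPTION", payload)   # e.g. a compressor/decompressor setting
    return ("DATA", index)

# A stream mixing compressed-data indices with one in-band configuration change.
stream = [emit(3), emit(EXCEPTION_INDEX, {"reset_table": True}), emit(5)]
print(stream)
```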


1984 ◽  
Vol 5 (3) ◽  
pp. 225-239
Author(s):  
Bonifazi G. ◽  
Burrascano P.

A neural network approach to pattern classification is explored in this paper as part of the recent resurgence of interest in this area. Our research focuses on how a multilayer feedforward structure performs in the particular problem of particle characterization. The proposed procedure, after suitable data preprocessing, consists of two distinct phases: in the first, a feedforward neural network is used to obtain an image data compression; in the second, a neural classifier is trained on the compressed data. All the tests were conducted on a sample consisting of two different typologies of ceramic particles, each characterized by a different microstructure. The images of the different particles, acquired and directly digitized by scanning electron microscopy, were processed to obtain the best conditions for extracting the boundary profile of each particle. The boundary is thus assumed to be representative of the morphological characteristics of the ceramic products. Using the neural approach, a classification accuracy as high as 100% was achieved on a training set of 80 sub-images, and the networks correctly classified up to 96.9% of 64 testing patterns not contained in the training set.
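A compact NumPy sketch of the two-phase idea, a feedforward autoencoder that compresses fixed-length boundary-profile vectors followed by a separate classifier on the codes, is given below; the architecture and all sizes are assumptions for illustration, not the authors' network.

```python
import numpy as np

# Phase one: a one-hidden-layer autoencoder learns a compressed code for
# boundary-profile vectors; phase two would train a separate classifier on
# these codes. Dimensions and hyperparameters are illustrative only.
rng = np.random.default_rng(0)

def train_autoencoder(X, code_dim=8, lr=0.01, epochs=500):
    n, d = X.shape
    W_enc = rng.normal(0.0, 0.1, (d, code_dim))
    W_dec = rng.normal(0.0, 0.1, (code_dim, d))
    for _ in range(epochs):
        H = np.tanh(X @ W_enc)               # compressed representation
        X_hat = H @ W_dec                    # reconstruction
        err = X_hat - X
        grad_dec = (H.T @ err) / n           # gradient of mean-squared error
        grad_enc = (X.T @ ((err @ W_dec.T) * (1 - H ** 2))) / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc

X = rng.normal(size=(80, 64))                # 80 toy boundary profiles of length 64
codes = np.tanh(X @ train_autoencoder(X))    # input for the phase-two classifier
```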


2020 ◽  
Vol 18 (06) ◽  
pp. 2050031
Author(s):  
Albert No ◽  
Mikel Hernaez ◽  
Idoia Ochoa

The amount of sequencing data is growing at a fast pace due to a rapid revolution in sequencing technologies. Quality scores, which indicate the reliability of each called nucleotide, take up a significant portion of the sequencing data. In addition, quality scores are more challenging to compress than nucleotides, and they are often noisy. Hence, a natural solution to further decrease the size of the sequencing data is to apply lossy compression to the quality scores. Lossy compression may result in a loss of precision; however, it has been shown that, when operating at some specific rates, lossy compression can achieve variant-calling performance similar to that achieved with the losslessly compressed data (i.e., the original data). We propose Coding with Random Orthogonal Matrices for quality scores (CROMqs), the first lossy compressor designed for quality scores with the “infinitesimal successive refinability” property. With this property, the encoder needs to compress the data only once, at a high rate, while the decoder can decompress it iteratively, reconstructing the quality scores with less distortion at each step. This characteristic is especially useful in sequencing data compression, since the encoder generally does not know the most appropriate compression rate, e.g., one that does not degrade variant-calling accuracy. CROMqs avoids the need to compress the data at multiple rates, hence saving time. In addition to this property, we show that CROMqs obtains rate-distortion performance comparable to the state-of-the-art lossy compressors. Moreover, we show that it achieves variant-calling performance comparable to that of the losslessly compressed data while achieving more than 50% reduction in size.
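The successive-refinement behavior described above can be illustrated with random orthogonal matrices as in the following sketch: the encoder projects once at full rate, and the decoder's reconstruction error shrinks as it consumes more coefficients. This is an illustrative toy, not the CROMqs codec.

```python
import numpy as np

# Successive refinement with a random orthogonal matrix: encode once, then the
# decoder's reconstruction improves monotonically as it uses more coefficients.
rng = np.random.default_rng(1)

def random_orthogonal(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # QR of a Gaussian matrix
    return q

n = 64
x = rng.integers(0, 42, size=n).astype(float)     # toy quality-score vector
A = random_orthogonal(n)
y = A @ x                                         # single, full-rate encoding

for k in (8, 16, 32, 64):                         # decoder consumes k coefficients
    x_hat = A[:k].T @ y[:k]                       # orthonormal rows: transpose = pseudo-inverse
    print(k, "coefficients, MSE =", round(float(np.mean((x - x_hat) ** 2)), 2))
```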


2015 ◽  
Vol 9 (1) ◽  
pp. 097499 ◽  
Author(s):  
Nektarios Kranitis ◽  
Ioannis Sideris ◽  
Antonios Tsigkanos ◽  
Georgios Theodorou ◽  
Antonios Paschalis ◽  
...  
