compression scheme
Recently Published Documents


TOTAL DOCUMENTS: 783 (last five years: 136)
H-INDEX: 29 (last five years: 2)

2022 · Vol 21 (1) · pp. 1-27
Author(s): Albin Eldstål-Ahrens, Angelos Arelakis, Ioannis Sourdis

In this article, we introduce L2C, a hybrid lossy/lossless compression scheme applicable to both the memory subsystem and the I/O traffic of a processor chip. L2C combines general-purpose lossless compression with state-of-the-art lossy compression to achieve compression ratios of up to 16:1 and to improve the utilization of the chip's bandwidth resources. Compressing memory traffic lowers memory access time, improving system performance and energy efficiency. Compressing I/O traffic offers several benefits for resource-constrained systems, including more efficient storage and networking. We evaluate L2C as a memory compressor in simulation with a set of approximation-tolerant applications. L2C improves baseline execution time by an average of 50% and total system energy consumption by 16%. Compared to the current state-of-the-art lossy and lossless memory compression approaches, L2C improves execution time by 9% and 26%, respectively, and reduces system energy costs by 3% and 5%, respectively. I/O compression efficacy is evaluated using a set of real-life datasets. L2C achieves compression ratios of up to 10.4:1 for a single dataset and about 4:1 on average, while introducing no more than 0.4% error.
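The hybrid principle described in the abstract — a lossy stage with a bounded error budget followed by general-purpose lossless coding — can be sketched as a toy pipeline. This is an illustration of the idea, not the actual L2C design; the uniform quantization step and the use of zlib are assumptions made for the example.

```python
import struct
import zlib

def hybrid_compress(values, step=0.05):
    # Lossy stage: uniform quantization, error bounded by step / 2.
    q = [round(v / step) for v in values]
    raw = b"".join(struct.pack("<h", x) for x in q)
    # Lossless stage: general-purpose compression of the quantized stream.
    return zlib.compress(raw)

def hybrid_decompress(blob, n, step=0.05):
    raw = zlib.decompress(blob)
    return [struct.unpack_from("<h", raw, 2 * i)[0] * step for i in range(n)]
```

Quantization maps near-equal values onto identical symbols, which is what lets the lossless stage reach high ratios on approximation-tolerant data while the total error stays bounded.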


Entropy · 2022 · Vol 24 (1) · pp. 81
Author(s): Jie Han, Tao Guo, Qiaoqiao Zhou, Wei Han, Bo Bai, ...

With the rapid expansion of graphs and networks and the growing magnitude of data from all areas of science, effective treatment and compression schemes for context-dependent data are highly desirable. A particularly interesting direction is to compress the data while keeping only the "structural information" and ignoring the concrete labelings. In this direction, Choi and Szpankowski introduced structures (unlabeled graphs), which allowed them to compute the structural entropy of the Erdős–Rényi random graph model. They also provided an asymptotically optimal compression algorithm that achieves this entropy limit and runs in linear time in expectation. In this paper, we consider stochastic block models with an arbitrary number of parts. We define a partitioned structural entropy for stochastic block models, which generalizes the structural entropy for unlabeled graphs and also encodes the partition information. We then compute the partitioned structural entropy of the stochastic block models and provide a compression scheme that asymptotically achieves this entropy limit.
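For context, the Choi–Szpankowski structural entropy referenced above can be stated informally, up to lower-order terms (the precise error term is in their paper): for the Erdős–Rényi model $\mathcal{G}(n,p)$ with constant $p$, almost every graph is asymmetric, so discarding the $n!$ labelings saves essentially $\log n!$ bits relative to the graph entropy:

$$H_S = \binom{n}{2}\, h(p) \;-\; \log n! \;+\; o(n), \qquad h(p) = -p\log_2 p - (1-p)\log_2 (1-p).$$

The partitioned structural entropy of this paper additionally accounts for the bits needed to encode the block membership of the vertices.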


2022 · pp. 127901
Author(s): Yuhang Chen, Chongfu Zhang, Mengwei Cui, Yufeng Luo, Tingwei Wu, ...

2022 · Vol 32 (2) · pp. 841-857
Author(s): Arwa Mashat, Surbhi Bhatia, Ankit Kumar, Pankaj Dadheech, Aliaa Alabdali

Author(s): I. Manga, E. J. Garba, A. S. Ahmadu

Image compression refers to the process of encoding an image using fewer bits. The major aim of lossless image compression is to reduce the redundancy and irrelevance of image data for better storage and transmission. Lossy compression schemes achieve high compression ratios at the cost of image quality. However, in many cases the loss of image quality or information due to compression must be avoided, such as for medical, artistic, and scientific images; efficient lossless compression therefore becomes paramount, although lossy-compressed images are usually satisfactory in diverse other cases. This paper, titled Enhanced Lossless Image Compression Scheme, provides an enhanced lossless image compression scheme based on Bose–Chaudhuri–Hocquenghem and Lempel–Ziv–Welch (BCH-LZW) coding, using a Gaussian filter for image enhancement and noise reduction. An efficient and effective lossless compression technique based on BCH-LZW coding is presented to reduce redundancies in the image, and image enhancement with a Gaussian filter algorithm is demonstrated. A secondary method of data collection was used, and standard research images were used to validate the new scheme. The compression scheme was developed with an object-oriented approach in Java using NetBeans. The findings reveal that the average compression ratio of the enhanced lossless image compression scheme was 1.6489 and the average bits per pixel was 5.416667. The Gaussian filter was used for noise reduction and enhanced the image to eight times the original.
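The LZW half of the BCH-LZW pipeline is the part that removes redundancy, and a minimal encoder is short enough to sketch. This is a generic textbook LZW over bytes, not the paper's implementation (which is written in Java and additionally applies BCH coding and the Gaussian filter):

```python
def lzw_compress(data: bytes) -> list:
    # Seed the dictionary with every single-byte string.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc  # keep extending the current match
        else:
            codes.append(dictionary[w])  # emit code for the longest match
            dictionary[wc] = next_code   # learn the new phrase
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes
```

On repetitive input the code stream is shorter than the input; for example, `lzw_compress(b"ABABABA")` emits 4 codes for 7 input bytes, because the learned phrases "AB" and "ABA" each replace multiple bytes.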


2021
Author(s): Fabio Cunial, Olgert Denas, Djamal Belazzougui

Fast, lightweight methods for comparing the sequences of ever-larger assembled genomes from ever-growing databases are increasingly needed in the era of accurate long reads and pan-genome initiatives. Matching statistics is a popular method for computing whole-genome phylogenies and for detecting structural rearrangements between two genomes, since it is amenable to fast implementations that require a minimal setup of data structures. However, current implementations use a single core, take too much memory to represent the result, and do not provide efficient ways to analyze the output in order to explore local similarities between the sequences. We develop practical tools for computing matching statistics between large-scale strings, and for analyzing its values, faster and using less memory than the state of the art. Specifically, we design a parallel algorithm for shared-memory machines that computes matching statistics 30 times faster with 48 cores in the cases that are most difficult to parallelize. We design a lossy compression scheme that shrinks the matching statistics array to a bitvector that takes from 0.8 to 0.2 bits per character, depending on the dataset and on the value of a threshold, and that achieves 0.04 bits per character in some variants. We also provide efficient implementations of range-maximum and range-sum queries that take a few tens of milliseconds while operating on our compact representations, and that allow computing key local statistics about the similarity between two strings. Our toolkit makes construction, storage, and analysis of matching statistics arrays practical for multiple pairs of the largest genomes available today, possibly enabling new applications in comparative genomics.
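One natural way to realize a threshold-based bitvector of this kind, sketched here as an illustration rather than the authors' exact encoding: store one bit per character recording whether the matching statistic reaches a threshold t, and answer range-sum queries over the bitvector with prefix sums. The toy values and the threshold below are assumptions.

```python
from itertools import accumulate

# Hypothetical matching-statistics array (one value per character).
ms = [3, 5, 1, 0, 7, 2, 6, 4]

t = 4  # similarity threshold (an assumed parameter)
bits = [1 if v >= t else 0 for v in ms]  # 1 bit per character

# Prefix sums answer range-sum queries in O(1) per query.
prefix = [0] + list(accumulate(bits))

def range_sum(i: int, j: int) -> int:
    """How many positions in [i, j) have a match of length >= t?"""
    return prefix[j] - prefix[i]
```

The full integer array is replaced by one bit per character, which is where the 0.8 to 0.2 bits-per-character figures become possible once the bitvector itself is compressed.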


2021
Author(s): Yucheng Zhang, Hong Jiang, Mengtian Shi, Chunzhi Wang, Nan Jiang, ...

Author(s): Bushra A. Sultan, Loay E. George

In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB image is converted to the less-correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two parts: a most significant value and a least significant value. Because even a small modification to the most significant value (MSV) has a large effect, an adaptive lossless compression system is proposed for it, using bit-plane (BP) slicing, delta pulse code modulation (delta PCM), and adaptive quadtree (QT) partitioning followed by an adaptive shift encoder. The least significant value (LSV), on the other hand, is handled by a lossy compression system based on an adaptive, error-bounded coding scheme that uses DCT compression. The performance of the developed compression system was analyzed and compared with that of the universal JPEG standard; the results indicate that the proposed system performs comparably to or better than JPEG.
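The magnitude decomposition at the heart of this scheme can be sketched as a bit split per pixel. The 4/4 split below is an assumption made for illustration; the paper's system chooses its coding parameters adaptively.

```python
def decompose(pixel: int, msv_bits: int = 4):
    """Split an 8-bit pixel into its most significant value (MSV),
    to be coded losslessly, and least significant value (LSV),
    to be coded lossily."""
    shift = 8 - msv_bits
    return pixel >> shift, pixel & ((1 << shift) - 1)

def recompose(msv: int, lsv: int, msv_bits: int = 4) -> int:
    shift = 8 - msv_bits
    return (msv << shift) | lsv
```

Any error introduced by the lossy LSV coder is confined to the low `8 - msv_bits` bits (at most 15 grey levels for a 4/4 split), which is why preserving the MSV exactly keeps the overall error bounded.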


2021 · Vol 4
Author(s): Reza Abbasi-Asl, Bin Yu

Deep convolutional neural networks (CNNs) have been successful in many machine vision tasks; however, the millions of weights, in the form of thousands of convolutional filters, make CNNs difficult for humans to interpret or understand in science. In this article, we introduce a greedy structural compression scheme that obtains smaller and more interpretable CNNs while achieving close-to-original accuracy. The compression is based on pruning the filters with the least contribution to classification accuracy, i.e., the lowest Classification Accuracy Reduction (CAR) importance index. We demonstrate the interpretability of CAR-compressed CNNs by showing that our algorithm prunes filters with visually redundant functionalities, such as color filters. These compressed networks are easier to interpret because they retain the filter diversity of uncompressed networks with an order of magnitude fewer filters. Finally, a variant of CAR is introduced to quantify the importance of each image category to each CNN filter. Specifically, the most and the least important class labels are shown to be meaningful interpretations of each filter.
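The greedy CAR criterion can be sketched as follows. Here `evaluate` is a hypothetical callback that returns validation accuracy for a given boolean filter mask; the actual method operates on trained CNNs and their validation sets.

```python
def car_importance(n_filters, evaluate):
    """CAR index of each filter: the drop in classification accuracy
    when that single filter is zeroed out (ablated)."""
    base = evaluate([True] * n_filters)
    scores = []
    for i in range(n_filters):
        mask = [True] * n_filters
        mask[i] = False  # ablate filter i only
        scores.append(base - evaluate(mask))
    return scores

def greedy_prune(n_filters, evaluate, keep):
    """Keep the `keep` filters with the highest CAR importance;
    the rest are the 'least contribution' filters to prune."""
    scores = car_importance(n_filters, evaluate)
    order = sorted(range(n_filters), key=lambda i: scores[i], reverse=True)
    return sorted(order[:keep])
```

Because each score is a direct accuracy drop, filters with visually redundant functionalities (e.g. near-duplicate color filters) score low and are pruned first.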

