Compression Algorithms
Recently Published Documents


TOTAL DOCUMENTS: 558 (five years: 148)
H-INDEX: 22 (five years: 3)

Author(s):  
Gunasheela Keragodu Shivanna ◽  
Haranahalli Shreenivasamurthy Prasantha

Compressive sensing is receiving considerable attention from the image processing research community as a promising technique for recovering images from very few samples. It is particularly useful in applications where acquiring many samples is not feasible, and it is prominently useful in satellite imaging, since it drastically reduces the number of input samples and thereby the storage and communication bandwidth required to store the data and transmit it to the ground station. In this paper, an interior point-based method is used to recover the entire satellite image from compressive sensing samples. The compression results are compared with those of conventional satellite image compression algorithms. The results demonstrate both higher reconstruction accuracy and a higher compression rate for the compressive sensing-based technique.
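A minimal sketch of the kind of recovery described above: basis pursuit (l1 minimization) solved with an interior-point LP solver on a toy one-dimensional signal. The sensing matrix, signal size, and sparsity level below are illustrative assumptions, not the paper's satellite-image setup.

```python
# Compressive-sensing recovery by l1 minimization (basis pursuit), solved with
# SciPy's interior-point LP backend. Illustrative toy values throughout.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 256, 96, 10                           # signal length, measurements, sparsity (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x_true                                  # compressive measurements

# Basis pursuit:  min ||x||_1  s.t.  A x = y, written as an LP over [x, t]:
#   min sum(t)  s.t.  x - t <= 0,  -x - t <= 0,  A x = y
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n,
              method="highs-ipm")               # interior-point method
x_hat = res.x[:n]
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```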


2021 ◽  
Vol 18 (4) ◽  
pp. 1-27
Author(s):  
Matthew Tomei ◽  
Shomit Das ◽  
Mohammad Seyedzadeh ◽  
Philip Bedoukian ◽  
Bradford Beckmann ◽  
...  

Cache-block compression is a highly effective technique both for reducing accesses to lower levels of the memory hierarchy (cache compression) and for minimizing data transfers (link compression). While many effective cache-block compression algorithms have been proposed, their design is largely ad hoc and manual, relying on human recognition of patterns. In this article, we take an entirely different approach. We introduce a class of “byte-select” compression algorithms, together with an automated methodology for generating compression algorithms in this class. We argue that, based on upper bounds within the class, studying byte-select algorithms has the potential to yield better performance than existing cache-block compression algorithms: the upper bound we establish on the compression ratio is 2x that of any existing algorithm. We then offer a generalized representation of a subset of byte-select compression algorithms and search the resulting space guided by a set of training data traces. Using this automated process, we find efficient and effective algorithms for various hardware applications, and the resulting algorithms exploit novel patterns that can inform future algorithm designs. The generated byte-select algorithms are evaluated against a separate set of traces; these evaluations show that Byte-Select has a 23% higher compression ratio on average. While no previous algorithm performs best on all of our data sets, which include CPU and GPU applications, our generated algorithms do. Using an automated hardware generator for these algorithms, we show that their decompression and compression latencies are one and two cycles, respectively, much lower than for any existing algorithm with a competitive compression ratio.
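A toy illustration of the byte-select idea, not one of the paper's generated algorithms: a compressed block stores a small dictionary of the distinct byte values occurring in a cache block, plus a per-position index selecting a dictionary entry. The block size and index width below are assumptions.

```python
# Hand-written byte-select-style sketch: dictionary of distinct bytes + per-byte
# selector indices. Compression ratio depends on how few distinct bytes appear.
import math

BLOCK = 64  # cache-block size in bytes (assumed)

def byte_select_compress(block: bytes):
    assert len(block) == BLOCK
    dictionary = sorted(set(block))            # distinct byte values in the block
    bits_per_index = max(1, math.ceil(math.log2(len(dictionary))))
    indices = [dictionary.index(b) for b in block]
    compressed_bits = 8 * len(dictionary) + bits_per_index * BLOCK
    return dictionary, indices, compressed_bits

def byte_select_decompress(dictionary, indices):
    return bytes(dictionary[i] for i in indices)

block = bytes([0x00, 0xFF, 0x00, 0x00] * 16)   # a highly patterned block
d, idx, bits = byte_select_compress(block)
assert byte_select_decompress(d, idx) == block
print(f"compression ratio: {8 * BLOCK / bits:.2f}x")
```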


Algorithms ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 359
Author(s):  
Dmitry Kosolobov ◽  
Daniel Valenzuela

The Lempel-Ziv parsing (LZ77) is a widely popular construction lying at the heart of many compression algorithms. These algorithms usually treat the data as a sequence of bytes, i.e., blocks of fixed length 8. Another common option is to view the data as a sequence of bits. We investigate the following natural question: what is the relationship between the LZ77 parsings of the same data interpreted as a sequence of fixed-length blocks and as a sequence of bits (or other “elementary” letters)? In this paper, we prove that, for any integer b > 1, the number z of phrases in the LZ77 parsing of a string of length n and the number z_b of phrases in the LZ77 parsing of the same string in which blocks of length b are interpreted as separate letters (e.g., b = 8 in the case of bytes) are related as z_b = O(b z log(n/z)). The bound holds for both “overlapping” and “non-overlapping” versions of LZ77. Further, we establish a tight bound z_b = O(b z) for the special case when each phrase in the LZ77 parsing of the string has a “phrase-aligned” earlier occurrence (an occurrence equal to the concatenation of consecutive phrases). The latter is an important particular case of parsing produced, for instance, by grammar-based compression methods.
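The relationship can be observed experimentally with a naive greedy parser. The sketch below counts LZ77 phrases for the same toy data interpreted once as bytes and once as bits; it uses a quadratic scan purely for clarity, not a production suffix-structure-based parser, and the input string is an arbitrary example.

```python
# Count LZ77 phrases of the same data viewed as bytes vs. as bits.
def lz77_phrase_count(seq):
    """Greedy LZ77: each phrase is the longest prefix of the remaining suffix
    that occurs earlier (overlaps allowed), plus one fresh letter."""
    n, i, z = len(seq), 0, 0
    while i < n:
        best = 0
        for j in range(i):                     # candidate earlier start positions
            l = 0
            while i + l < n and seq[j + l] == seq[i + l]:
                l += 1
            best = max(best, l)
        i += best + 1                          # phrase = longest match + 1 letter
        z += 1
    return z

data = b"abracadabra" * 8
bits = "".join(f"{byte:08b}" for byte in data)
print("phrases over bytes:", lz77_phrase_count(data))
print("phrases over bits :", lz77_phrase_count(bits))
```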


Energies ◽  
2021 ◽  
Vol 14 (24) ◽  
pp. 8275
Author(s):  
Gang Liu ◽  
Lei Jia ◽  
Taishan Hu ◽  
Fangming Deng ◽  
Zheng Chen ◽  
...  

To address the data accumulation caused by massive sensor data in transmission line condition monitoring systems, this paper analyzes the type and amount of data in the transmission line sensor network, compares existing compression algorithms for wireless sensor network data, and proposes an efficient lossless compression algorithm for sensor data in linear heterogeneous transmission line networks. The algorithm combines wavelet compression with neighborhood index sequence coding; it is fast and requires little computation, making it suitable for battery-powered wireless sensor network nodes. By combining wavelet correlation analysis with neighborhood index sequence coding, the proposed algorithm achieves a high compression rate, is robust to packet loss, and helps reduce network load and the packet loss rate. Simulation results show that the proposed method achieves a high compression rate on the transmission line parameter dataset, outperforms existing data compression algorithms, and is suitable for the compression and transmission of transmission line condition monitoring data.
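For readers unfamiliar with wavelet-based decorrelation of sensor readings, the sketch below shows a generic lossless pipeline: an integer (lifting) Haar transform followed by a stand-in back-end coder (zlib). It illustrates the general principle only; the paper's neighborhood index sequence coder is not reproduced here, and the sample readings are made up.

```python
# Generic lossless wavelet + back-end coding sketch for sensor readings.
import struct, zlib

def integer_haar(samples):
    """One level of the lossless integer (lifting) Haar transform."""
    approx, detail = [], []
    for a, b in zip(samples[0::2], samples[1::2]):
        d = b - a                      # detail coefficient
        s = a + (d >> 1)               # integer approximation
        approx.append(s)
        detail.append(d)
    return approx, detail

def inverse_integer_haar(approx, detail):
    out = []
    for s, d in zip(approx, detail):
        a = s - (d >> 1)
        out += [a, a + d]
    return out

readings = [2301, 2302, 2301, 2303, 2310, 2311, 2310, 2312]   # toy sensor values
approx, detail = integer_haar(readings)
assert inverse_integer_haar(approx, detail) == readings       # perfectly lossless
# Small, repetitive detail coefficients compress well with any index/run/entropy
# stage; zlib stands in here for the paper's neighborhood index sequence coder.
payload = struct.pack(f"{len(approx) + len(detail)}h", *(approx + detail))
print("detail coefficients:", detail)
print("compressed bytes:", len(zlib.compress(payload)), "of", len(payload))
```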


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Minhyeok Cho ◽  
Albert No

Abstract. Background: Advances in sequencing technology have drastically reduced sequencing costs, and as a result the amount of sequencing data is growing explosively. Since FASTQ files (the standard sequencing data format) are huge, there is a need for efficient compression of FASTQ files, especially of the quality scores. Several quality score compression algorithms have recently been proposed, mainly focused on lossy compression to further boost the compression rate. However, for clinical applications and archiving purposes, lossy compression cannot replace lossless compression. One of the main challenges for lossless compression is running time: it can take thousands of seconds to compress a 1 GB file. Features such as random access are also desirable. There is therefore a need for a fast lossless compressor with a reasonable compression rate and random access functionality. Results: This paper proposes a Fast and Concurrent Lossless Quality scores Compressor (FCLQC) that supports random access and achieves a low running time through concurrent programming. Experimental results reveal that FCLQC is significantly faster than the baseline compressors at both compression and decompression, at the expense of compression ratio. Compared to LCQS (the baseline quality score compression algorithm), FCLQC achieves at least a 31x compression speed improvement in all settings, with a degradation in compression ratio of at most 13.58% (8.26% on average). Compared to general-purpose compressors such as 7-zip, FCLQC compresses 3x faster while achieving compression ratios that are better by at least 2.08% (4.69% on average). Random access decompression speed also outperforms the others. The concurrency of FCLQC is implemented in Rust, and the performance gain increases near-linearly with the number of threads. Conclusion: The superior compression and decompression speed makes FCLQC a practical lossless quality score compressor candidate for speed-sensitive applications of DNA sequencing data. FCLQC is available at https://github.com/Minhyeok01/FCLQC and is free for non-commercial use.
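The general recipe behind a concurrent, random-access compressor can be sketched as follows: split the quality-score stream into fixed-size chunks, compress the chunks in parallel, and keep an offset index so any chunk can be decompressed on its own. This is an illustrative Python sketch, not FCLQC's Rust implementation; the chunk size and the stand-in codec (zlib) are assumptions.

```python
# Concurrent, chunked, random-access compression sketch.
import zlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 1 << 16   # 64 KiB of quality scores per chunk (assumed)

def compress_chunks(data: bytes):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ProcessPoolExecutor() as pool:          # chunks compressed in parallel
        blobs = list(pool.map(zlib.compress, chunks))
    offsets, pos = [], 0
    for blob in blobs:                           # index: start of each chunk
        offsets.append(pos)
        pos += len(blob)
    return b"".join(blobs), offsets

def read_chunk(archive: bytes, offsets, k):
    """Random access: decompress only the k-th chunk."""
    end = offsets[k + 1] if k + 1 < len(offsets) else len(archive)
    return zlib.decompress(archive[offsets[k]:end])

if __name__ == "__main__":
    scores = bytes(33 + (i % 40) for i in range(1_000_000))   # fake Phred+33 scores
    archive, index = compress_chunks(scores)
    assert read_chunk(archive, index, 3) == scores[3 * CHUNK:4 * CHUNK]
    print(f"{len(scores)} -> {len(archive)} bytes in {len(index)} chunks")
```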


2021 ◽  
Vol 27 (4) ◽  
pp. 04021042
Author(s):  
Joshua Qiang Li ◽  
Kelvin C. P. Wang ◽  
Guangwei Yang

Author(s):  
Rosario Gilmary ◽  
Murugesan G

Deoxyribonucleic acid (DNA) is the smallest fundamental unit that bears the genetic instructions of a living organism and underlies the growth and functioning of all known living organisms. Current DNA sequencing equipment generates enormous volumes of genomic data: nucleotide databases such as GenBank grow two to three times larger annually, and the increase in genomic data outstrips the increase in storage capacity. This massive amount of genomic data requires effective storage, fast transmission, and good retrieval performance. To reduce the volume of stored data and the cost of storage, compression algorithms are used. Typical general-purpose compression approaches perform poorly on these sequences, so novel compression algorithms have been introduced to achieve better compression ratios. Performance is compared in terms of compression ratio (the size of the compressed file relative to the original) and compression/decompression time (the time taken to compress or decompress the sequence). In the proposed work, the input DNA sequence is compressed by reconstructing the sequence into varied formats: the sequence is first subjected to bit reduction, and the binary output is converted to hexadecimal format followed by encoding. Thus, the compression ratio of the biological sequence is improved.
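A minimal sketch of the bit-reduction step described above, under the assumption of a plain 2-bits-per-base mapping: four bases are packed into one byte, the packed stream is rendered as hexadecimal, and zlib stands in for the final encoding stage. The mapping and padding scheme are illustrative, not the paper's exact format.

```python
# 2-bit DNA packing followed by hexadecimal rendering and a stand-in encoder.
import zlib

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq: str) -> bytes:
    bits = 0
    out = bytearray()
    for i, base in enumerate(seq):
        bits = (bits << 2) | CODE[base]
        if i % 4 == 3:                 # four bases fill one byte
            out.append(bits)
            bits = 0
    if len(seq) % 4:                   # pad the final partial byte with zeros
        out.append(bits << (2 * (4 - len(seq) % 4)))
    return bytes(out)

seq = "ACGTACGTGGGTTTAACC"
packed = pack_dna(seq)
print("hex form:", packed.hex())                       # ~4x smaller than ASCII
print("after encoding stage:", len(zlib.compress(packed)), "bytes")
```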


2021 ◽  
pp. 475-488
Author(s):  
Calvin Rodrigues ◽  
E. M. Jishnu ◽  
Chandu R. Nair ◽  
M. Soumya Krishnan

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Harsimranjit Singh Gill ◽  
Tarandip Singh ◽  
Baldeep Kaur ◽  
Gurjot Singh Gaba ◽  
Mehedi Masud ◽  
...  

Media streaming falls into the category of Big Data. Regardless of the video duration, an enormous amount of information is encoded in accordance with standardized video coding algorithms. When a video is transmitted, the intended recipient receives a copy of the broadcast video; however, an adversary may also gain access to it, which poses a serious concern for data confidentiality and availability. In this paper, a cryptographic algorithm, the Advanced Encryption Standard (AES), is used to conceal the information from malicious intruders. In order to use fewer system resources, the video information is compressed before encryption. Compression algorithms such as the Discrete Cosine Transform, Integer Wavelet Transform, and Huffman coding are employed to reduce the enormous size of videos. Moving Picture Experts Group (MPEG) is a standard employed in video broadcasting, and it consists of different frame types, viz. I, B, and P frames; the latter two frame types carry information similar to that of the first. The I frame is also processed and compressed with the abovementioned schemes to discard any redundant information from it. Because the I frame carries the bulk of the new information, encrypting this frame alone is sufficient to safeguard the whole video. Introducing the various compression algorithms can, however, increase the time taken to process and encrypt a frame. Performance parameters such as PSNR and compression ratio are examined to further analyze the proposed model's effectiveness. The presented approach is therefore superior to other schemes when encryption speed and data processing are taken into consideration. After reversing the complete system, we observed no major impact on the quality of the deciphered video. Simulation results confirm that the presented architecture is an efficient method for enciphering video information.
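A compress-then-encrypt sketch of the ordering described above: only the already compressed I-frame payload is AES-encrypted, while B and P frames would be left untouched. zlib stands in for the DCT/IWT/Huffman stages, and the key, IV, and frame bytes are made-up example values; this is not the paper's implementation.

```python
# Compress the keyframe first, then AES-encrypt only the compressed bytes.
import os, zlib
from Crypto.Cipher import AES            # pip install pycryptodome
from Crypto.Util.Padding import pad, unpad

key = os.urandom(16)                     # 128-bit AES key (example only)
iv = os.urandom(16)

def protect_i_frame(frame: bytes) -> bytes:
    compressed = zlib.compress(frame)    # fewer bytes to encrypt -> less work
    return AES.new(key, AES.MODE_CBC, iv).encrypt(pad(compressed, AES.block_size))

def recover_i_frame(blob: bytes) -> bytes:
    compressed = unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(blob), AES.block_size)
    return zlib.decompress(compressed)

i_frame = bytes(range(256)) * 64         # stand-in for raw I-frame data
cipher_blob = protect_i_frame(i_frame)
assert recover_i_frame(cipher_blob) == i_frame
print(f"I frame: {len(i_frame)} B, encrypted payload: {len(cipher_blob)} B")
```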

