lossless data compression
Recently Published Documents


TOTAL DOCUMENTS

284
(FIVE YEARS 46)

H-INDEX

18
(FIVE YEARS 3)

2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Parameshwaran Ramalingam ◽  
Abolfazl Mehbodniya ◽  
Julian L. Webber ◽  
Mohammad Shabaz ◽  
Lakshminarayanan Gopalakrishnan

Telemetric data is large in size, requiring substantial storage space and transmission time, which poses a significant obstacle to storing and sending it. Lossless data compression (LDC) algorithms have evolved to process telemetric data effectively and efficiently, achieving a high compression ratio in a short processing time. Compressing telemetric data reduces both the required storage space and the communication bandwidth. Although various studies on the compression of telemetric data have been conducted, the nature of telemetric data makes compression extremely difficult. The purpose of this study is to offer a subsampled and balanced recurrent neural lossless data compression (SB-RNLDC) approach for increasing the compression ratio while decreasing the compression time. This is accomplished through the development of two models: one for subsampled, averaged telemetry data preprocessing and another for balanced recurrent neural lossless data compression (BRN-LDC). Subsampling and averaging are performed at the preprocessing stage using an adjustable sampling factor. A balanced compression interval (BCI) is used to encode the data according to the probability measurement during the LDC stage. This work also compares differential compression techniques directly. The results demonstrate that balancing-based LDC can reduce compression time and improve dependability, and the final experimental results show that the proposed model enhances data-compression performance compared to existing methodologies.
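The subsampled-and-averaged preprocessing stage described in this abstract can be illustrated with a minimal sketch. The function name and its behavior are assumptions for illustration only, not the paper's implementation; the paper's adjustable sampling factor is modeled here as a plain block size.

```python
def subsample_average(samples, factor):
    """Reduce a telemetry sequence by averaging each block of
    `factor` consecutive samples (hypothetical sketch of the
    preprocessing stage, not the paper's code)."""
    if factor < 1:
        raise ValueError("sampling factor must be >= 1")
    out = []
    for i in range(0, len(samples), factor):
        block = samples[i:i + factor]
        out.append(sum(block) / len(block))
    return out

# A factor of 2 halves the data volume before the LDC stage.
print(subsample_average([10, 12, 20, 22, 30, 30], 2))  # [11.0, 21.0, 30.0]
```

Raising the sampling factor trades reconstruction fidelity for a smaller input to the compression stage, which is the tension the paper's BCI encoding is meant to balance.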


Author(s):  
Sanjana Rao ◽  
Vidyashree T S ◽  
Manasa M ◽  
Bindushree V ◽  
C. Gururaj

2021 ◽  
pp. 391-410
Author(s):  
Shinichi Yamagiwa

Abstract: In this chapter, we introduce aspects of applying data-compression techniques. First, we study the background of recent communication data paths. The focus of this chapter is a fast lossless data-compression mechanism that handles data streams in their entirety. A data stream comprises continuous data with no termination, generated massively by sources such as movies and sensors. In this chapter, we introduce LCA-SLT and LCA-DLT, which accept such data streams, as well as several implementations of these stream-based compression techniques. We also show optimization techniques for optimal implementation in hardware.
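The defining constraint above — compressing a continuous stream with no termination — can be illustrated with a generic software sketch using zlib's streaming interface. This is only a stand-in for the idea; the chapter's LCA-SLT and LCA-DLT are distinct, hardware-oriented mechanisms.

```python
import zlib

def stream_compress(chunks):
    """Compress an unbounded stream chunk by chunk (generic zlib
    illustration of stream-based lossless compression; the chapter's
    LCA-SLT/LCA-DLT mechanisms differ)."""
    comp = zlib.compressobj()
    for chunk in chunks:
        # Z_SYNC_FLUSH emits all data seen so far, so the receiver
        # can decode each chunk immediately without waiting for the
        # stream to terminate.
        yield comp.compress(chunk) + comp.flush(zlib.Z_SYNC_FLUSH)

decomp = zlib.decompressobj()
restored = b""
for block in stream_compress([b"sensor:1,2,3;" * 10, b"sensor:4,5,6;" * 10]):
    restored += decomp.decompress(block)
print(restored[:13])  # b'sensor:1,2,3;'
```

The key property, shared with the hardware approaches, is that each input chunk yields decodable output immediately rather than after an end-of-stream marker.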


Author(s):  
I. Manga ◽  
E. J. Garba ◽  
A. S. Ahmadu

Data compression refers to the process of representing data using a smaller number of bits. Data compression can be lossless or lossy, and many schemes have been developed to perform each. Lossless data compression allows the original data to be reconstructed exactly from the compressed data, while lossy compression allows only an approximation of the original data to be reconstructed. The data to be compressed can be classified as image, text, audio, or video content. Considerable research is being carried out in the area of image compression. This paper surveys the literature on data compression and the techniques used to compress images with lossless compression. In conclusion, the paper reviews schemes used to compress an image using either a single scheme or a combination of two or more schemes.
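Combining two schemes, as the review describes, often works because the first transform reshapes the data so the second compresses it better. A common illustrative pairing (an assumption here, not a scheme from the reviewed papers) is a delta transform followed by run-length encoding on a row of pixel values:

```python
def delta_encode(row):
    """Delta transform: store each pixel as the difference from its
    left neighbour, which clusters smooth-image values near zero."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def rle_encode(values):
    """Run-length encode: [value, run length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

row = [100, 100, 100, 101, 102, 103, 103, 103]
print(rle_encode(delta_encode(row)))  # [[100, 1], [0, 2], [1, 3], [0, 2]]
```

Neither step alone compresses this row well, but the delta pass turns a slowly varying ramp into repeated small values that RLE can collapse — the same rationale behind the multi-scheme combinations the paper reviews.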


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4602
Author(s):  
Shinichi Yamagiwa ◽  
Yuma Ichinomiya

Video applications have become one of the major services in the engineering field; they are implemented by server–client systems connected via the Internet, broadcasting services for mobile devices such as smartphones, and surveillance cameras for security. Currently, the majority of video encoding mechanisms that reduce the data rate are lossy compression methods such as the MPEG format. However, for special needs such as high-speed communication in display applications and high-accuracy object detection from a video stream, we need an encoding mechanism without any loss of pixel information, called visually lossless compression. This paper focuses on Adaptive Differential Pulse Code Modulation (ADPCM), which encodes a data stream at a constant bit length per data element. However, conventional ADPCM has no mechanism to control the encoding bit length dynamically. We propose a novel ADPCM that provides variable bit-length control, called ADPCM-VBL, for the encoding/decoding mechanism. Furthermore, because the data encoded by ADPCM is expected to have low entropy, we reduce the amount of data further by applying lossless data compression. Combining ADPCM-VBL with lossless data compression, this paper proposes a video transfer system that autonomously controls throughput in the communication data path. Through evaluations focusing on encoding performance and image quality, we confirm that the proposed mechanisms work effectively for applications that need visually lossless compression by encoding the video stream with low latency.
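The fixed-bit-length property of conventional ADPCM that this paper builds on can be sketched as a toy encoder/decoder: each sample is reduced to one `bits`-wide code for the quantized difference from a running prediction, with a step size that adapts up or down. Everything below (adaptation rule, step bounds) is a simplified illustration, not the paper's ADPCM-VBL.

```python
def adpcm_encode(samples, bits=4):
    """Toy ADPCM: quantize each difference from the running prediction
    into a fixed `bits`-wide code (illustrative sketch only)."""
    levels = 1 << (bits - 1)              # codes span -levels .. levels-1
    pred, step, codes = 0, 1, []
    for s in samples:
        code = max(-levels, min(levels - 1, round((s - pred) / step)))
        codes.append(code)
        pred += code * step               # decoder tracks the same pred
        # Adapt: near-saturated codes grow the step, small ones shrink it.
        step = step * 2 if abs(code) >= levels - 1 else max(1, step // 2)
    return codes

def adpcm_decode(codes, bits=4):
    levels = 1 << (bits - 1)
    pred, step, out = 0, 1, []
    for code in codes:
        pred += code * step
        out.append(pred)
        step = step * 2 if abs(code) >= levels - 1 else max(1, step // 2)
    return out
```

Because every sample costs exactly `bits` bits regardless of content, the output rate is constant — which is precisely the rigidity the paper's variable bit-length control (ADPCM-VBL) is designed to relax.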


Author(s):  
Gody Mostafa ◽  
Abdelhalim Zekry ◽  
Hatem Zakaria

When transmitting data in digital communication, it is desirable that the number of transmitted bits be as small as possible, so many techniques are used to compress the data. Lempel-Ziv is one of the most commonly used lossless data compression algorithms. In this paper, a Lempel-Ziv data compression algorithm was implemented in VHDL. The work is devoted to improving the compression rate, space saving, and resource utilization of the Lempel-Ziv algorithm using a systolic array approach. The developed design is validated with VHDL simulations using Xilinx ISE 14.5 and synthesized on a Virtex-6 FPGA chip. The results show that our design is efficient, providing high compression rates and space-saving percentages as well as improved utilization. Throughput is increased by 50% and the design area is decreased by more than 23%, with a high compression ratio compared to comparable previous designs.
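For readers unfamiliar with the algorithm family being accelerated, a minimal LZ77-style encoder in software looks like the sketch below: it replaces repeated substrings with (offset, length, next byte) triples. This is a generic reference-model sketch, not the paper's VHDL design, whose contribution is mapping the match search onto a systolic array.

```python
def lz77_compress(data, window=255):
    """Minimal LZ77-style encoder emitting (offset, length, next_byte)
    triples (software sketch of the Lempel-Ziv family)."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for off in range(max(0, i - window), i):
            length = 0
            # Allow overlapping matches; keep one byte as the literal.
            while (i + length < len(data) - 1
                   and data[off + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - off, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    buf = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            buf.append(buf[-off])         # copy from the sliding window
        buf.append(nxt)
    return bytes(buf)
```

The inner match-search loop is the quadratic bottleneck in software; a systolic array performs those comparisons in parallel, one window position per processing element, which is where the paper's throughput gain comes from.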


2021 ◽  
Vol 45 (1) ◽  
pp. 329-349
Author(s):  
Branslav Mados ◽  
Zuzana Bilanová ◽  
Ján Hurtuk

Lossless data compression algorithms exploit statistical redundancy to represent data using fewer bits than the original uncompressed data. Run-Length Encoding (RLE) is one of the simplest lossless compression algorithms, both in terms of understanding its principles and software implementation and in terms of temporal and spatial complexity. When this principle is applied to the individual bits of the original uncompressed data without respecting byte boundaries, the approach is referred to as bit-level Run-Length Encoding. The lightweight lossless data compression algorithm proposed in this paper optimizes bit-level RLE data compression: it uses a special encoding of repeating data blocks and, where necessary, combines it with a delta data transformation or with representation of the data in its original form, to increase compression efficiency compared to a conventional bit-level RLE approach. The advantage of the proposed algorithm is the low time and memory consumption that are basic features of RLE, together with an increased compression ratio compared to the classical bit-level RLE approach.
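The conventional bit-level RLE baseline that the paper improves on can be sketched as follows: because runs of 0s and 1s must alternate, it suffices to store the first bit once and then only the run lengths. This is an illustration of the baseline idea, not the paper's optimized algorithm.

```python
def bit_rle_encode(bits):
    """Bit-level RLE baseline: runs of 0s and 1s alternate, so store
    the first bit and then only the run lengths."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs

def bit_rle_decode(first_bit, runs):
    out, bit = [], first_bit
    for length in runs:
        out.extend([bit] * length)
        bit ^= 1                          # runs alternate 0/1
    return out

bits = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]
print(bit_rle_encode(bits))  # (0, [4, 2, 1, 3])
```

The weakness visible here is the same one the paper targets: data with short runs (like the lone `0` above) costs more to store as a run length than in its original form, which motivates the paper's fallback to delta transformation or raw representation.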

