High Throughput Compression of Double-Precision Floating-Point Data

Author(s):  
Martin Burtscher ◽  
Paruj Ratanaworabhan
2016 ◽  
Author(s):  
Charles S. Zender

Abstract. Lossy compression schemes can help reduce the space required to store the false precision (i.e., scientifically meaningless data bits) that geoscientific models and measurements generate. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least-significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving, which quantizes values solely by zeroing bits. Our variant eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression schemes to achieve the actual reduction in storage space, so we tested it by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by uncompressed and compressed climate data by up to 50 % and 20 %, respectively, for single-precision data (the most common case for climate data). When used aggressively (i.e., preserving only 1–3 decimal digits of precision), Bit Grooming produces storage reductions comparable to other quantization techniques such as linear packing. Unlike linear packing, however, Bit Grooming works on the full representable range of floating-point data. Bit Grooming reduces the volume of single-precision compressed data by roughly 10 % per decimal digit quantized (or "groomed") after the third such digit, up to a maximum reduction of about 50 %. The potential reduction is greater for double-precision datasets. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required of data users or readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
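The alternating shave/set quantization described in the abstract can be sketched in a few lines. This is a minimal classical sketch, not the paper's implementation: it operates on IEEE-754 single-precision values, and it specifies precision as a count of retained explicit mantissa bits (`keep_bits`) rather than the decimal digits the paper uses; the function name and parameters are illustrative.

```python
import struct

def bit_groom(values, keep_bits=15):
    """Alternately shave (zero) and set (one) the trailing mantissa bits
    of consecutive IEEE-754 binary32 values, keeping `keep_bits` of the
    23 explicit mantissa bits. Even-indexed values are shaved (biased
    low), odd-indexed values are set (biased high), so the bias cancels
    in the mean."""
    drop = 23 - keep_bits          # number of trailing mantissa bits groomed
    tail = (1 << drop) - 1        # mask covering those trailing bits
    out = []
    for i, v in enumerate(values):
        bits = struct.unpack('<I', struct.pack('<f', v))[0]
        if i % 2 == 0:
            bits &= ~tail & 0xFFFFFFFF   # shave: zero the trailing bits
        else:
            bits |= tail                  # set: one the trailing bits
        out.append(struct.unpack('<f', struct.pack('<I', bits))[0])
    return out
```

Because shaving rounds positive values down and setting rounds them up, consecutive groomed values bracket the original, which is what removes the systematic low bias of pure Bit Shaving.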


2009 ◽  
Vol 58 (1) ◽  
pp. 18-31 ◽  
Author(s):  
Martin Burtscher ◽  
Paruj Ratanaworabhan

2021 ◽  
Author(s):  
Meiyu Xu ◽  
Dayong Lu ◽  
Xiaoyun Sun

Abstract. In the past few decades, quantum computation has become increasingly attractive due to its remarkable performance. Quantum image scaling is a common geometric transformation in quantum image processing; however, no quantum floating-point version of it exists. Is there a corresponding scaling scheme for 2-D and 3-D floating-point data? The answer is yes. In this paper, we present a quantum scheme for scaling floating-point data up and down using trilinear interpolation in 3-D space. This scheme offers better performance (in terms of the precision of floating-point numbers) for realizing quantum floating-point algorithms than previous classical approaches. The Converter module we propose converts fixed-point numbers to floating-point numbers of arbitrary size with p + q qubits based on the IEEE-754 format, rather than only 32-bit single precision, 64-bit double precision, or 128-bit extended precision. Nearest-neighbor and bilinear interpolation are commonly used to realize quantum image scaling algorithms, but they are not applicable in higher-dimensional spaces. This paper therefore proposes trilinear interpolation of floating-point numbers in 3-D space to achieve quantum scaling-up and scaling-down algorithms for 3-D floating-point data. Finally, the quantum circuits for scaling 3-D floating-point data up and down are designed.
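For reference, the interpolation kernel underlying the quantum scheme is ordinary trilinear interpolation: interpolate linearly along each of the three axes in turn, blending the eight surrounding grid points. A minimal classical sketch (not the quantum circuit construction; names and the nested-list grid layout are illustrative):

```python
def trilinear(grid, x, y, z):
    """Trilinear interpolation on a 3-D grid of scalars (nested lists
    indexed grid[x][y][z]) at fractional coordinates (x, y, z)."""
    x0, y0, z0 = int(x), int(y), int(z)
    x1 = min(x0 + 1, len(grid) - 1)
    y1 = min(y0 + 1, len(grid[0]) - 1)
    z1 = min(z0 + 1, len(grid[0][0]) - 1)
    xd, yd, zd = x - x0, y - y0, z - z0
    # interpolate along x on the four edges of the surrounding cell
    c00 = grid[x0][y0][z0] * (1 - xd) + grid[x1][y0][z0] * xd
    c01 = grid[x0][y0][z1] * (1 - xd) + grid[x1][y0][z1] * xd
    c10 = grid[x0][y1][z0] * (1 - xd) + grid[x1][y1][z0] * xd
    c11 = grid[x0][y1][z1] * (1 - xd) + grid[x1][y1][z1] * xd
    # then along y, then along z
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    return c0 * (1 - zd) + c1 * zd
```

Scaling a 3-D volume up or down amounts to evaluating this kernel at the fractional source coordinates of each destination voxel; the paper's contribution is carrying this out on floating-point data inside a quantum circuit.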


2009 ◽  
Vol 29-1 (1) ◽  
pp. 49-49
Author(s):  
Kentaro SANO ◽  
Kazuya KATAHIRA ◽  
Satoru YAMAMOTO

2013 ◽  
Vol 694-697 ◽  
pp. 1093-1097
Author(s):  
Zhao Xue ◽  
Liu Quan ◽  
Xiao Fei Wang

This article discusses the implementation of a one-dimensional Kalman filter algorithm as an FPGA hardware IP core. First, double-precision floating-point matrix operations are programmed on the FPGA. Then the Kalman filter algorithm is implemented in MATLAB to verify that the algorithm is correct. Finally, the MATLAB implementation is translated into VHDL, and 64-bit double-precision floating-point arithmetic is used to realize the 1-D Kalman filter IP core, enabling the Kalman filter to meet both the high-precision and high-speed requirements of this complex algorithm.
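The 1-D (scalar) Kalman filter the article implements in hardware reduces to a short predict/update loop. A minimal sketch of the MATLAB-verification stage in Python (parameter names and noise values are illustrative, not taken from the article):

```python
def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant-state model.
    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict: state is modeled as constant, uncertainty grows by q
        p = p + q
        # update: Kalman gain blends prediction and measurement
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Each iteration uses only a handful of double-precision multiplies, adds, and one divide, which is why the algorithm maps cleanly onto a floating-point FPGA IP core.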


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 727
Author(s):  
Eric J. Ma ◽  
Arkadij Kummer

We present a case study applying hierarchical Bayesian estimation to high-throughput protein melting-point data measured across the tree of life. We show that the model imputes reasonable melting temperatures even in the face of unreasonably noisy data. Additionally, we demonstrate how to use the variance of the melting-temperature posterior-distribution estimates to enable principled decision-making in common high-throughput measurement tasks, and we contrast this decision-making workflow with simple maximum-likelihood curve fitting. We conclude with a discussion of the relative merits of each workflow.
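The maximum-likelihood baseline contrasted above typically means fitting a two-state sigmoid melting curve to each protein independently and reading off the midpoint Tm (least squares is the ML estimate under Gaussian noise). A minimal sketch using a grid search rather than a numerical optimizer; the model, function names, and grids are illustrative, not the paper's code:

```python
import math

def melting_curve(t, tm, slope):
    # two-state model: fraction unfolded as a sigmoid in temperature,
    # with midpoint tm and transition width `slope`
    return 1.0 / (1.0 + math.exp(-(t - tm) / slope))

def fit_tm(temps, signal, tm_grid, slope_grid):
    """Least-squares (Gaussian maximum-likelihood) fit of the two-state
    melting curve by exhaustive grid search over (tm, slope)."""
    best, best_sse = None, float('inf')
    for tm in tm_grid:
        for s in slope_grid:
            sse = sum((melting_curve(t, tm, s) - y) ** 2
                      for t, y in zip(temps, signal))
            if sse < best_sse:
                best, best_sse = (tm, s), sse
    return best
```

Fitting each melting curve in isolation like this yields a single point estimate per protein; the hierarchical Bayesian approach instead shares statistical strength across proteins and returns a posterior whose variance supports the decision-making workflow described in the abstract.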

