Scaling Up and Down of 3-D Floating-point Data in Quantum Computation

Author(s):  
Meiyu Xu ◽  
Dayong Lu ◽  
Xiaoyun Sun

Abstract In the past few decades, quantum computation has become increasingly attractive due to its remarkable performance. Quantum image scaling is a common geometric transformation in quantum image processing; however, no quantum floating-point version of it exists yet. Is there a corresponding scaling for 2-D and 3-D floating-point data? The answer is yes. In this paper, we present a quantum scaling-up and scaling-down scheme for floating-point data based on trilinear interpolation in 3-D space. The scheme offers better performance, in terms of floating-point precision, for realizing quantum floating-point algorithms than previous classical approaches. The Converter module we propose converts fixed-point numbers to floating-point numbers for data of arbitrary size, using p + q qubits in the IEEE-754 format, instead of being limited to 32-bit single precision, 64-bit double precision, or 128-bit extended precision. Quantum image scaling algorithms are usually realized with nearest-neighbor or bilinear interpolation, neither of which applies in high-dimensional space. This paper therefore uses trilinear interpolation of floating-point numbers in 3-D space to obtain quantum scaling-up and scaling-down algorithms for 3-D floating-point data. Finally, the corresponding quantum circuits are designed.
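The quantum circuits themselves cannot be reproduced here, but the arithmetic they evaluate is ordinary trilinear interpolation. The following is a minimal classical sketch of scaling a 3-D floating-point volume with trilinear interpolation; the function names and the cubic-volume simplification are ours, not the paper's.

```python
import math

def trilinear(volume, x, y, z):
    """Trilinearly interpolate a 3-D grid (nested lists of floats)
    at fractional coordinates (x, y, z). Assumes every dimension
    has at least two samples."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    # Clamp the base corner so the +1 neighbors stay inside the grid.
    x0 = min(int(math.floor(x)), nx - 2)
    y0 = min(int(math.floor(y)), ny - 2)
    z0 = min(int(math.floor(z)), nz - 2)
    xd, yd, zd = x - x0, y - y0, z - z0
    # Interpolate along x on the four edges of the enclosing cell,
    c00 = volume[x0][y0][z0] * (1 - xd) + volume[x0 + 1][y0][z0] * xd
    c10 = volume[x0][y0 + 1][z0] * (1 - xd) + volume[x0 + 1][y0 + 1][z0] * xd
    c01 = volume[x0][y0][z0 + 1] * (1 - xd) + volume[x0 + 1][y0][z0 + 1] * xd
    c11 = volume[x0][y0 + 1][z0 + 1] * (1 - xd) + volume[x0 + 1][y0 + 1][z0 + 1] * xd
    # then along y,
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    # and finally along z.
    return c0 * (1 - zd) + c1 * zd

def scale_volume(volume, factor):
    """Scale a cubic volume up (factor > 1) or down (factor < 1)."""
    n = len(volume)
    m = max(2, round(n * factor))
    step = (n - 1) / (m - 1)  # map output index -> input coordinate
    return [[[trilinear(volume, i * step, j * step, k * step)
              for k in range(m)] for j in range(m)] for i in range(m)]
```

The quantum scheme evaluates the same weighted sum of the eight surrounding grid values, but on floating-point data encoded in qubit registers.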

2016 ◽  
Author(s):  
Charles S. Zender

Abstract. Lossy compression schemes can help reduce the space required to store the false precision (i.e., scientifically meaningless data bits) that geoscientific models and measurements generate. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving, which quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression schemes to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by uncompressed and compressed climate data by up to 50 % and 20 %, respectively, for single-precision data (the most common case for climate data). When used aggressively (i.e., preserving only 1–3 decimal digits of precision), Bit Grooming produces storage reductions comparable to other quantization techniques such as linear packing. Unlike linear packing, Bit Grooming works on the full representable range of floating-point data. Bit Grooming reduces the volume of single-precision compressed data by roughly 10 % per decimal digit quantized (or "groomed") after the third such digit, up to a maximum reduction of about 50 %. The potential reduction is greater for double-precision datasets. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
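The shave/set alternation described above is straightforward to express at the bit level. Below is a minimal Python sketch of Bit Grooming for single-precision values; the rule of keeping ⌈nsd · log2 10⌉ mantissa bits per requested decimal digit is our approximation, and the NCO reference implementation may retain a slightly different number of guard bits.

```python
import math
import struct

def bit_groom(values, nsd):
    """Quantize floats to about nsd significant decimal digits by
    alternately shaving (zeroing) and setting (to one) the least
    significant mantissa bits of consecutive values."""
    mant_bits = 23                            # IEEE-754 single precision
    keep = min(mant_bits, math.ceil(nsd * math.log2(10)))
    shave_mask = 0xFFFFFFFF & (~0 << (mant_bits - keep))
    set_mask = ~shave_mask & 0x007FFFFF       # tail bits of the mantissa only
    out = []
    for i, v in enumerate(values):
        if v == 0.0:                          # leave exact zeros untouched
            out.append(v)
            continue
        bits = struct.unpack('<I', struct.pack('<f', v))[0]
        bits = bits & shave_mask if i % 2 == 0 else bits | set_mask
        out.append(struct.unpack('<f', struct.pack('<I', bits))[0])
    return out
```

For example, bit_groom([math.pi, math.pi], nsd=3) yields 3.140625 followed by roughly 3.1425779: one value is quantized low and the next high, so the mean of a groomed field stays close to the true mean, which is exactly the low-bias elimination the abstract describes.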


2009 ◽  
Vol 58 (1) ◽  
pp. 18-31 ◽  
Author(s):  
Martin Burtscher ◽  
Paruj Ratanaworabhan

2021 ◽  
Vol 37 (2) ◽  
pp. 355-360
Author(s):  
Radu T. Trîmbiţaş

We study the strange behavior in floating-point arithmetic of a function proposed by Nicholas Higham, consisting of repeated square-root extractions followed by the same number of squarings, and we determine its fixed points. For IEEE standard double-precision floating-point numbers the fixed points have the form
\[ x \in \left\{ \left( 1 + k\,\mathrm{eps} \right)^{\frac{1}{\mathrm{eps}}} : k \in \left[ -745 : \tfrac{1}{2} : -\tfrac{1}{2},\; 0 : 709 \right] \right\} \cup \{0\}, \]
where $\mathrm{eps}$ is the machine epsilon and the ranges are written in colon notation (half-integer steps from $-745$ to $-\tfrac{1}{2}$, integer steps from $0$ to $709$).
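The phenomenon is easy to reproduce. Here is a minimal sketch of the Higham-style experiment in IEEE double precision; the choice of 52 iterations (the mantissa width of a double) and the use of a logarithm to read off k are our assumptions, not details from the paper.

```python
import math

def higham(x, n=52):
    """Take the square root n times, then square n times. Exact
    arithmetic would return x; double-precision rounding instead
    collapses x onto a fixed point of the form (1 + k*eps)**(1/eps)."""
    for _ in range(n):
        x = math.sqrt(x)
    for _ in range(n):
        x = x * x
    return x

eps = 2.0 ** -52              # machine epsilon for IEEE double precision
for x in (0.5, 2.0, 10.0, 100.0):
    y = higham(x)
    # A fixed point (1 + k*eps)**(1/eps) is approximately e**k,
    # so the natural log of the output reveals k.
    print(f"{x:>6} -> {y!r}   k ~ {math.log(y):.2f}   fixed: {higham(y) == y}")
```

Each printed output lies far from its input (e.g., 10.0 collapses to roughly e^2 ≈ 7.389), yet re-applying the map returns it unchanged, which is what makes these values fixed points.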


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 165740-165747
Author(s):  
Juwon Yun ◽  
Jinyoung Lee ◽  
Woo-Nam Chung ◽  
Cheong Ghil Kim ◽  
Woo-Chan Park
