Compression Used in Qubits Distillation

2014 ◽  
Vol 668-669 ◽  
pp. 1243-1246
Author(s):  
Feng Zhi Li ◽  
Bo Zhong ◽  
Lei Cao ◽  
Chao Ze Wang ◽  
Zhi Di Jiang ◽  
...  

Qubit distillation is used to extract the key in quantum cryptography. However, it may require transmitting a large amount of data when the two parties distill the keys via the synchronous light time and the key pulse positions. To reduce the transmitted data, we compress this information with an arithmetic code. In this paper, an improved qubit distillation scheme has been implemented. Experimental results show that the compression rate of the arithmetic code is about 0.13, and that the amount of data transmitted with compression is less than without it. The scheme is therefore suitable for distilling qubits over a narrow-bandwidth channel.
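
As an aside, the core idea of arithmetic coding is to narrow a real interval according to symbol probabilities. Below is a minimal Python sketch of a static encoder (illustrative only; the paper does not describe its implementation at code level, and the probability model and key stream here are hypothetical):

    from fractions import Fraction

    def arithmetic_encode(message, probs):
        """Return a fraction inside the final interval identifying `message`."""
        # Cumulative table: symbol -> (lower edge of its slice, slice width).
        edges, cum = {}, Fraction(0)
        for sym, p in probs.items():
            edges[sym] = (cum, p)
            cum += p
        low, width = Fraction(0), Fraction(1)
        for sym in message:
            edge, p = edges[sym]
            low += width * edge   # move to the symbol's slice of the interval
            width *= p            # shrink the interval by the symbol's probability
        return low + width / 2    # any point in [low, low + width) decodes back

    # Hypothetical sparse key stream dominated by zeros:
    probs = {"0": Fraction(9, 10), "1": Fraction(1, 10)}
    code = arithmetic_encode("0000000100", probs)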

Author(s):  
Gody Mostafa ◽  
Abdelhalim Zekry ◽  
Hatem Zakaria

When transmitting data in digital communication, it is desirable that the number of transmitted bits be as small as possible, so many techniques are used to compress the data. Lempel-Ziv is one of the most commonly used lossless data compression algorithms. In this paper, a Lempel-Ziv data compression algorithm was implemented through VHDL coding. The work is devoted to improving the compression rate, space saving, and hardware utilization of the Lempel-Ziv algorithm using a systolic-array approach. The developed design is validated with VHDL simulations using Xilinx ISE 14.5 and synthesized on a Virtex-6 FPGA chip. The results show that the design is efficient, providing high compression rates and space-saving percentages as well as improved utilization. Throughput is increased by 50% and the design area is decreased by more than 23%, with a high compression ratio compared to comparable previous designs.
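
For reference, the dictionary-building idea underlying the Lempel-Ziv family can be sketched in a few lines of Python (this shows the LZ78 variant in software; it does not reproduce the paper's hardware systolic-array design):

    def lz78_encode(data):
        """Encode `data` as (dictionary index, next symbol) pairs (index 0 = empty)."""
        dictionary = {"": 0}
        output, phrase = [], ""
        for ch in data:
            if phrase + ch in dictionary:
                phrase += ch              # keep extending the longest known phrase
            else:
                output.append((dictionary[phrase], ch))
                dictionary[phrase + ch] = len(dictionary)  # learn the new phrase
                phrase = ""
        if phrase:                        # flush a trailing phrase, if any
            output.append((dictionary[phrase[:-1]], phrase[-1]))
        return output

    # 'ABABABA' -> [(0, 'A'), (0, 'B'), (1, 'B'), (3, 'A')]
    pairs = lz78_encode("ABABABA")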


Author(s):  
Guilherme Coelho da Silva Stanisce Corrêa ◽  
Rogério Pirk ◽  
Marcelo da Silva Pinho

The field of data compression has evolved over the last decades, and several techniques have been developed to reduce the amount of acquired sensor data that must be transmitted. These techniques are usually classified as lossless or lossy: lossless techniques recover all acquired data exactly, while lossy techniques introduce errors into the data. Each technique has advantages and drawbacks, and the analyst is responsible for choosing the appropriate one for a specific application. This work presents a comparative study of lossy audio formats applied to on-board acoustic data from a launch vehicle. The Opus format achieved a higher compression rate than standard compression techniques, reducing the amount of data to be transmitted through the launcher's telemetry link by a factor of up to 254, and showed the lowest discrepancy from the original data as measured by the mean square error metric.
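
The two figures of merit used in such a comparison can be written down directly (a short Python sketch; the function names are ours, not from the paper):

    import numpy as np

    def compression_factor(raw_bytes, compressed_bytes):
        """How many times smaller the compressed stream is (up to 254 for Opus here)."""
        return raw_bytes / compressed_bytes

    def mean_square_error(original, decoded):
        """Discrepancy between the original and the decoded (lossy) signal."""
        o = np.asarray(original, dtype=float)
        d = np.asarray(decoded, dtype=float)
        return float(np.mean((o - d) ** 2))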


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Akira Kawai ◽  
Takahiro Kageyama ◽  
Ryoichi Horisaki ◽  
Takuro Ideguchi

Broadband, high-resolution, and rapid measurements with dual-comb spectroscopy (DCS) generate a large data stream. We numerically demonstrate significant data compression of DCS spectra using a compressive sensing technique. Our numerical simulation shows a compression rate of more than 100 with a 3% error in mole-fraction estimation for mid-infrared (MIR) DCS of two molecular species under broadband (~30 THz) and high-resolution (~115 MHz) conditions. We also numerically demonstrate that a massively parallel MIR DCS spectrum of 10 different molecular species can be reconstructed at a compression rate of 10.5 with a transmittance error of 0.003 from the original spectrum.
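
For intuition, compressive sensing recovers a sparse signal from far fewer random measurements than signal bins. The sketch below uses ISTA, a standard l1 solver, on synthetic data; it is not the authors' reconstruction algorithm, and all dimensions and names are illustrative:

    import numpy as np

    def ista(A, y, lam=0.1, iters=500):
        """Recover a sparse x from y = A @ x by iterative soft-thresholding."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - A.T @ (A @ x - y) / L        # gradient step on 0.5 * ||Ax - y||^2
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    n, m, k = 1000, 100, 5                       # 10x fewer measurements than bins
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = 1.0   # sparse toy "spectrum"
    A = rng.standard_normal((m, n)) / np.sqrt(m)         # random sensing matrix
    x_hat = ista(A, A @ x_true)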


1998 ◽  
Vol 12 (2) ◽  
pp. 189-210
Author(s):  
Ilan Sadeh

The paper treats data compression from the viewpoint of probability theory, where a certain error probability is tolerable. We obtain bounds on the minimal rate at a given error probability for block coding of general stationary ergodic sources. An application of the theory of large deviations provides numerical methods to compute, for memoryless sources, the minimal compression rate at a given tolerable error probability. Interesting connections between Cramér's functions and Shannon's theory of lossy coding are found.
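
For context, here is a standard large-deviations sketch of the quantities involved (conventional notation; the paper's exact definitions may differ). For a memoryless source with distribution P, Cramér's rate function on empirical distributions Q is the relative entropy

    I(Q) = D(Q \| P) = \sum_x Q(x) \log \frac{Q(x)}{P(x)},

and the minimal fixed-length coding rate when decoding errors with exponent E are tolerated is, roughly,

    R(E) = \max_{Q \,:\, D(Q \| P) \le E} H(Q),

since the encoder must still cover every empirical type whose deviation cost under P lies within the tolerated exponent.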


2018 ◽  
Author(s):  
Andysah Putera Utama Siahaan

Compression is significant in data storage: existing data needs reduction techniques so that it can be stored efficiently. Compression is useful because it reduces hard-disk storage requirements, and it also reduces the cost of transmitting data over a network. Many methods are used for compression. This study compares two of them, the Elias Delta and Unary algorithms, which have different working processes; their compression results are also very competitive. On some data, the Elias Delta algorithm is superior in saving space during compression while the Unary algorithm compresses poorly; on other data, the Unary algorithm is superior while the Elias Delta algorithm falls behind. In short, the relative performance of the two algorithms depends on the data.
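
A minimal Python sketch of the two codes compared in the study, using bit strings for clarity rather than a packed implementation (the encoding conventions below are the standard textbook ones, which may differ in detail from the paper's):

    def unary(n):
        """Unary code for a positive integer n: n - 1 ones followed by a zero."""
        return "1" * (n - 1) + "0"

    def elias_gamma(n):
        """Elias gamma: zeros announcing the bit length, then the binary form of n."""
        b = bin(n)[2:]
        return "0" * (len(b) - 1) + b

    def elias_delta(n):
        """Elias delta: gamma-code the bit length of n, then n without its leading 1."""
        b = bin(n)[2:]
        return elias_gamma(len(b)) + b[1:]

    # unary(9) = '111111110' (9 bits) vs elias_delta(9) = '00100001' (8 bits);
    # unary grows linearly with n, Elias delta roughly logarithmically, which is
    # why their relative performance depends on the distribution of the data.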


2008 ◽  
Vol E91-D (3) ◽  
pp. 726-735 ◽  
Author(s):  
M. ARAI ◽  
S. FUKUMOTO ◽  
K. IWASAKI ◽  
T. MATSUO ◽  
T. HIRAIDE ◽  
...  
