additive error
Recently Published Documents


TOTAL DOCUMENTS: 47 (FIVE YEARS: 16)
H-INDEX: 6 (FIVE YEARS: 2)

Author(s):  
А.А. ПАВЛОВ
Ю.А. РОМАНЕНКО
А.Н. ЦАРЬКОВ
А.Ю. РОМАНЕНКО
А.А. МИХЕЕВ

The paper substantiates the need for a methodological apparatus for constructing a code that corrects errors in a given number of bytes of information using algebraic syndrome decoding, together with an estimate of the associated hardware and time costs. It presents the rules for constructing such a correcting code, which implements a linear construction procedure with syndrome decoding and an additive error vector; this makes it possible to reduce the hardware cost of the decoding device (in particular, the amount of memory needed to store error-vector values). Expressions are obtained for estimating the hardware costs of encoding and decoding information when the proposed method of burst (packet) error correction is used.
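
The abstract centres on syndrome decoding with stored error vectors. As a minimal illustration of that general mechanism (a sketch only, using the classic Hamming(7,4) single-error-correcting code rather than the authors' byte-error construction), consider:

```python
import numpy as np

# Sketch of syndrome decoding with a precomputed syndrome -> error-vector
# table. This illustrates the mechanism the abstract builds on, NOT the
# authors' byte-error code: Hamming(7,4) corrects a single bit error.

# Parity-check matrix of Hamming(7,4): columns are the binary numbers 1..7.
H = np.array([[int(b) for b in f"{c:03b}"] for c in range(1, 8)]).T  # (3, 7)

# The syndrome table is exactly the decoder memory the paper seeks to
# shrink: only 7 entries here, but it grows quickly for byte-error codes.
table = {}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[tuple(H @ e % 2)] = e

def decode(received):
    s = tuple(H @ received % 2)
    if any(s):                              # nonzero syndrome: one-bit error
        return (received + table[s]) % 2
    return received                         # zero syndrome: accept as-is

# Flip one bit of a codeword and watch it get repaired
# (the all-zero word is a valid codeword of any linear code).
corrupted = np.zeros(7, dtype=int)
corrupted[4] = 1
assert not decode(corrupted).any()
```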


2021
Vol 2 (2)
pp. 1-27
Author(s):  
Debajyoti Bera
Sapv Tharrmashastha

Non-linearity of a Boolean function indicates how far it is from any linear function. Despite several strong results on identifying a linear function and on distinguishing one from a sufficiently non-linear function, we found a surprising lack of work on computing the non-linearity of a function. The non-linearity is related to the Walsh coefficient with the largest absolute value; however, the naive approach of picking the maximum after constructing the Walsh spectrum requires Θ(2^n) queries to an n-bit function. We improve the scenario by designing highly efficient quantum and randomised algorithms that approximate the non-linearity up to an additive error, denoted λ, with query complexities that depend polynomially on λ. We prove lower bounds showing that these are not far from optimal. The number of queries made by our randomised algorithm is linear in n, already an exponential improvement, and the number of queries made by our quantum algorithm is, surprisingly, independent of n. Our randomised algorithm uses a Goldreich-Levin style of navigating all Walsh coefficients, and our quantum algorithm uses a clever combination of Deutsch-Jozsa, amplitude amplification, and amplitude estimation to improve upon the existing quantum versions of the Goldreich-Levin technique.
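
For context, the Θ(2^n)-query baseline mentioned in the abstract is straightforward to state in code: compute the full Walsh spectrum with a fast Walsh-Hadamard transform and read off the non-linearity as 2^{n-1} − max_a |W(a)|/2, a standard identity. This exact, exponential-time computation is the quantity the paper's algorithms approximate; it is a reference sketch, not the paper's algorithm.

```python
# Exact (exponential-time) non-linearity via the fast Walsh-Hadamard
# transform: the Theta(2^n) baseline that the paper's algorithms improve on.

def walsh_spectrum(truth_table):
    """truth_table: 0/1 values of f over all 2^n inputs, in standard order."""
    w = [(-1) ** bit for bit in truth_table]    # signs (-1)^f(x)
    n = len(w)
    h = 1
    while h < n:                                # in-place butterfly passes
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table):
    spectrum = walsh_spectrum(truth_table)
    return len(truth_table) // 2 - max(abs(c) for c in spectrum) // 2

# Example: f(x1, x2) = x1 AND x2 has non-linearity 1.
assert nonlinearity([0, 0, 0, 1]) == 1
```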


2021
Vol 52 (1)
pp. 37-50
Author(s):
Yacob Abrehe Zereyesus
Allen M. Featherstone
Michael R. Langemeier

Author(s):  
A.A. Pavlov
Yu.A. Romanenko
A.N. Tsarkov
A. Yu. Romanenko
A.A. Mikheev

In digital data transmission systems, cyclic codes that detect and correct byte (packet) errors are widely used to improve noise immunity. An error packet is understood to mean errors whose multiplicity does not exceed the number of bits b in the information block. The most effective codes for correcting byte errors are Reed-Solomon codes, which correct errors in a given number of bytes of information. The main problem with cyclic (sequential) codes is the long delay caused by the division operation needed to obtain the remainder, which is unacceptable for digital data transmission systems operating in real time. For example, when a Reed-Solomon code with a code set length of 69 information bits is used, decoding by the Euclidean algorithm requires 96 clock cycles, which cannot ensure real-time channel operation. To eliminate this drawback, one should use burst-error-correcting codes that implement an algebraic coding procedure with syndrome decoding of information. However, replacing the cyclic encoding (decoding) procedure with a syndrome-based one sharply increases hardware costs, since the decoder then requires a memory block for storing error-vector values and a decoder for generating error addresses from the resulting syndrome. There is therefore a need for a methodological apparatus for constructing a code that corrects errors in a given number of bytes of information with algebraic syndrome decoding, together with an estimate of the associated hardware and time costs; this work substantiates that need. The paper presents the rules for constructing such a correcting code, which implements a linear construction procedure with syndrome decoding and an additive error vector, making it possible to reduce the hardware cost of the decoding device (the amount of memory for storing error-vector values). For the developed byte-error correction method, expressions are obtained for evaluating the number of: check bits; additive error vectors that do not require storage in a memory block; and error vectors for burst errors that occur simultaneously in adjacent bytes, which do require storage in a memory block. A comparative assessment of the hardware and time redundancy of the proposed packet-error correction method against existing methods is carried out.
The proposed method of correcting errors in a given number of bytes of information with additive formation of the error vector differs from existing ones in that it allows one to: correct burst errors with algebraic syndrome decoding (excluding the cyclic procedure for encoding and decoding information); reduce the hardware cost of the decoding device, since in most cases no hardware is needed to store error vectors; reduce the time spent on encoding and decoding information, ensuring real-time operation of the data transmission channel; and increase the reliability of the transmitted information by detecting uncorrectable byte errors. Thus, the proposed method has a regular and relatively simple code-construction procedure, which reduces the hardware and time costs of encoding and decoding information.
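
Reading between the lines of the abstract, the additive formation of the error vector amounts to composing a cross-byte burst from per-byte components by XOR, so that only single-byte error patterns need stored vectors. The toy sketch below illustrates that principle only; it is our reading, not the authors' exact construction.

```python
# Toy illustration of additive (XOR) error-vector formation: a burst that
# straddles two adjacent bytes splits into per-byte components, so the
# decoder stores vectors only for errors confined to one byte and forms
# cross-byte bursts on the fly. A sketch of the principle, nothing more.

B = 8  # byte width in bits

def split_by_byte(error, length):
    """Decompose an error pattern (an int bitmask) into per-byte components."""
    parts = []
    for k in range(0, length, B):
        part = error & (((1 << B) - 1) << k)
        if part:
            parts.append(part)
    return parts

burst = 0b0000_0011_1100_0000        # 4-bit burst straddling bytes 0 and 1
parts = split_by_byte(burst, 16)
assert len(parts) == 2               # one stored vector per affected byte

recombined = 0
for p in parts:                      # additive formation of the full burst
    recombined ^= p
assert recombined == burst
```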


Quantum
2020
Vol 4
pp. 329
Author(s):  
Tomoyuki Morimae
Suguru Tamaki

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson sampling model, the one-clean-qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions, where even exponential-time classical simulations are excluded under certain classical fine-grained complexity conjectures. All these fine-grained results, however, concern the hardness of strong simulation or multiplicative-error sampling. It was open whether any fine-grained quantum supremacy result could be shown for a more realistic setup, namely additive-error sampling. In this paper, we show additive-error fine-grained quantum supremacy (under certain complexity assumptions). As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+T circuits. Similar results should hold for other sub-universal models.


2020
pp. 1831
Author(s):
Abbas Zedan Khalaf
Bashar H Alyasery

In this study, an approach inspired by a standardized calibration method was used to test a laser distance meter (LDM). A laser distance sensor (LDS) was tested against the LDM, and a statistical indicator showed that the former functions in a similar manner to the latter. Regression terms were used to estimate the additive error and the scale correction of the sensors. The specified distance was divided into several segments, each a percentage of the longest, and observed using two sensors, left and right. The sensors were evaluated through the regression between the measured and the reference values. The results were computed using the MINITAB 17 package and the Excel office package. The accuracy achieved in this work was ±4.4 mm + 50.89 ppm for LDS1 and ±4.96 mm + 99.88 ppm for LDS2, based on the LDM accuracy computed over the full range (100 m). These sensors can be very effective for industrial purposes, 3D modeling, and many other applications, especially as they are inexpensive and available in many versions.
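
The regression step described above is easy to make concrete: fitting measured = a + b·reference gives the additive (zero) error as the intercept a and the scale correction as b − 1. The sketch below uses made-up placeholder distances, not the paper's data:

```python
import numpy as np

# Calibration regression sketch: intercept ~ additive error, slope ~ scale.
# The reference/measured values are invented placeholders for illustration.
reference = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])              # m (LDM)
measured  = np.array([10.004, 20.005, 40.006, 60.008, 80.010, 100.012])  # m (LDS)

b, a = np.polyfit(reference, measured, 1)   # slope b, intercept a
print(f"additive error   ~ {a * 1000:+.2f} mm")
print(f"scale correction ~ {(b - 1) * 1e6:+.2f} ppm")
```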


2020
Author(s):  
Qianli Liao

We consider the task of matrix sketching: obtaining a significantly smaller representation of a matrix A while retaining most of its information (in other words, approximating A well). In particular, we investigate a recent approach called Frequent Directions (FD), initially proposed by Liberty [5] in 2013, which has drawn wide attention due to its elegance, nice theoretical guarantees, and outstanding performance in practice. Two follow-up papers, [3] and [2] in 2014, further refined the theoretical bounds and improved the practical performance. In this report, we summarize the three papers and propose a Generalized Frequent Directions (GFD) algorithm for matrix sketching, which captures all the previous FD algorithms as special cases without losing any of the theoretical bounds. Interestingly, our additive error bound seems to apply to the well-performing heuristic iSVD, which previously had no guarantees.
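
For orientation, here is a minimal sketch of the basic Frequent Directions routine that GFD generalizes, following Liberty's outline (fill a buffer, shrink the singular values by the ℓ-th one, repeat). It assumes d ≥ 2ℓ and is illustrative rather than the report's GFD algorithm:

```python
import numpy as np

def frequent_directions(A, ell):
    """Basic FD sketch: returns an ell x d matrix B with B^T B ~ A^T A
    up to small additive error (assumes d >= 2 * ell)."""
    n, d = A.shape
    B = np.zeros((2 * ell, d))
    next_free = 0

    def shrink():
        nonlocal B
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        delta = s[ell - 1] ** 2                       # squared ell-th value
        s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))  # shrink all values
        B = s[:, None] * Vt                           # rows ell.. become zero

    for row in A:
        if next_free == 2 * ell:                      # buffer full: compress
            shrink()
            next_free = ell
        B[next_free] = row
        next_free += 1
    shrink()
    return B[:ell]

# Usage: sketch a 1000 x 50 matrix down to 10 rows.
B = frequent_directions(np.random.randn(1000, 50), 10)
```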


Author(s):  
Ran Ben Basat
Gil Einziger
Michael Mitzenmacher
Shay Vargaftik

Author(s):  
Shant Boodaghians
Federico Fusco
Stefano Leonardi
Yishay Mansour
Ruta Mehta

Efficient and truthful mechanisms to price time on remote servers/machines have been the subject of much work in recent years due to the importance of the cloud market. This paper considers online revenue maximization for a unit-capacity server with non-preemptive jobs in the Bayesian setting: at each time step, one job arrives, with parameters drawn from an underlying distribution. We design an efficiently computable truthful posted-price mechanism, which maximizes revenue in expectation and in retrospect, up to an additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic. We also show that the pricing mechanism is robust to learning the job distribution from samples, where polynomially many samples suffice to obtain near-optimal prices.
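
As a highly simplified illustration of the posted-price idea (one time slot, i.i.d. buyer values, jobs and deadlines abstracted away; this is not the paper's mechanism), a near-optimal price can be estimated from samples by maximizing p · Pr[v ≥ p]:

```python
import numpy as np

# Single-slot posted-price sketch: estimate the price maximizing expected
# revenue p * Pr[v >= p] from samples of an assumed value distribution.
# Everything here (distribution, sample size) is an illustrative assumption.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=10.0, size=10_000)

candidates = np.quantile(samples, np.linspace(0.01, 0.99, 99))
revenues = [p * np.mean(samples >= p) for p in candidates]
best = candidates[int(np.argmax(revenues))]
print(f"posted price ~ {best:.2f}, expected revenue ~ {max(revenues):.2f}")
```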

