random codes
Recently Published Documents

TOTAL DOCUMENTS: 88 (five years: 22)
H-INDEX: 15 (five years: 1)

Author(s):  
Divyansh Joshi

Abstract: Identity theft is a frightening and often very serious concern for everyone. In an effort to provide individuals with peace of mind, a novel risk-mitigation algorithm, the Hybrid Transaction Algorithm (HTA), is presented. Using random codes, the proposed HTA implements two-factor authentication. This kind of user authentication is widely recognized, and many businesses have adopted it as a security feature. It can be used to identify people and to provide a secure method of buying products online. In the suggested method, users log into their card accounts through a mobile application to view a randomly generated code, which is then entered when required on an online retailer's website to verify the person making the transaction. This reduces the chance of an unauthorized user exploiting someone else's details to make fraudulent transactions: identity thieves cannot use stolen card information unless they also hold a valid code. This, in turn, protects both the customer and the credit card companies, who may otherwise be financially affected. A case study demonstrates how the methodology can safeguard someone whose credit card details have been stolen.
Keywords: Two-Factor Authentication; Hybrid Transaction Algorithm (HTA); AES Encryption; SHA-256.
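The abstract does not specify how the random codes are produced or verified; a minimal sketch of one plausible scheme, a counter-based HMAC-SHA-256 code in the spirit of HOTP (the function names, 6-digit format, and 20-byte secret are our assumptions, not the paper's HTA), could look like this:

```python
import hashlib
import hmac
import secrets
import struct

def new_card_secret() -> bytes:
    """Per-card secret provisioned into the mobile application."""
    return secrets.token_bytes(20)

def transaction_code(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP-style code: HMAC-SHA-256 of a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify(secret: bytes, counter: int, code: str) -> bool:
    """Merchant/issuer-side check with a constant-time comparison."""
    return hmac.compare_digest(transaction_code(secret, counter), code)
```

In use, the app would display `transaction_code(secret, counter)` and the retailer's site would call `verify` with the same counter; both sides must agree on how the counter advances.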


2021 ◽  
Author(s):  
Vivian Papadopoulou ◽  
Marzieh Hashemipour-Nazari ◽  
Alexios Balatsoukas-Stimming
Keyword(s):  

Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 539
Author(s):  
Ralf R. Müller

In 2017, Polyanskiy showed that the trade-off between power and bandwidth efficiency for massive Gaussian random access is governed by two fundamentally different regimes: low power and high power. For both regimes, tight performance bounds were found by Zadik et al. in 2019. This work utilizes recent results on the exact block error probability of Gaussian random codes in additive white Gaussian noise to propose practical methods, based on iterative soft decoding, that closely approach these bounds. In the low power regime, this work finds that orthogonal random codes can be applied directly. In the high power regime, a more sophisticated effort is needed: this work shows that power-profile optimization by means of linear programming, as pioneered by Caire et al. in 2001, is a promising strategy. The proposed combination of orthogonal random coding and iterative soft decoding even outperforms the existence bounds of Zadik et al. in the low power regime and is very close to the non-existence bounds for message lengths around 100 and above. Finally, the approach of power optimization by linear programming proposed for the high power regime is found to benefit from power imbalances due to fading, which makes it even more attractive for typical mobile radio channels.
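As a toy illustration of what an orthogonal random codebook is (this sketch is ours, not the paper's construction), one can orthonormalize i.i.d. Gaussian vectors by Gram-Schmidt:

```python
import math
import random

def orthogonal_random_codebook(k: int, n: int,
                               rng: random.Random) -> list[list[float]]:
    """Return k mutually orthonormal codewords of length n (k <= n),
    built by Gram-Schmidt on i.i.d. Gaussian random vectors."""
    assert k <= n
    basis: list[list[float]] = []
    while len(basis) < k:
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for b in basis:  # subtract components along earlier codewords
            dot = sum(x * y for x, y in zip(v, b))
            v = [x - dot * y for x, y in zip(v, b)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-9:  # re-draw in the (unlikely) degenerate case
            basis.append([x / norm for x in v])
    return basis
```

Random Gaussian draws are almost surely linearly independent, so the loop essentially never re-draws; the resulting codewords are uncorrelated by construction, which is what makes direct application in the low power regime possible.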


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
J. Pablo Bonilla Ataides ◽  
David K. Tuckett ◽  
Stephen D. Bartlett ◽  
Steven T. Flammia ◽  
Benjamin J. Brown

Abstract: Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code—the XZZX code—offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation.
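The hashing bound mentioned above can be made concrete for the depolarizing channel, where the achievable rate is 1 - H(p) with H(p) the entropy of the Pauli distribution (1 - p, p/3, p/3, p/3); a short numerical sketch (ours, not the paper's code) recovers the well-known zero-rate threshold near 18.9%:

```python
import math

def pauli_entropy(p: float) -> float:
    """Shannon entropy (bits) of the depolarizing Pauli distribution
    (1 - p, p/3, p/3, p/3)."""
    h = -(1.0 - p) * math.log2(1.0 - p) if p < 1.0 else 0.0
    h += -p * math.log2(p / 3.0) if p > 0.0 else 0.0  # three equal Pauli errors
    return h

def hashing_threshold(lo: float = 0.05, hi: float = 0.45) -> float:
    """Bisect for the p at which the hashing-bound rate 1 - H(p) hits zero."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 1.0 - pauli_entropy(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For biased (e.g. dephasing-dominated) channels the same 1 - H calculation with unequal X/Y/Z weights gives the channel-dependent hashing bound that the XZZX code's threshold is compared against.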


2021 ◽  
Vol 4 (1) ◽  
pp. 91-99
Author(s):  
Valeriy S. Hlukhov

Recently, interest has been growing in real quantum computers, which are by nature analog and probabilistic devices, as well as in their digital counterparts, both software and hardware. One approach to building real quantum computers is to use quantum chips; hardware implementations of digital quantum computers rely on field-programmable gate arrays. A digital quantum coprocessor has already been created that has over a thousand digital qubits and can perform algorithms as complex as the quantum Fourier transform. This working coprocessor can already be used to develop various quantum algorithms, algorithms for the interaction between a classical computer and its quantum coprocessor, and different options for building digital qubits. The purpose of this work is to study the effect of the precision with which a digital qubit's state is represented on the probability that the digital quantum coprocessor produces correct results. For the study, a heterogeneous digital quantum coprocessor with thirty-two digital qubits performing the quantum Fourier transform is selected. The article describes the basics of building digital quantum coprocessors. Schemes are given that illustrate the interaction of a classical computer and a quantum coprocessor, the architecture of the coprocessor, and possible structures of its digital qubits. Two variants of the coprocessor are shown: a homogeneous one, with a single pseudo-random code generator and a single comparator, and a heterogeneous one, with a generator and a comparator in each of the digital quantum cells of which the digital qubits consist. Two options for the comparator are also shown: one with a direct functional converter and one with a reverse converter. The work investigates how the length of the qubit state codes in a heterogeneous digital quantum coprocessor influences the probability of forming correct results.
It is shown that the probability of obtaining correct results at the output of the heterogeneous digital coprocessor improves sharply (by up to fifty percent) as the qubit state code length decreases, that is, as the coprocessor hardware cost decreases. With a code length of two bits, the quality of operation of the heterogeneous coprocessor becomes comparable to that of the homogeneous one. The need for additional research in this direction, including with homogeneous coprocessors, is noted.
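The article describes the generator-plus-comparator qubit cells only at the block-diagram level; as a hedged illustration of the principle (all names and details here are our assumptions), a qubit whose |1⟩ probability is stored as a `bits`-wide state code can be read out by comparing that code against a pseudo-random code:

```python
import random

def measure_digital_qubit(state_code: int, bits: int,
                          rng: random.Random) -> int:
    """Comparator model: output 1 when a pseudo-random `bits`-wide code
    falls below the stored state code, i.e. with probability
    state_code / 2**bits."""
    return 1 if rng.getrandbits(bits) < state_code else 0

def estimate_p1(state_code: int, bits: int, shots: int,
                seed: int = 0) -> float:
    """Monte-Carlo estimate of the probability of reading out 1."""
    rng = random.Random(seed)
    return sum(measure_digital_qubit(state_code, bits, rng)
               for _ in range(shots)) / shots
```

Shorter state codes quantize the readout probability more coarsely, which is the kind of precision-versus-hardware-cost trade-off the article studies.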


Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 265
Author(s):  
Ran Tamir (Averbuch) ◽  
Neri Merhav

Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, a variant of the ordinary random binning code ensemble in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and excess-rate probabilities vanish exponentially as the blocklength tends to infinity.
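To make the binning terminology concrete (a toy sketch of ours, not the paper's construction): ordinary random binning assigns each source sequence an independent, uniformly random bin, whereas the semi-deterministic ensemble maps the sequences of a small type class into distinct bins one-to-one, so two sequences of that class can never collide:

```python
import random
from itertools import product

def random_binning(seqs, num_bins: int, rng: random.Random) -> dict:
    """Ordinary ensemble: each sequence gets an independent uniform bin."""
    return {s: rng.randrange(num_bins) for s in seqs}

def semi_deterministic_binning(type_class, num_bins: int) -> dict:
    """Semi-deterministic ensemble: a small type class is spread over
    distinct bins one-to-one, so in-class collisions are impossible."""
    assert len(type_class) <= num_bins
    return {s: i for i, s in enumerate(type_class)}

# Example type class: all length-4 binary sequences of Hamming weight 1.
type_class = [s for s in product((0, 1), repeat=4) if sum(s) == 1]
```

With random binning, two sequences of the same type class can land in the same bin and cause a decoding error; the deterministic partition removes exactly those collisions, which is the mechanism behind the improved error probability described above.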


2021 ◽  
Author(s):  
Zhongxu Liu ◽  
Xiaodi You ◽  
Zixian Wei ◽  
Zhaoming Wang ◽  
Mutong Li ◽  
...  
Keyword(s):  

Author(s):  
V. Chebachev

The article is devoted to SAW RFID systems and considers the problem of collision resolution using a correlation method. Collisions are resolved by means of two correlation features: the correlation function of the signal and the correlation function of the DCT of the signal. A program implementing this collision-resolution algorithm was written. Collisions are created using a simulation model of an RFID tag. The model can generate a tag response with its «own» code or with a random code, or a collision of responses with random codes that may or may not contain the «own» code. The results of the algorithm's operation are presented for reading from two to ten tags simultaneously. In addition to its main function, the algorithm implements error-correcting encoding and decoding, which eliminates errors when there is no tag collision but the channel noise level is high. The error-correcting coding is implemented with Reed-Solomon codes.
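The paper's simulator is not reproduced here; a minimal sketch of the zero-lag correlation test it relies on (the ±1 codes, code length, and detection threshold are our assumptions) detects which known tag codes are present in a superposition of simultaneous responses:

```python
import random

def correlate(signal: list[int], code: list[int]) -> int:
    """Zero-lag correlation of the received signal with one tag code."""
    return sum(s * c for s, c in zip(signal, code))

def detect(signal, code, threshold: float) -> bool:
    """Declare the tag present when the normalized correlation is large."""
    return correlate(signal, code) >= threshold * len(code)

# Five random ±1 tag codes; tags 0..2 answer at once (a collision).
rng = random.Random(1)
codes = [[rng.choice((-1, 1)) for _ in range(256)] for _ in range(5)]
collision = [sum(col) for col in zip(*codes[:3])]
```

Each present code contributes a correlation peak near the code length, while absent codes contribute only small cross-correlation noise; the article additionally correlates the DCT of the signal and wraps the payload in Reed-Solomon coding, which this sketch omits.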

