An Application of p-Fibonacci Error-Correcting Codes to Cryptography

Mathematics ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 789
Author(s):  
Emanuele Bellini ◽  
Chiara Marcolla ◽  
Nadir Murru

In addition to their usefulness in proving one’s identity electronically, identification protocols based on zero-knowledge proofs allow the design of secure cryptographic signature schemes by means of the Fiat–Shamir transform or similar constructs. This approach has been followed by many cryptographers during the NIST (National Institute of Standards and Technology) standardization process for quantum-resistant signature schemes. NIST candidates include solutions in different settings, such as lattices, multivariate equations, and multiparty computation. While error-correcting codes may also be used, with a few exceptions they do not provide very practical parameters. In this manuscript, we explored the possibility of using the error-correcting codes proposed by Stakhov in 2006 to design an identification protocol based on zero-knowledge proofs. We showed that this type of code offers a valid alternative in the error-correcting-code setting for building such protocols and, consequently, quantum-resistant signature schemes.
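
To make the Fiat–Shamir transform concrete: below is a minimal sketch (not the p-Fibonacci scheme of this paper) showing how hashing the prover's commitment turns an interactive Schnorr-style identification protocol into a non-interactive signature scheme. The group parameters p, q, g are toy values chosen only for illustration and are far too small for real security.

```python
import hashlib
import secrets

# Toy Schnorr-style group: p = 2q + 1 with q prime; g generates the order-q subgroup.
q = 1019
p = 2039          # p = 2*q + 1, both prime
g = 4             # 4 = 2^2 is a quadratic residue mod p, hence has order q

def H(*parts):
    """Challenge derivation: this hash call IS the Fiat-Shamir transform."""
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q)          # secret key
    return x, pow(g, x, p)            # public key X = g^x

def sign(x, msg):
    r = secrets.randbelow(q)          # ephemeral scalar of the prover
    R = pow(g, r, p)                  # commitment of the sigma protocol
    c = H(R, msg)                     # challenge c = H(R || m), replacing the verifier
    s = (r + c * x) % q               # response
    return c, s

def verify(X, msg, sig):
    c, s = sig
    # Recompute the commitment: R' = g^s * X^{-c} mod p (X has order q, so X^{-c} = X^{q-c})
    R = pow(g, s, p) * pow(X, (q - c) % q, p) % p
    return c == H(R, msg)

x, X = keygen()
sig = sign(x, "hello")
assert verify(X, "hello", sig)
assert not verify(X, "tampered", sig)
```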

Author(s):  
Levon Arsalanyan ◽  
Hayk Danoyan

The Nearest Neighbor search algorithm considered in this paper is the well-known Elias algorithm. It uses error-correcting codes to construct appropriate hash-coding schemes, which preprocess the data into lists; each list is contained in a sphere centered at a codeword. The algorithm was first considered for perfect codes, where the spheres, and consequently the lists, do not intersect. Since such codes exist only for a limited set of parameters, the algorithm is also considered for certain generalizations of perfect codes, in which the same data point may be contained in different lists. A formula for the time complexity of the algorithm is obtained for these cases using the coset weight structures of the codes in question.
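
A minimal sketch of the hash-coding idea, using the perfect [7,4] Hamming code: each point is stored in the list of the sphere (codeword) containing it, and a query searches only its own list. The toy data set and the one-bucket query policy are simplifying assumptions; the paper's coset-weight complexity analysis is not reproduced.

```python
from collections import defaultdict
from itertools import product

# For the [7,4] Hamming code, column j (1-indexed) of the parity-check matrix
# is the binary expansion of j, so a nonzero syndrome names the flipped bit.
def syndrome(v):
    s = 0
    for j, bit in enumerate(v, start=1):
        if bit:
            s ^= j
    return s

def nearest_codeword(v):
    """Decode v to the unique codeword within Hamming distance 1 (perfect code)."""
    s = syndrome(v)
    if s:
        v = list(v)
        v[s - 1] ^= 1          # flip the bit indicated by the syndrome
    return tuple(v)

# Preprocessing: hash every data point into the list of its sphere center.
data = list(product([0, 1], repeat=7))[::5]      # toy data set
buckets = defaultdict(list)
for point in data:
    buckets[nearest_codeword(point)].append(point)

def nn_query(x):
    """Scan only the list whose sphere contains x (a true neighbor just outside
    the sphere may be missed -- the trade-off the complexity analysis studies)."""
    candidates = buckets.get(nearest_codeword(x), [])
    dist = lambda a, b: sum(i != j for i, j in zip(a, b))
    return min(candidates, key=lambda c: dist(c, x), default=None)

print(nn_query((1, 0, 1, 1, 0, 0, 1)))
```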


1998 ◽  
Vol 57 (3) ◽  
pp. 367-376 ◽  
Author(s):  
Chi-Kwong Li ◽  
Ingrid Nelson

We characterise all the perfect k-error-correcting codes that can be defined on the graph associated with the Towers of Hanoi puzzle. In particular, a short proof of the existence of a perfect 1-error-correcting code on such a graph is given.
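
The characterization itself is not reproduced here, but the object in question is easy to exhibit computationally. The sketch below builds the Towers of Hanoi graph for a small number of disks and finds, by backtracking, a set of states whose closed neighborhoods partition the vertex set, i.e., a perfect 1-error-correcting code (which the paper proves exists).

```python
from itertools import product

N_DISKS = 3   # toy size; the graph has 3**N_DISKS vertices

def neighbors(state):
    """Legal Tower of Hanoi moves. state[i] = peg of disk i (disk 0 smallest)."""
    out = []
    top = {}                       # smallest (topmost) disk on each peg
    for disk, peg in enumerate(state):
        if peg not in top:
            top[peg] = disk
    for a in range(3):
        for b in range(3):
            # the smaller of the two top disks may move between pegs a and b
            if a != b and a in top and (b not in top or top[a] < top[b]):
                s = list(state)
                s[top[a]] = b
                out.append(tuple(s))
    return out

vertices = list(product(range(3), repeat=N_DISKS))
closed = {v: frozenset([v] + neighbors(v)) for v in vertices}

def perfect_code(covered, code):
    """Backtracking exact cover: closed neighborhoods of codewords partition V."""
    uncovered = next((v for v in vertices if v not in covered), None)
    if uncovered is None:
        return code
    for c in closed[uncovered]:            # a codeword must cover `uncovered`
        if not (closed[c] & covered):      # and its whole sphere must be free
            found = perfect_code(covered | closed[c], code + [c])
            if found is not None:
                return found
    return None

code = perfect_code(frozenset(), [])
print(f"perfect 1-code of size {len(code)}:", sorted(code))
```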


Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 709
Author(s):  
Abhishek Das ◽  
Nur A. Touba

Technology scaling has led to an increase in the density and capacity of on-chip caches, enabling higher throughput through a larger number of low-latency memory transfers. With the shrinking of SRAMs and the development of emerging technologies for on-chip cache memories, e.g., STT-MRAM, the reliability of such memories becomes a major concern. Traditional error-correcting codes, e.g., Hamming codes and orthogonal Latin square (OLS) codes, suffer either from high decoding latency, which lowers overall throughput, or from high memory overhead. In this paper, a new single-error-correcting code based on shared majority voting logic is presented. The proposed codes trade a small increase in decoding latency for a reduction in the memory overhead incurred by OLS codes. A latency optimization technique is also proposed, which lowers the decoding latency at the cost of a slight memory overhead. The proposed codes are shown to require less redundancy than OLS codes and to achieve lower decoding latency than Hamming codes. They thus strike a balanced trade-off between memory overhead and decoding latency, making them highly suitable for on-chip cache memories, which have stringent throughput and memory-overhead constraints.
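
The authors' shared-majority-voter construction is not reproduced here; as a stand-in, the sketch below uses a classic two-dimensional parity code to illustrate the same design space: single-error correction with one shallow level of parity logic (low latency) at the cost of 2K check bits for K*K data bits (memory overhead).

```python
import random

K = 4  # data is a K x K bit grid; 2K parity bits give single-error correction

def encode(bits):
    """bits: K*K data bits, row-major. Appends K row parities and K column parities."""
    rows = [sum(bits[r*K:(r+1)*K]) % 2 for r in range(K)]
    cols = [sum(bits[c::K]) % 2 for c in range(K)]
    return bits + rows + cols

def decode(word):
    """One-step decoding: a failing row parity and a failing column parity
    intersect at the flipped data bit -- shallow logic, hence low latency."""
    bits, rows, cols = word[:K*K], word[K*K:K*K+K], word[K*K+K:]
    bad_r = [r for r in range(K) if sum(bits[r*K:(r+1)*K]) % 2 != rows[r]]
    bad_c = [c for c in range(K) if sum(bits[c::K]) % 2 != cols[c]]
    bits = list(bits)
    if bad_r and bad_c:                   # single data-bit error located
        bits[bad_r[0]*K + bad_c[0]] ^= 1
    return bits                           # a parity-bit error leaves data intact

data = [random.randint(0, 1) for _ in range(K*K)]
word = encode(data)
word[5] ^= 1                              # inject a single bit flip
assert decode(word) == data
```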


Author(s):  
Mark Hasegawa-Johnson ◽  
Jennifer Cole ◽  
Preethi Jyothi ◽  
Lav R. Varshney

Transcribers make mistakes. Workers recruited in a crowdsourcing marketplace, because of their varying levels of commitment and education, make more mistakes than workers in a controlled laboratory setting. Methods for compensating for transcriber mistakes are desirable because, with such methods available, crowdsourcing has the potential to significantly increase the scale of experiments in laboratory phonology. This paper provides a brief tutorial on statistical learning theory, introducing the relationship between dataset size and estimation error, then presents a theoretical description and preliminary results for two new methods that control labeler error in laboratory phonology experiments. First, we discuss the method of crowdsourcing over error-correcting codes. In the error-correcting-code method, each difficult labeling task is first factored, by the experimenter, into the product of several easy labeling tasks (typically binary). Factoring increases the total number of tasks; nevertheless, it results in faster completion and higher accuracy, because workers unable to perform the difficult task may still contribute meaningfully to the solution of each easy task. Second, we discuss the use of explicit mathematical models of the errors made by a worker in the crowd. In particular, we introduce the method of mismatched crowdsourcing, in which workers transcribe a language they do not understand, and an explicit mathematical model of second-language phoneme perception is used to learn and then compensate for their transcription errors. Though introduced as technologies that increase the scale of phonology experiments, both methods have implications beyond increased scale. The method of easy questions permits us to probe the perception, by untrained listeners, of complicated phonological models; examples are provided from the prosody of English and Hindi. The method of mismatched crowdsourcing permits us to probe, in more detail than ever before, the perception of phonetic categories by listeners with a different phonological system.
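
A minimal sketch of the error-correcting-code method, under toy assumptions (a made-up 4-way labeling task and independent worker errors): the four labels are assigned binary answer patterns with pairwise Hamming distance at least 3, so any single wrong binary answer per item is corrected by nearest-codeword decoding.

```python
import random
random.seed(0)

# Each difficult 4-way label is factored into 5 easy binary questions.
# The rows form a code with minimum Hamming distance 3: one wrong binary
# answer still decodes to the intended label.
CODEBOOK = {
    "statement":   (0, 0, 0, 0, 0),
    "question":    (0, 1, 1, 0, 1),
    "command":     (1, 0, 1, 1, 0),
    "exclamation": (1, 1, 0, 1, 1),
}

def crowd_answers(true_label, flip_prob=0.15):
    """Simulated workers: each binary question is answered independently,
    and wrongly with probability flip_prob (a toy error model)."""
    return tuple(b ^ (random.random() < flip_prob) for b in CODEBOOK[true_label])

def decode(answers):
    """Nearest-codeword decoding of the workers' binary answer vector."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min(CODEBOOK, key=lambda lab: dist(CODEBOOK[lab], answers))

trials = 10_000
correct = sum(decode(crowd_answers("question")) == "question" for _ in range(trials))
print(f"decoded correctly in {correct/trials:.1%} of trials")
```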


2006 ◽  
Vol 04 (06) ◽  
pp. 1013-1022
Author(s):  
TAILIN LIU ◽  
FENGTONG WEN ◽  
QIAOYAN WEN

Based on the classical binary simplex code S_m and any fixed-point-free element f of GL_m(2), Calderbank et al. constructed a binary quantum error-correcting code Q_f. They proved that the automorphism group Aut(Q_f) has a normal subgroup H, which is a semidirect product of the centralizer Z(f) of f in GL_m(2) with GF(2)^m, and that the index [Aut(Q_f) : H] is the number of elements of F_f = {f, 1 - f, 1/f, 1 - 1/f, 1/(1 - f), f/(1 - f)} that are conjugate to f. In this paper, a theorem describing the relationship between the quotient group Aut(Q_f)/H and the set F_f is presented, and a way to find the elements of F_f that are conjugate to f is proposed. We then prove that Aut(Q_f)/H is isomorphic to S_3 and that H is a semidirect product of Z(f) with GF(2)^m in the linear case. Finally, we generalize a result due to Calderbank et al.
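
To make the set F_f concrete, here is a small sketch for m = 2: starting from a fixed-point-free f in GL_2(2), it forms the six fractional transforms of F_f (note 1 - f = 1 + f over GF(2)) and tests conjugacy to f by brute force over the six elements of GL_2(2). Notation follows the abstract; the quantum code itself is not constructed.

```python
import itertools
import numpy as np

M = lambda a, b, c, d: np.array([[a, b], [c, d]], dtype=int)
I = M(1, 0, 0, 1)

def mul(a, b): return a @ b % 2
def add(a, b): return (a + b) % 2

def inv(a):
    """Inverse in GL_2(2): det = 1, so the adjugate formula mod 2 suffices."""
    a11, a12, a21, a22 = a.ravel()
    return M(a22, a12, a21, a11) % 2

# All of GL_2(2): the 6 invertible 2x2 matrices over GF(2)
GL = [M(*bits) for bits in itertools.product([0, 1], repeat=4)
      if (bits[0] * bits[3] - bits[1] * bits[2]) % 2 == 1]

f = M(0, 1, 1, 1)            # fixed-point-free: f @ v != v for every v != 0

# The six fractional transforms from the abstract:
# F_f = {f, 1-f, 1/f, 1-1/f, 1/(1-f), f/(1-f)}
one_minus_f = add(I, f)
F_f = [f, one_minus_f, inv(f), add(I, inv(f)), inv(one_minus_f),
       mul(f, inv(one_minus_f))]

def conjugate(a, b):
    """Brute-force conjugacy test in GL_2(2)."""
    return any(np.array_equal(mul(mul(g, a), inv(g)), b) for g in GL)

for k, h in enumerate(F_f):
    print(k, h.ravel().tolist(), "conjugate to f:", conjugate(h, f))
```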


Author(s):  
A. V. Kushnerov ◽  
V. A. Lipinski ◽  
M. N. Koroliova

Bose–Chaudhuri–Hocquenghem (BCH) codes are among the most popular and widespread linear cyclic error-correcting codes. Their close connection with the theory of Galois fields made it possible to build a theory of norms of syndromes for BCH codes, namely, syndrome invariants of the G-orbits of errors, and to develop a theory of polynomial invariants of the G-orbits of errors. This theory served as the basis for effective permutation polynomial-norm methods and error-correction algorithms that significantly reduce the influence of the selector problem. To date, these methods represent the only approach to correcting errors with non-primitive BCH codes whose multiplicity goes beyond the design bounds. This work is dedicated to a special class of error-correcting codes: generic Bose–Chaudhuri–Hocquenghem codes, or simply GBCH-codes. We produced a sufficiently accurate estimate of the number of such codes of each length, and we investigated some properties of, and connections between, different GBCH-codes. Special attention was devoted to codes with constructive distances 3 and 5, as such codes are the most common in practice; their almost complete description is given for lengths from 7 to 107. The paper contains a fairly clear theoretical classification of GBCH-codes. Particular attention is paid to the corrective capabilities of the codes of this class, namely, to the calculation of the minimum distances of these codes for various parameters. Codes are found whose corrective capabilities significantly exceed those of the well-known GBCH-codes with the same design parameters.
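
The GBCH-codes studied in the paper are not reconstructed here, but the kind of corrective-capability computation mentioned at the end is easy to illustrate: the sketch below finds the true minimum distance of the classical binary [15,7] BCH code (designed distance 5) by brute-force enumeration of all 2^7 codewords.

```python
# Binary [15,7] BCH code: generator polynomial g(x) = lcm(m1, m3)
#   = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1,
# encoded as a bit mask (bit i = coefficient of x^i).
G = 0b111010001
N, K = 15, 7

def poly_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication on bit masks."""
    r = 0
    while a:
        if a & 1:
            r ^= b
        a >>= 1
        b <<= 1
    return r

# Every codeword is m(x) * g(x) with deg m < K; the minimum nonzero
# codeword weight equals the code's minimum distance.
d_min = min(bin(poly_mul(m, G)).count("1") for m in range(1, 1 << K))
print("minimum distance:", d_min)   # expected: 5, meeting the designed distance
```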


Radiotekhnika ◽  
2021 ◽  
pp. 59-65
Author(s):  
S.O. Kandiy ◽  
G.A. Maleeva

In recent years, interest in cryptosystems based on multivariate quadratic transformations (MQ transformations) has grown significantly. This is primarily due to the NIST PQC competition [1] and the need for practical electronic signature schemes that are resistant to attacks by quantum computers. Although the community has done a great deal of work on the cryptanalysis of the submitted schemes, many issues still require clarification. NIST specialists are very cautious about the standardization process and urge cryptologists [4] to spend the next three years on a comprehensive analysis of the finalists of the NIST PQC competition before their standardization. One of the finalists is the Rainbow electronic signature scheme [2], a generalization of the UOV (Unbalanced Oil and Vinegar) scheme [3]. Recently, an attack [6] was found on another generalization of this scheme, LUOV (Lifted UOV) [5], that recovers the entire private key in polynomial time. The peculiarity of this attack is its use of the algebraic structure of the field over which the MQ transformation is defined. This line of attack has emerged only recently, and it is still unclear whether the field structure can be exploited in the Rainbow scheme. The aim of this work is to systematize the techniques used in attacks that exploit the algebraic field structure of UOV-based cryptosystems and to analyze the obstacles to generalizing them to the Rainbow scheme.
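
For readers unfamiliar with UOV, here is a toy sketch of the scheme these attacks target (illustrative parameters over GF(31); not a secure instance): the central map has no oil-times-oil terms, so fixing random vinegar values reduces signing to solving a linear system in the oil variables.

```python
import hashlib
import numpy as np

Q, V, O = 31, 6, 3          # field GF(31), vinegar/oil counts (toy values)
N = V + O
rng = np.random.default_rng(42)

def gauss_solve(A, b):
    """Solve A x = b over GF(Q) by Gaussian elimination; None if singular."""
    A, b = A.copy() % Q, b.copy() % Q
    n = len(b)
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r, col]), None)
        if piv is None:
            return None
        A[[col, piv]] = A[[piv, col]]
        b[[col, piv]] = b[[piv, col]]
        inv = pow(int(A[col, col]), Q - 2, Q)
        A[col], b[col] = A[col] * inv % Q, b[col] * inv % Q
        for r in range(n):
            if r != col and A[r, col]:
                fac = A[r, col]
                A[r] = (A[r] - fac * A[col]) % Q
                b[r] = (b[r] - fac * b[col]) % Q
    return b

# Secret central map F: quadratic forms with no oil-x-oil terms.
F = []
for _ in range(O):
    Mk = rng.integers(0, Q, (N, N))
    Mk[V:, V:] = 0                       # the defining UOV constraint
    F.append(Mk)

# Secret invertible change of variables T (resampled until invertible).
while True:
    T = rng.integers(0, Q, (N, N))
    if gauss_solve(T, np.zeros(N, dtype=np.int64)) is not None:
        break

# Public key: P_k(x) = x^T (T^T M_k T) x equals F_k(T x).
P = [T.T @ Mk @ T % Q for Mk in F]

def hash_to_target(msg):
    h = hashlib.sha256(msg.encode()).digest()
    return np.array([h[i] % Q for i in range(O)], dtype=np.int64)

def sign(msg):
    t = hash_to_target(msg)
    while True:
        v = rng.integers(0, Q, V)        # fix random vinegar values
        # Each equation becomes linear in the oil variables:
        L = np.array([v @ Mk[:V, V:] + Mk[V:, :V] @ v for Mk in F]) % Q
        c = np.array([v @ Mk[:V, :V] @ v for Mk in F]) % Q
        o = gauss_solve(L, (t - c) % Q)
        if o is not None:                # retry if the oil system is singular
            y = np.concatenate([v, o])
            return gauss_solve(T, y)     # signature s with T s = y

def verify(msg, s):
    t = hash_to_target(msg)
    return all(s @ Pk @ s % Q == tk for Pk, tk in zip(P, t))

s = sign("hello")
assert verify("hello", s)
```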


Author(s):  
Prasanna Ravi ◽  
Sujoy Sinha Roy ◽  
Anupam Chattopadhyay ◽  
Shivam Bhasin

In this work, we demonstrate generic and practical EM side-channel-assisted chosen-ciphertext attacks on multiple LWE/LWR-based Public Key Encryption (PKE) schemes and Key Encapsulation Mechanisms (KEMs) secure in the chosen-ciphertext model (IND-CCA security). We show that EM side-channel information can be efficiently utilized to instantiate a plaintext-checking oracle, which provides binary information about the output of decryption, information typically concealed within IND-CCA-secure PKE/KEMs, thereby enabling our attacks. First, we identify EM-based side-channel vulnerabilities in the error-correcting codes (ECC) used by some schemes, enabling us to distinguish ciphertexts based on the value/validity of decrypted codewords. We also identify similar vulnerabilities in the Fujisaki-Okamoto transform, which leaks information about decrypted messages and applies to schemes that do not use ECC. We then exploit these vulnerabilities to demonstrate practical attacks on six CCA-secure lattice-based PKE/KEMs competing in the second round of the NIST standardization process. We experimentally validate our attacks on implementations taken from the open-source pqm4 library, running on the ARM Cortex-M4 microcontroller. Our attacks lead to complete key recovery in a matter of minutes on all targeted schemes, demonstrating their effectiveness.
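
The EM measurements cannot be shown in text, but the role of the plaintext-checking oracle is easy to sketch. Below, a toy noiseless LWE-style decryption stands in for the real scheme, and the oracle bit (recovered via EM leakage in the paper) is simply computed directly; two crafted ciphertexts per coefficient then recover a ternary secret. All parameters are illustrative.

```python
import random
random.seed(1)

# Toy LWE-style decryption (noise omitted): a ciphertext is (u, v) and
# Dec(u, v) = 1 iff (v - <s, u>) mod q lands in the middle band [q/4, 3q/4).
q, n = 3329, 16
s = [random.choice([-1, 0, 1]) for _ in range(n)]   # small secret

def pc_oracle(u, v):
    """Plaintext-checking oracle: in the paper this bit comes from EM leakage
    of the ECC decoder / FO transform; here we just call Dec directly."""
    t = (v - sum(si * ui for si, ui in zip(s, u))) % q
    return int(q // 4 <= t < 3 * q // 4)

def recover_coeff(i):
    """Two crafted ciphertexts distinguish s_i in {-1, 0, 1}."""
    u = [0] * n
    u[i] = q // 8                       # scale so s_i shifts t by about q/8
    m1 = pc_oracle(u, q // 4)           # s_i = 1 -> 0; s_i in {0, -1} -> 1
    m2 = pc_oracle(u, 3 * q // 16)      # s_i = -1 -> 1; s_i in {0, 1} -> 0
    if m1 == 0:
        return 1
    return -1 if m2 == 1 else 0

recovered = [recover_coeff(i) for i in range(n)]
assert recovered == s
print("recovered secret with", 2 * n, "oracle queries")
```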

