Perfect codes on the towers of Hanoi graph

1998 ◽  
Vol 57 (3) ◽  
pp. 367-376 ◽  
Author(s):  
Chi-Kwong Li ◽  
Ingrid Nelson

We characterise all the perfect k-error-correcting codes that can be defined on the graph associated with the Towers of Hanoi puzzle. In particular, a short proof of the existence of a 1-error-correcting code on such a graph is given.
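
To make the object concrete, here is a minimal Python sketch of ours (not from the paper): it builds the Hanoi graph and brute-forces the perfect 1-codes of the two-disk instance, where a perfect 1-error-correcting code is a set of vertices whose closed neighbourhoods partition the vertex set.

```python
from itertools import combinations, product

def hanoi_graph(n, pegs=3):
    """Hanoi graph H_n: a state is a tuple giving the peg of each disk
    (index 0 = smallest); edges are legal single-disk moves."""
    verts = list(product(range(pegs), repeat=n))
    adj = {v: set() for v in verts}
    for v in verts:
        for a in range(pegs):
            on_a = [d for d in range(n) if v[d] == a]
            if not on_a:
                continue
            top = min(on_a)                      # only the top disk moves
            for b in range(pegs):
                on_b = [d for d in range(n) if v[d] == b]
                if b != a and (not on_b or min(on_b) > top):
                    w = list(v); w[top] = b
                    adj[v].add(tuple(w))
    return adj

def is_perfect_1_code(adj, code):
    """Closed neighbourhoods of the codewords must partition the vertex set."""
    covered = set()
    for c in code:
        ball = adj[c] | {c}
        if covered & ball:
            return False
        covered |= ball
    return len(covered) == len(adj)

adj = hanoi_graph(2)                             # 9 states for two disks
found = [set(c) for r in range(1, 4)
         for c in combinations(adj, r) if is_perfect_1_code(adj, set(c))]
print(found)   # [{(0, 0), (1, 1), (2, 2)}]: the 'all disks on one peg' states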

Author(s):  
Levon Arsalanyan ◽  
Hayk Danoyan

The Nearest Neighbor search algorithm considered in this paper is the well-known Elias algorithm. It uses error-correcting codes to construct hash-coding schemes that preprocess the data into lists, each list contained in a sphere centered at a codeword. The algorithm is first considered for perfect codes, where the spheres, and consequently the lists, do not intersect. Since such codes exist only for a limited set of parameters, the algorithm is also considered for some generalizations of perfect codes, in which the same data point may be contained in different lists. A formula for the time complexity of the algorithm is obtained for these cases, using the coset weight structures of the codes in question.
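
As a concrete illustration (our sketch, not the paper's implementation), the following Python uses the perfect [7,4] Hamming code as the hash: every point of {0,1}^7 lands in the list of its unique nearest codeword, and a query scans only its own sphere. In the generalized, non-perfect settings the paper analyzes, a query would have to visit several possibly overlapping lists.

```python
from collections import defaultdict
from itertools import product

# Parity-check matrix of the perfect [7,4] Hamming code: column i is the
# binary expansion of i+1, so a syndrome directly names the error position.
H = [[(i >> k) & 1 for i in range(1, 8)] for k in range(3)]

def nearest_codeword(x):
    s = [sum(h * b for h, b in zip(row, x)) % 2 for row in H]
    pos = s[0] + 2 * s[1] + 4 * s[2]          # 0 => x is already a codeword
    if pos:
        x = list(x); x[pos - 1] ^= 1
    return tuple(x)

def build_lists(points):
    """Elias-style preprocessing: one list per codeword-centred sphere."""
    lists = defaultdict(list)
    for p in points:
        lists[nearest_codeword(p)].append(p)
    return lists

def nn_in_home_sphere(lists, q):
    """With a perfect code the spheres partition {0,1}^7, so the candidate
    lists are disjoint and the query's home sphere is searched first."""
    cands = lists.get(nearest_codeword(q), [])
    return min(cands, key=lambda p: sum(a != b for a, b in zip(p, q)),
               default=None)

data = [p for p in product([0, 1], repeat=7) if sum(p) % 3 == 0]  # toy set
lists = build_lists(data)
print(nn_in_home_sphere(lists, (1, 0, 0, 0, 0, 0, 1)))
```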


Mathematics ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 789
Author(s):  
Emanuele Bellini ◽  
Chiara Marcolla ◽  
Nadir Murru

In addition to their usefulness in proving one’s identity electronically, identification protocols based on zero-knowledge proofs allow designing secure cryptographic signature schemes by means of the Fiat–Shamir transform or other similar constructs. This approach has been followed by many cryptographers during the NIST (National Institute of Standards and Technology) standardization process for quantum-resistant signature schemes. NIST candidates include solutions in different settings, such as lattices, multivariate equations, and multiparty computation. While error-correcting codes may also be used, they do not provide very practical parameters, with a few exceptions. In this manuscript, we explored the possibility of using the error-correcting codes proposed by Stakhov in 2006 to design an identification protocol based on zero-knowledge proofs. We showed that this type of code offers a valid alternative in the error-correcting-code setting for building such protocols and, consequently, quantum-resistant signature schemes.
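
For readers unfamiliar with the transform itself, here is a minimal, deliberately insecure sketch of how Fiat–Shamir turns a three-move identification protocol into a signature. It uses a toy Schnorr protocol over a tiny group rather than Stakhov's codes (all parameters are illustrative); the hash call standing in for the verifier's random challenge is the entire trick.

```python
import hashlib
import secrets

# Toy Schnorr identification made non-interactive with Fiat-Shamir.
# p = 23 is hopelessly small; g = 2 has prime order q = 11 in Z_23^*.
p, q, g = 23, 11, 2

def keygen():
    x = secrets.randbelow(q)                  # secret key
    return x, pow(g, x, p)                    # (sk, pk = g^x mod p)

def challenge(t, msg):
    """Fiat-Shamir: the hash of (commitment, message) replaces the verifier."""
    h = hashlib.sha256(f"{t}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(x, msg):
    r = secrets.randbelow(q)
    t = pow(g, r, p)                          # commitment
    c = challenge(t, msg)                     # derived, not interactive
    s = (r + c * x) % q                       # response
    return t, s

def verify(pk, msg, sig):
    t, s = sig
    c = challenge(t, msg)
    return pow(g, s, p) == (t * pow(pk, c, p)) % p

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```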


Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 709
Author(s):  
Abhishek Das ◽  
Nur A. Touba

Technology scaling has led to an increase in the density and capacity of on-chip caches, enabling higher throughput through more low-latency memory transfers. With the shrinking of SRAMs and the development of emerging technologies, e.g., STT-MRAM, for on-chip cache memories, the reliability of such memories becomes a major concern. Traditional error correcting codes, e.g., Hamming codes and orthogonal Latin square codes, suffer either from high decoding latency, which lowers overall throughput, or from high memory overhead. In this paper, a new single error correcting code based on shared majority voting logic is presented. The proposed codes trade off decoding latency to improve on the memory overhead of orthogonal Latin square codes. A latency optimization technique is also proposed which lowers the decoding latency at the cost of a slight memory overhead. The proposed codes are shown to achieve better redundancy than orthogonal Latin square codes and lower decoding latency than Hamming codes. Thus, the proposed codes achieve a balanced trade-off between memory overhead and decoding latency, which makes them highly suitable for on-chip cache memories with stringent throughput and memory overhead constraints.
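
To show the flavor of majority-style single-error correction, here is a generic cross-parity sketch of ours (not the paper's shared-voter design): an m x m data block is protected by row and column parities, and a data bit is flipped exactly when both of its orthogonal checks vote "error".

```python
import numpy as np

def encode(data):
    """Row and column parities over an m x m data block."""
    return data.copy(), data.sum(axis=1) % 2, data.sum(axis=0) % 2

def decode(data, rows, cols):
    """One-step majority correction: a bit is flipped only when its row
    check and its column check both disagree with the stored parities."""
    bad_r = (data.sum(axis=1) % 2) != rows
    bad_c = (data.sum(axis=0) % 2) != cols
    fixed = data.copy()
    fixed[np.outer(bad_r, bad_c)] ^= 1       # intersection of failing checks
    return fixed

rng = np.random.default_rng(0)
d = rng.integers(0, 2, size=(4, 4))
stored, r, c = encode(d)
stored[2, 3] ^= 1                            # inject a single-bit error
assert (decode(stored, r, c) == d).all()
```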


Author(s):  
Mark Hasegawa-Johnson ◽  
Jennifer Cole ◽  
Preethi Jyothi ◽  
Lav R. Varshney

Abstract Transcribers make mistakes. Workers recruited in a crowdsourcing marketplace, because of their varying levels of commitment and education, make more mistakes than workers in a controlled laboratory setting. Methods for compensating for transcriber mistakes are desirable because, with such methods available, crowdsourcing has the potential to significantly increase the scale of experiments in laboratory phonology. This paper provides a brief tutorial on statistical learning theory, introducing the relationship between dataset size and estimation error, and then presents a theoretical description and preliminary results for two new methods that control labeler error in laboratory phonology experiments. First, we discuss the method of crowdsourcing over error-correcting codes. In the error-correcting-code method, each difficult labeling task is first factored, by the experimenter, into the product of several easy labeling tasks (typically binary). Factoring increases the total number of tasks, but it results in faster completion and higher accuracy, because workers unable to perform the difficult task may still be able to contribute meaningfully to each easy task. Second, we discuss the use of explicit mathematical models of the errors made by a worker in the crowd. In particular, we introduce the method of mismatched crowdsourcing, in which workers transcribe a language they do not understand, and an explicit mathematical model of second-language phoneme perception is used to learn and then compensate for their transcription errors. Though introduced as technologies that increase the scale of phonology experiments, both methods have implications beyond increased scale. The method of easy questions permits us to probe the perception, by untrained listeners, of complicated phonological models; examples are provided from the prosody of English and Hindi. The method of mismatched crowdsourcing permits us to probe, in more detail than ever before, the perception of phonetic categories by listeners with a different phonological system.
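
As a cartoon of the error-correcting-code method (our sketch; the label names and the codeword assignment are invented for illustration), the Python below factors a four-way labeling task into three binary tasks, majority-votes each easy task across workers, and decodes the aggregate answer vector to the nearest codeword.

```python
import numpy as np

# Hypothetical 4-way prosody label factored into 3 binary worker tasks.
labels = ["H*", "L*", "L+H*", "none"]
codebook = np.array([[0, 0, 0],
                     [0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])            # pairwise Hamming distance 2

def decode(binary_answers):
    """Majority-vote each easy task across workers, then map the aggregate
    answer vector to the nearest codeword (= most plausible hard label)."""
    votes = np.asarray(binary_answers)      # shape: workers x tasks
    agg = (votes.mean(axis=0) >= 0.5).astype(int)
    dists = (codebook != agg).sum(axis=1)
    return labels[int(dists.argmin())]

# Three workers answer the three binary questions about one token.
print(decode([[1, 0, 1],
              [1, 0, 1],
              [1, 1, 1]]))                  # -> "L+H*" despite one bad answer
```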


2006 ◽  
Vol 04 (06) ◽  
pp. 1013-1022
Author(s):  
TAILIN LIU ◽  
FENGTONG WEN ◽  
QIAOYAN WEN

Based on the classical binary simplex code S_m and any fixed-point-free element f of GL_m(2), Calderbank et al. constructed a binary quantum error-correcting code Q_f. They proved that the automorphism group Aut(Q_f) has a normal subgroup H, which is a semidirect product of the centralizer Z(f) of f in GL_m(2) with the translation group GF(2)^m, and that the index [Aut(Q_f) : H] is the number of elements of F_f = {f, 1-f, 1/f, 1-1/f, 1/(1-f), f/(1-f)} that are conjugate to f. In this paper, a theorem describing the relationship between the quotient group Aut(Q_f)/H and the set F_f is presented, and a way to find the elements of F_f that are conjugate to f is proposed. We then prove that Aut(Q_f)/H is isomorphic to S_3 and that H is a semidirect product of Z(f) with GF(2)^m in the linear case. Finally, we generalize a result due to Calderbank et al.
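
The set F_f is concrete enough to compute. The sketch below is ours (brute-force conjugacy testing, feasible only for tiny m): it enumerates GL_3(2), picks a fixed-point-free f, forms the six fractional images of f over GF(2), and counts how many are conjugate to f, which by the quoted result equals the index [Aut(Q_f) : H].

```python
import itertools
import numpy as np

m = 3
I = np.eye(m, dtype=int)

def inv(A):
    """Inverse over GF(2) by Gauss-Jordan elimination; None if singular."""
    M = np.concatenate([A % 2, I], axis=1)
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r, c]), None)
        if piv is None:
            return None
        M[[c, piv]] = M[[piv, c]]
        for r in range(m):
            if r != c and M[r, c]:
                M[r] = (M[r] + M[c]) % 2
    return M[:, m:]

GL = [np.array(bits).reshape(m, m)
      for bits in itertools.product([0, 1], repeat=m * m)]
GL = [A for A in GL if inv(A) is not None]        # |GL_3(2)| = 168

def fixed_point_free(f):   # f(v) != v for all v != 0  <=>  I + f invertible
    return inv((I + f) % 2) is not None

def F(f):
    """{f, 1-f, 1/f, 1-1/f, 1/(1-f), f/(1-f)} as matrices over GF(2)."""
    g, h = inv(f), inv((I + f) % 2)
    return [f, (I + f) % 2, g, (I + g) % 2, h, (f @ h) % 2]

def conjugate(a, b):
    return any(((inv(p) @ a @ p) % 2 == b).all() for p in GL)

f = next(A for A in GL if fixed_point_free(A))
print(sum(conjugate(f, g) for g in F(f)))         # the index [Aut(Q_f) : H]
```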


Author(s):  
A. V. Kushnerov ◽  
V. A. Lipinski ◽  
M. N. Koroliova

The Bose-Chaudhuri-Hocquenghem (BCH) family of linear cyclic codes is one of the most popular and widespread classes of error-correcting codes. Their close connection with the theory of Galois fields made it possible to create a theory of norms of syndromes for BCH codes, namely syndrome invariants of the G-orbits of errors, and to develop a theory of polynomial invariants of the G-orbits of errors. This theory as a whole served as the basis for effective permutation polynomial-norm methods and error-correction algorithms that significantly reduce the influence of the selector problem. To date, these methods represent the only approach to correcting errors with non-primitive BCH codes whose multiplicity exceeds the designed bound. This work is dedicated to a special class of error-correcting codes: generic Bose-Chaudhuri-Hocquenghem codes, or simply GBCH codes. We produce a reasonably accurate estimate of the number of such codes of each length. We investigate some properties of, and connections between, different GBCH codes. Special attention is devoted to codes with constructive distances 3 and 5, as those are the most common in practice; an almost complete description of them is given for lengths from 7 to 107. The paper contains a fairly clear theoretical classification of GBCH codes. Particular attention is paid to the corrective capability of the codes in this class, namely to computing the minimum distances of these codes for various parameters. Codes are found whose corrective capability significantly exceeds that of the well-known GBCH codes with the same design parameters.
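
To make "syndrome invariants of G-orbits" tangible, here is a small self-contained sketch of ours, using a generic narrow-sense BCH code of length 15 over GF(16) rather than one of the paper's GBCH codes. Cyclically shifting an error multiplies the syndrome component S_i by alpha^i, so the ratio S_3/S_1^3 is constant along a cyclic orbit; it is the simplest example of a norm of syndromes.

```python
# GF(2^4) via the primitive polynomial x^4 + x + 1; alpha is the class of x.
EXP, LOG = [0] * 30, [0] * 16
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v
    LOG[v] = i
    v <<= 1
    if v & 0b10000:
        v ^= 0b10011                       # reduce by x^4 = x + 1

def syndromes(r, t=2):
    """S_i = r(alpha^i), i = 1..2t, for the length-15 narrow-sense BCH code."""
    out = []
    for i in range(1, 2 * t + 1):
        s = 0
        for j, bit in enumerate(r):
            if bit:
                s ^= EXP[(i * j) % 15]
        out.append(s)
    return out

def norm(S):
    """S_3 / S_1^3: invariant under cyclic shifts of the error pattern."""
    S1, S3 = S[0], S[2]
    if S1 == 0:
        return None
    return 0 if S3 == 0 else EXP[(LOG[S3] - 3 * LOG[S1]) % 15]

e = [0] * 15
e[2] = e[5] = 1                            # a correctable double error
for s in range(3):                         # three shifts of the same G-orbit
    print(norm(syndromes(e[-s:] + e[:-s])))   # prints the same norm each time
```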


Author(s):  
Raymond M. Smullyan

The history of mathematics is filled with major breakthroughs resulting from solutions to recreational problems. Problems of interest to gamblers led to the modern theory of probability, for example, and surreal numbers were inspired by the game of Go. Yet even with such groundbreaking findings and a wealth of popular-level books exploring puzzles and brainteasers, research in recreational mathematics has often been neglected. This book brings together authors from a variety of specialties to present fascinating problems and solutions in recreational mathematics. The chapters show how sophisticated mathematics can help construct mazes that look like famous people, how the analysis of crossword puzzles has much in common with understanding epidemics, and how the theory of electrical circuits is useful in understanding the classic Towers of Hanoi puzzle. The card game SET® is related to the theory of error-correcting codes, and simple tic-tac-toe takes on a new life when played on an affine plane. Inspirations for the book's wealth of problems include board games, card tricks, fake coins, flexagons, pencil puzzles, poker, and so much more. Looking at a plethora of eclectic games and puzzles, this book is sure to entertain, challenge, and inspire academic mathematicians and avid math enthusiasts alike.


2008 ◽  
Vol 17 (05) ◽  
pp. 773-783 ◽  
Author(s):  
HEESUNG LEE ◽  
EUNTAI KIM

Error correcting codes (ECCs) are commonly used as protection against soft errors. Single error correcting and double error detecting (SEC-DED) codes are generally used for this purpose, and such circuits are widely used in industry in all types of memory, including caches and embedded memory. In this paper, a new genetic design for ECC is proposed to perform SEC-DED in the memory check circuit. The design aims to find the ECC implementation that consumes minimal power. We formulate ECC design as a permutable optimization problem and employ special genetic operators appropriate for this formulation. Experiments are performed to demonstrate the performance of the proposed method.
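
As a hedged illustration of casting ECC design as a search problem (a mutation-only sketch of ours; the paper's genetic operators, fitness model, and power estimates are more sophisticated), the Python below evolves the data columns of a Hsiao-style SEC-DED parity-check matrix, using total 1s plus row imbalance as a crude proxy for the power of the XOR trees.

```python
import itertools
import random

r, k = 6, 16                                  # 6 check bits, 16 data bits
# Hsiao-style candidates: odd-weight, non-unit columns of length r.
cands = [c for c in itertools.product([0, 1], repeat=r)
         if sum(c) % 2 == 1 and sum(c) >= 3]

def fitness(sel):
    """Lower is cheaper: few 1s (small XOR trees) and balanced rows."""
    ones = sum(sum(cands[i]) for i in sel)
    rows = [sum(cands[i][j] for i in sel) for j in range(r)]
    return ones + (max(rows) - min(rows))

def mutate(sel):
    s = sel[:]
    s[random.randrange(k)] = random.randrange(len(cands))
    return s if len(set(s)) == k else sel     # keep columns distinct

pop = [random.sample(range(len(cands)), k) for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
print(fitness(pop[0]), [cands[i] for i in sorted(pop[0])])
```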


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 938
Author(s):  
Jiabo Wang ◽  
Cong Ling

There exists a natural trade-off in public key encryption (PKE) schemes based on ring learning with errors (RLWE): we would like a wider error distribution to increase the security, but this comes at the cost of an increased decryption failure rate (DFR). A straightforward solution to this problem is the error-correcting code, which is commonly used in communication systems and already appears in some RLWE-based proposals. However, applying error-correcting codes to those cryptographic schemes is far from simply installing an add-on. Firstly, the residue error term derived by decryption has correlated coefficients, whereas most prevalent error-correcting codes with remarkable error tolerance assume the channel noise to be independent and memoryless; this explains why only simple error-correcting methods are used in existing RLWE-based PKE schemes. Secondly, the correlated coefficients of the residue error term make accurate DFR estimation challenging even for uncoded plaintext, and it can be found in the literature that a tighter DFR estimation can effectively create a DFR margin. Thirdly, most error-correcting codes are not designed with security in mind, e.g., syndrome decoding has a non-constant-time nature, and a code good at error correcting might be weak under a variety of attacks. In this work, we propose a polar coding scheme for RLWE-based PKE. A relaxed "independence" assumption is used to derive an uncorrelated residue noise term, and a wireless communication strategy, outage, is used to construct polar codes. Furthermore, some knowledge about the residue noise is exploited to improve the decoding performance. With the parameterization of NewHope Round 2, the proposed scheme creates a considerable DFR margin, which gives a competitive security improvement compared to state-of-the-art benchmarks. Specifically, the security is improved by 28.8%, while a DFR of 2^(-149) is achieved for a code rate of 0.25, n = 1024, q = 12289, and binomial parameter k = 55. Moreover, polar encoding and decoding have quasilinear complexity O(N log N) and intrinsically support constant-time implementations.
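
For reference, the polar transform at the heart of such a scheme is tiny. The sketch below is ours (the frozen set shown is the standard choice for N = 8, not NewHope's or the paper's construction): it encodes 8 bits with the usual O(N log N) butterfly, placing information bits in the non-frozen positions.

```python
def polar_encode(u):
    """Apply x = u * F^(tensor n) in place, F = [[1,0],[1,1]]; O(N log N)."""
    x = list(u)
    n, step = len(x), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]        # butterfly: (u1, u2) -> (u1+u2, u2)
        step *= 2
    return x

# Information bits go in the reliable positions; frozen positions carry zeros.
N, frozen = 8, {0, 1, 2, 4}
info = [1, 0, 1, 1]
u = [0] * N
for pos, bit in zip([i for i in range(N) if i not in frozen], info):
    u[pos] = bit
print(polar_encode(u))
```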


2019 ◽  
Vol 73 (1) ◽  
pp. 83-96
Author(s):  
Pál Dömösi ◽  
Carolin Hannusch ◽  
Géza Horváth

Abstract In this paper we introduce a new cryptographic system based on the idea of encryption due to [McEliece, R. J.: A public-key cryptosystem based on algebraic coding theory, DSN Progress Report 44, 1978, 114-116]. We use the McEliece encryption system with a new linear error-correcting code, which was constructed in [Hannusch, C., Lakatos, P.: Construction of self-dual binary [2^(2k), 2^(2k-1), 2^k]-codes, Algebra and Discrete Math. 21 (2016), no. 1, 59-68]. We show how encryption and decryption work within this cryptosystem and we give the parameters for key generation. Further, we explain why this cryptosystem is a promising post-quantum candidate.
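
To ground the description, here is a toy sketch of the McEliece mechanics (ours), with a [7,4] Hamming code standing in for the self-dual Hannusch-Lakatos codes the paper actually uses; at these sizes the scheme is trivially breakable, and only the keygen/encrypt/decrypt flow is meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)

# Systematic [7,4] Hamming code: G = [I | A], H = [A^T | I].
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])

def hamming_correct(y):
    s = (H @ y) % 2
    if s.any():                               # syndrome matches one column
        j = next(i for i in range(7) if (H[:, i] == s).all())
        y = y.copy(); y[j] ^= 1
    return y

def keygen():
    while True:                               # random invertible scrambler S
        S = rng.integers(0, 2, (4, 4))
        if round(np.linalg.det(S)) % 2 == 1:
            break
    P = np.eye(7, dtype=int)[rng.permutation(7)]   # secret permutation
    return (S @ G @ P) % 2, (S, P)            # public G', private (S, P)

def encrypt(G_pub, m):
    e = np.zeros(7, dtype=int); e[rng.integers(7)] = 1   # weight-1 error
    return (m @ G_pub + e) % 2

def decrypt(S, P, c):
    y = hamming_correct((c @ P.T) % 2)        # undo P, then correct the error
    u = y[:4]                                 # = m S, since G is systematic
    for bits in range(16):                    # tiny k, so brute-force S^{-1}
        m = np.array([(bits >> i) & 1 for i in range(4)])
        if ((m @ S) % 2 == u).all():
            return m

G_pub, (S, P) = keygen()
m = np.array([1, 0, 1, 1])
assert (decrypt(S, P, encrypt(G_pub, m)) == m).all()
```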

