Reducing the Overhead of BCH Codes: New Double Error Correction Codes

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1897
Author(s):  
Luis-J. Saiz-Adalid ◽  
Joaquín Gracia-Morán ◽  
Daniel Gil-Tomás ◽  
J.-Carlos Baraza-Calvo ◽  
Pedro-J. Gil-Vicente

The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-known class of powerful cyclic error correction codes. BCH codes can correct multiple errors with minimal redundancy. However, primitive BCH codes only exist for some word lengths, which do not frequently match those employed in digital systems. This paper focuses on double error correction (DEC) codes for word lengths that are powers of two (8, 16, 32, and 64 bits), as commonly used in memories. We also focus on hardware implementations of the encoder and decoder circuits for very fast operation. This work proposes new low redundancy and reduced overhead (LRRO) DEC codes with the same redundancy as the equivalent BCH DEC codes, but whose encoder and decoder circuits present a lower overhead in terms of propagation delay, silicon area usage, and power consumption. To design the new codes, we used a methodology that searches for parity check matrices based on error patterns. We implemented and synthesized the new codes and compared the results with those obtained for the BCH codes. Our implementation of the decoder circuits achieved reductions of between 2.8% and 8.7% in propagation delay, between 1.3% and 3.0% in silicon area, and between 15.7% and 26.9% in power consumption. Therefore, we propose LRRO codes as an alternative for protecting information against multiple errors.
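As a rough illustration of the kind of error-pattern-based decoding such DEC codes rely on, the sketch below builds a syndrome-to-error-pattern table for all single and double errors and corrects a received word by table lookup. The parity-check matrix H, and the helper names build_syndrome_table and decode, are placeholders for illustration; the actual LRRO and BCH DEC matrices come from the search procedure described in the paper.

```python
# Minimal sketch of syndrome-based double error correction over GF(2).
# H is a placeholder parity-check matrix (r x n numpy array of 0/1); the
# real LRRO/BCH DEC matrices are obtained by the paper's search method.
import itertools
import numpy as np

def build_syndrome_table(H):
    """Map every correctable error pattern (weight <= 2) to its syndrome."""
    r, n = H.shape
    table = {}
    for positions in itertools.chain(
            ((i,) for i in range(n)),                 # single-bit errors
            itertools.combinations(range(n), 2)):     # double-bit errors
        e = np.zeros(n, dtype=np.uint8)
        e[list(positions)] = 1
        s = tuple(H.dot(e) % 2)
        table[s] = e          # for a DEC code these syndromes are all distinct
    return table

def decode(received, H, table):
    """Correct up to two bit errors in the received word."""
    s = tuple(H.dot(received) % 2)
    if not any(s):
        return received       # zero syndrome: assume no error
    e = table.get(s)
    if e is None:
        raise ValueError("uncorrectable error pattern")
    return received ^ e
```

In hardware, the same mapping is realized combinationally rather than by a lookup dictionary, which is why the structure of the parity check matrix drives the delay, area, and power figures reported above.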

Author(s):  
A. V. Kushnerov ◽  
V. A. Lipinski ◽  
M. N. Koroliova

The Bose-Chaudhuri-Hocquenghem (BCH) codes are one of the most popular and widespread classes of linear cyclic error-correcting codes. Their close connection with the theory of Galois fields made it possible to create a theory of the norms of syndromes for BCH codes, namely, syndrome invariants of the G-orbits of errors, and to develop a theory of polynomial invariants of the G-orbits of errors. This theory as a whole served as the basis for the development of effective permutation polynomial-norm methods and error correction algorithms that significantly reduce the influence of the selector problem. To date, these methods represent the only approach to error correction with non-primitive BCH codes for error multiplicities that go beyond the design bound. This work is dedicated to a special class of error-correcting codes: generic Bose-Chaudhuri-Hocquenghem codes, or simply GBCH codes. In the course of this work, we produced a sufficiently accurate estimate of the number of such codes for each length. We have investigated some properties of, and connections between, different GBCH codes. Special attention was devoted to codes with constructive distances of 3 and 5, as these codes are common in practical use. An almost complete description of them is given for lengths from 7 to 107. The paper contains a fairly clear theoretical classification of GBCH codes. Particular attention is paid to the corrective capabilities of the codes of this class, namely, to the calculation of the minimum distances of these codes with various parameters. Codes are found whose corrective capabilities significantly exceed those of the well-known GBCH codes with the same design parameters.
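For small dimensions, the comparison between a cyclic code's true minimum distance and its constructive (designed) distance can be checked by brute-force enumeration. The sketch below is a minimal illustration under that assumption; the (7,4) cyclic code with generator polynomial 1 + x + x^3 (designed distance 3) is used only as a stand-in example, not one of the GBCH codes studied in the paper.

```python
# Minimal sketch: brute-force minimum distance of a small binary cyclic code.
# Feasible only for small dimension k; real GBCH-code parameters would need
# more sophisticated methods.
import itertools
import numpy as np

def cyclic_generator_matrix(g, n):
    """k x n generator matrix from generator polynomial coefficients
    g = [g0, g1, ..., gm] (lowest degree first), with k = n - deg(g)."""
    m = len(g) - 1
    k = n - m
    G = np.zeros((k, n), dtype=np.uint8)
    for i in range(k):
        G[i, i:i + m + 1] = g        # rows are g(x), x*g(x), ..., x^(k-1)*g(x)
    return G

def minimum_distance(G):
    """Enumerate all nonzero messages and return the minimum codeword weight."""
    k, _ = G.shape
    best = None
    for msg in itertools.product((0, 1), repeat=k):
        if not any(msg):
            continue
        w = int((np.array(msg, dtype=np.uint8).dot(G) % 2).sum())
        best = w if best is None else min(best, w)
    return best

# Example: the (7,4) cyclic code with g(x) = 1 + x + x^3 has minimum distance 3,
# matching its designed distance.
G = cyclic_generator_matrix([1, 1, 0, 1], 7)
assert minimum_distance(G) == 3
```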


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jianguo Xie ◽  
Han Wu ◽  
Chao Xia ◽  
Peng Ding ◽  
Helun Song ◽  
...  

Semiconductor superlattice secure key distribution (SSL-SKD) has been experimentally demonstrated to be a novel scheme for generating and agreeing on identical keys with unconditional security using only a public channel. Error correction is introduced in the information reconciliation procedure to eliminate the inevitable differences between the analog systems in SSL-SKD. Nevertheless, error correction has proved to be the performance bottleneck of information reconciliation because of its high computational complexity; hence, it determines the final secure key throughput of SSL-SKD. In this paper, several frequently used error correction codes, including BCH codes, LDPC codes, and Polar codes, are optimized separately to raise their performance and make them usable in practice. Firstly, we employ multi-threading to support multi-codeword decoding for BCH and Polar codes and updated-value calculation for LDPC codes. Additionally, we construct lookup tables, such as logarithm and antilogarithm tables for finite-field computation, to reduce redundant calculations. Our experimental results reveal that the proposed optimization methods can significantly improve the efficiency of SSL-SKD, and all three error correction codes can reach Mbps throughput and provide a minimum secure key rate of 99%.
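To illustrate the log/antilog-table idea mentioned above, the sketch below builds the two tables for GF(2^8) and reduces each field multiplication to one integer addition and two lookups. The specific field, the primitive polynomial 0x11d, and the function name gf_mul are assumptions chosen for illustration, not details of the paper's decoders.

```python
# Minimal sketch of log/antilog tables for GF(2^m) arithmetic, here GF(2^8)
# with the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d). The field
# and polynomial actually used in the SSL-SKD decoders may differ.
PRIM = 0x11d
EXP = [0] * 512          # antilog table, doubled so lookups need no modulo
LOG = [0] * 256          # log table (LOG[0] is unused)

x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:        # reduce modulo the primitive polynomial
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Multiply in GF(2^8) with one addition and two table lookups."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

assert gf_mul(0x53, 0xCA) == gf_mul(0xCA, 0x53)   # commutativity sanity check
```

Precomputing these tables once replaces the repeated polynomial reductions that otherwise dominate syndrome and Chien-search arithmetic in BCH decoding.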


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2009
Author(s):  
Fatemeh Najafi ◽  
Masoud Kaveh ◽  
Diego Martín ◽  
Mohammad Reza Mosavi

Traditional authentication techniques, such as cryptographic solutions, are vulnerable to various attacks on session keys and data. Physical unclonable functions (PUFs), such as dynamic random access memory (DRAM)-based PUFs, have been introduced as promising security blocks to enable cryptography and authentication services. However, PUFs are often sensitive to internal and external noise, which causes reliability issues. The requirement for additional robustness and reliability leads to the use of error-reduction methods such as error correction codes (ECCs) and pre-selection schemes, which incur considerable extra overhead. In this paper, we propose deep PUF: a deep convolutional neural network (CNN)-based scheme that uses latency-based DRAM PUFs without the need for any additional error correction technique. The proposed framework provides a higher number of challenge-response pairs (CRPs) by eliminating the pre-selection and filtering mechanisms. The entire complexity of device identification is moved to the server side, which enables the authentication of resource-constrained nodes. The experimental results from a 1 Gb DDR3 device show that the responses under varying conditions can be classified with at least 94.9% accuracy using a CNN. After applying the proposed authentication steps to the classification results, we show that the probability of identification error can be drastically reduced, leading to highly reliable authentication.
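As a rough illustration of a server-side CNN classifier of the kind described, the sketch below (PyTorch) treats each PUF response as a small binary 2D map and each device as one class. The input shape (32x32), layer sizes, and the name PUFClassifier are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a CNN classifier for DRAM-PUF responses, assuming the
# responses are reshaped into 32x32 binary maps and each device is one class.
# Layer sizes are illustrative, not the architecture used in the paper.
import torch
import torch.nn as nn

class PUFClassifier(nn.Module):
    def __init__(self, num_devices: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_devices)

    def forward(self, x):                          # x: (batch, 1, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(1))       # per-device logits

# Server-side identification: the predicted class is taken as the device identity.
model = PUFClassifier(num_devices=10)
responses = torch.randint(0, 2, (4, 1, 32, 32)).float()
device_ids = model(responses).argmax(dim=1)
```

Because the classifier absorbs response noise directly, no ECC or helper data is needed on the device, which is what keeps the prover lightweight.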


2005 ◽  
Vol 4 (9) ◽  
pp. 586 ◽  
Author(s):  
Jaime A. Anguita ◽  
Ivan B. Djordjevic ◽  
Mark A. Neifeld ◽  
Bane V. Vasic

IEEE Access ◽  
2016 ◽  
Vol 4 ◽  
pp. 7154-7175 ◽  
Author(s):  
Matthew F. Brejza ◽  
Tao Wang ◽  
Wenbo Zhang ◽  
David Al-Khalili ◽  
Robert G. Maunder ◽  
...  
