Multi-Kernel Polar Codes versus Classical Designs with Different Rate-Matching Approaches

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1717
Author(s):  
Souradip Saha ◽  
Marc Adrat

Polar codes, which have been proposed as a family of linear block codes, have garnered a lot of attention from the scientific community owing to their low-complexity implementation and provably capacity-achieving capability. Thus, they have been proposed for encoding information on the control channels in the upcoming 5G wireless networks. The basic approach introduced by Arikan in his landmark paper, which polarizes bit channels of equal capacities into channels of unequal capacities, can only be used to design codewords of length N = 2^n, which is a major limitation when codewords of different lengths are required by the underlying applications. In the predecessor paper, this aspect was partially addressed by using a 3×3 kernel circuit (used to generate codewords of length M = 3^m), along with downsizing techniques such as puncturing and shortening, to assess the optimal design and resizing techniques based on the underlying system parameters. In this article, we extend this research to include the assessment of multi-kernel rate-matched polar codes, which are applicable over a much wider range of codeword lengths.
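
As a minimal sketch of the kernel idea described above (not the paper's exact construction), the following assumes Arikan's 2×2 kernel together with an illustrative 3×3 kernel and builds the multi-kernel transform as their Kronecker product, so that codeword lengths of the form N = 2^n · 3^m become reachable.

```python
# Minimal sketch of a multi-kernel polar transform: Kronecker product of
# Arikan's 2x2 kernel and an assumed 3x3 kernel (not necessarily the paper's).
import numpy as np

T2 = np.array([[1, 0],
               [1, 1]])       # Arikan's 2x2 kernel
T3 = np.array([[1, 0, 0],
               [1, 1, 0],
               [1, 0, 1]])    # example 3x3 kernel, assumed for illustration

def multi_kernel_transform(kernels):
    """Kronecker product of the chosen kernels -> an N x N transform over GF(2)."""
    G = np.array([[1]])
    for K in kernels:
        G = np.kron(G, K)
    return G % 2

# One 2x2 and one 3x3 kernel give N = 6; n twos and m threes give N = 2^n * 3^m.
G = multi_kernel_transform([T2, T3])
u = np.array([1, 0, 1, 1, 0, 0])   # input vector with frozen bits already placed
x = u.dot(G) % 2                   # length-6 codeword
print(x)
```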

2014 ◽  
Vol 556-562 ◽  
pp. 6344-6349
Author(s):  
Yan Kang Wei ◽  
Da Ming Wang ◽  
Wei Jia Cui

Single-event upsets (SEUs) are one of the major challenges affecting the reliability of on-board computers. In this paper, we design low-complexity encoding and decoding algorithms, based on a data-correction method, to resolve the data-stream errors that SEUs may cause. First, we use the theory of linear block codes to analyze various methods of data fault tolerance; then, starting from the encoding and decoding principles of linear block codes, we design low-complexity encoding and decoding algorithms for a linear block code. The resulting fault-tolerant coding method can effectively correct single-bit data errors caused by SEUs, with low fault-tolerance overhead. Fault-injection experiments show that this method effectively corrects data errors caused by single-event upsets. Compared with other common error-detection and error-correction methods, its error-correction performance is superior while its fault-tolerance cost is lower.
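
The paper's own low-complexity code is not reproduced here; as a stand-in, the sketch below uses the classical Hamming (7,4) linear block code to show how a linear block code locates and corrects the single-bit error an SEU can cause, via the syndrome.

```python
# Illustrative stand-in only: a Hamming (7,4) single-error-correcting code,
# not the encoding/decoding algorithms designed in the paper.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix in systematic form [I | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix [P^T | I]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return data4.dot(G) % 2

def correct(received7):
    s = H.dot(received7) % 2           # syndrome; zero means no detected error
    if s.any():
        for i in range(7):             # a single-bit error matches one column of H
            if np.array_equal(H[:, i], s):
                received7[i] ^= 1
                break
    return received7

data = np.array([1, 0, 1, 1])
cw = encode(data)
cw[2] ^= 1                              # inject a single-event upset
print(correct(cw)[:4])                  # recovers the original four data bits
```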


2018 ◽  
Vol 127 ◽  
pp. 284-292 ◽  
Author(s):  
M.S. El Kasmi Alaoui ◽  
S. Nouh ◽  
A. Marzak

2018 ◽  
Vol 18 (9&10) ◽  
pp. 795-813
Author(s):  
Sunghoon Lee ◽  
Jooyoun Park ◽  
Jun Heo

Quantum key distribution (QKD) is a cryptographic system that generates an information-theoretically secure key shared by two legitimate parties. QKD consists of two parts: quantum and classical. The latter is referred to as classical post-processing (CPP). Information reconciliation is the part of CPP in which the parties, given correlated variables, attempt to eliminate the discrepancies between them while disclosing a minimum amount of information. The elegant reconciliation protocol known as Cascade was developed specifically for QKD in 1992 and has become the de facto standard for all QKD implementations. However, the protocol is highly interactive. Thus, other protocols based on linear block codes, such as Hamming codes, low-density parity-check (LDPC) codes, and polar codes, have been researched. In particular, reconciliation using LDPC codes has been studied extensively because of its outstanding performance. Nevertheless, at small block sizes, the bit error rate performance of polar codes under successive-cancellation list (SCL) decoding with a cyclic redundancy check (CRC) is comparable to that of state-of-the-art turbo and LDPC codes. In this study, we demonstrate the use of polar codes to improve the performance of information reconciliation in a QKD system with small block size. The best decoder for polar codes, the CRC-aided SCL decoder, requires CRC-precoded messages. However, the messages in QKD, namely the sifted keys, are obtained arbitrarily as a result of a characteristic of the QKD protocol and cannot be CRC-precoded. We propose a method that allows arbitrarily obtained sifted keys to be CRC-precoded by introducing a virtual string. Thus, the best decoder can be used for reconciliation with polar codes, improving the efficiency of the protocol.
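
As a rough illustration of the CRC selection step that CRC-aided SCL decoding relies on, and not of the paper's virtual-string construction, the sketch below assumes a hypothetical candidate list produced by an SCL decoder and keeps the candidate whose CRC matches the tag computed from Alice's sifted key; the CRC-32 choice and all values are invented for the example.

```python
# Hypothetical example of CRC-based candidate selection; the candidate list,
# key values, and the CRC-32 choice are assumptions, not the paper's protocol.
import zlib

def crc_tag(bits):
    """CRC-32 over a bit string packed into bytes."""
    packed = int(''.join(map(str, bits)), 2).to_bytes((len(bits) + 7) // 8, 'big')
    return zlib.crc32(packed)

alice_key = [1, 0, 1, 1, 0, 0, 1, 0]    # Alice's sifted key (toy length)
tag = crc_tag(alice_key)                # disclosed alongside the reconciliation data

# Bob's SCL decoder would return a list of candidate keys; the CRC picks the match.
candidates = [[1, 0, 1, 1, 0, 1, 1, 0],
              [1, 0, 1, 1, 0, 0, 1, 0],
              [0, 0, 1, 1, 0, 0, 1, 0]]
reconciled = next((c for c in candidates if crc_tag(c) == tag), None)
print(reconciled)
```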


2020 ◽  
Vol 1 ◽  
pp. 333-341
Author(s):  
Chien-Ying Lin ◽  
Yu-Chih Huang ◽  
Shin-Lin Shieh ◽  
Po-Ning Chen

Author(s):  
R. A. Morozov ◽  
P. V. Trifonov

Introduction: Practical implementation of a communication system which employs a family of polar codes requires either storing a number of large specifications or constructing the codes on request. The first approach implies extensive memory consumption, which is inappropriate for many applications, such as those on mobile devices. The second approach can be numerically unstable and hard to implement in low-end hardware. One solution is to specify a family of codes by a sequence of subchannels sorted by reliability; however, this makes it impossible to optimize each code in the family separately. Purpose: Developing a method for compact specification of polar codes and subcodes. Results: A method is proposed for the compact specification of polar codes. It can be considered a trade-off between real-time construction and storing full-size specifications in memory. We propose to store compact specifications of polar codes which contain the frozen set differences between the original pre-optimized polar codes and the polar codes constructed for a binary erasure channel with some erasure probability. The full-size specification needed for decoding can be restored from a compact one by a low-complexity, hardware-friendly procedure. The proposed method can work with either polar codes or polar subcodes, reducing memory consumption by a factor of 15–50. Practical relevance: The method allows families of individually optimized polar codes to be used in devices with limited storage capacity.
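
A simplified sketch of the storage idea under assumed parameters: the baseline frozen set is built for a binary erasure channel via the Bhattacharyya recursion, only the symmetric difference against a (here invented) pre-optimized frozen set is stored, and the full specification is restored by applying the difference again.

```python
# Simplified sketch: store only the frozen-set difference between a
# pre-optimized code (invented below) and a BEC-constructed baseline.
def bec_frozen_set(n, k, erasure_prob=0.5):
    """Frozen set of a length-2^n polar code for a BEC via the Bhattacharyya recursion."""
    z = [erasure_prob]
    for _ in range(n):
        z = [v for p in z for v in (2 * p - p * p, p * p)]
    worst = sorted(range(len(z)), key=lambda i: z[i], reverse=True)
    return set(worst[:len(z) - k])       # freeze the 2^n - k least reliable subchannels

def compact_spec(optimized_frozen, baseline_frozen):
    return optimized_frozen ^ baseline_frozen   # store only the symmetric difference

def restore(compact, baseline_frozen):
    return baseline_frozen ^ compact            # low-complexity run-time restoration

baseline = bec_frozen_set(n=4, k=8)                          # N = 16, K = 8 baseline
swapped_in = min(set(range(16)) - baseline)
optimized = (baseline - {max(baseline)}) | {swapped_in}      # stand-in optimized code
diff = compact_spec(optimized, baseline)
assert restore(diff, baseline) == optimized
print(len(diff), "indices stored instead of", len(optimized))
```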


2021 ◽  
Vol 11 (8) ◽  
pp. 3563
Author(s):  
Martin Klimo ◽  
Peter Lukáč ◽  
Peter Tarábek

One-hot encoding is the prevalent method used in neural networks to represent multi-class categorical data. Its success stems from its ease of use and its interpretability as a probability distribution when accompanied by a softmax activation function. However, one-hot encoding leads to very high-dimensional vector representations when the categorical data's cardinality is high. From a coding-theory perspective, the Hamming distance of one-hot encoding is equal to two, which provides no error-detection or error-correction capability. Binary coding offers more possibilities for encoding categorical data into output codes, which mitigates the limitations of one-hot encoding mentioned above. We propose a novel method based on Zadeh fuzzy logic to train binary output codes holistically. We study linear block codes for their ability to separate class information from the checksum part of the codeword, showing that they can not only detect recognition errors by computing a non-zero syndrome, but also evaluate the truth-value of the decision. Experimental results show that the proposed approach achieves results similar to one-hot encoding with a softmax function in terms of accuracy, reliability, and out-of-distribution performance, suggesting a good foundation for future applications, mainly classification tasks with a large number of classes.
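
The sketch below, with a toy systematic [7,4] code and an invented network output, shows the syndrome check mentioned above: the class-information bits and the checksum bits can be kept separate, and a non-zero syndrome flags a recognition error.

```python
# Toy example of the syndrome check: the [7,4] code and the network output
# are invented for illustration, not the paper's trained codes.
import numpy as np

P = np.array([[1, 1, 0],                    # parity part of a systematic [7,4] code
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity-check matrix [P^T | I]

def syndrome(codeword_bits):
    return H.dot(codeword_bits) % 2

# Hard-threshold a hypothetical network output into a 7-bit codeword:
# four class-information bits followed by three checksum bits.
network_output = np.array([0.9, 0.1, 0.8, 0.7, 0.2, 0.3, 0.1])
bits = (network_output > 0.5).astype(int)

s = syndrome(bits)
print("recognition error detected" if s.any() else "codeword is consistent")
```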

