Polarization-adjusted Convolutional (PAC) Codes: Sequential Decoding vs List Decoding

Author(s):  
Mohammad Rowshan ◽  
Andreas Burg ◽  
Emanuele Viterbo

In the Shannon lecture at the 2019 International Symposium on Information Theory (ISIT), Arıkan proposed to employ a one-to-one convolutional transform as a pre-coding step before the polar transform. The codes resulting from this concatenation are called polarization-adjusted convolutional (PAC) codes. In this scheme, a polar mapper and demapper are deployed as pre- and post-processing devices around a memoryless channel, providing polarized information to an outer decoder and thereby improving the error correction performance of the outer code. In this paper, list decoding and sequential decoding (including Fano decoding and stack decoding) are first adapted to decode PAC codes. Then, to reduce the complexity of sequential decoding of PAC/polar codes, we propose (i) an adaptive heuristic metric, (ii) tree search constraints for backtracking to avoid exploration of unlikely sub-paths, and (iii) tree search strategies consistent with the pattern of error occurrence in polar codes. These techniques reduce the average decoding time complexity by 50% to 80%, at the cost of 0.05 to 0.3 dB degradation in error correction performance around FER = 10^-3, respectively, relative to not applying the corresponding search strategies. Additionally, as an important ingredient in Fano decoding of PAC/polar codes, an efficient method for computing the intermediate LLRs and partial sums is provided. This method is effective during backtracking and avoids storing the intermediate information or restarting the decoding process. Finally, all three decoding algorithms are compared in terms of performance, complexity, and resource requirements.
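
For readers unfamiliar with the scheme, the following is a minimal sketch of PAC encoding as outlined above (rate profiling, one-to-one convolutional precoding, then the polar transform). The function name, the default generator polynomial (0o133, the one used in Arıkan's example), and the in-place butterfly are illustrative choices, not the authors' implementation.

```python
import numpy as np

def pac_encode(data_bits, info_set, N, conv_gen=(1, 0, 1, 1, 0, 1, 1)):
    """Sketch of PAC encoding over GF(2): rate profiling, convolutional
    precoding, then the polar transform x = u * F^{(kron n)}."""
    # Rate profiling: place data bits on the chosen indices, zeros elsewhere.
    v = np.zeros(N, dtype=np.uint8)
    v[np.asarray(info_set)] = data_bits

    # One-to-one convolutional precoding: u_i = XOR_j c_j * v_{i-j}.
    c = np.asarray(conv_gen, dtype=np.uint8)
    u = np.zeros(N, dtype=np.uint8)
    for i in range(N):
        m = min(i + 1, len(c))
        u[i] = np.bitwise_xor.reduce(c[:m] & v[i::-1][:m])

    # Polar transform (butterfly implementation, bit-reversal omitted).
    x = u.copy()
    step = 1
    while step < N:
        for start in range(0, N, 2 * step):
            x[start:start + step] ^= x[start + step:start + 2 * step]
        step *= 2
    return x

# Example: a toy (8, 4) code with a hypothetical rate profile.
print(pac_encode([1, 0, 1, 1], info_set=[3, 5, 6, 7], N=8))
```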


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 841
Author(s):  
Hanwen Yao ◽  
Arman Fazeli ◽  
Alexander Vardy

Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arıkan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily on the following main ideas: replacing CRC codes with convolutional precoding (under appropriate rate profiling) and replacing list decoding with sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L ⩾ 128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low-weight codewords in PAC codes and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes. We also consider random time-varying convolutional precoding for PAC codes and observe that this scheme achieves the same superior performance with constraint length as low as ν = 2.
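
As a rough illustration of the union-bound approximation mentioned above, the snippet below evaluates a truncated union bound for BPSK transmission over an AWGN channel from a handful of estimated low-weight codeword multiplicities. The weight figures in the example are made up for illustration and are not the paper's estimates.

```python
import math

def truncated_union_bound(weight_enum, K, N, ebno_db):
    """Truncated union bound on the frame error rate for BPSK over AWGN,
    given estimated numbers A_w of codewords of (low) Hamming weight w.
    weight_enum: dict {w: A_w}."""
    R = K / N
    ebno = 10 ** (ebno_db / 10)
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))  # Gaussian tail Q(x)
    return sum(A_w * q(math.sqrt(2 * R * ebno * w)) for w, A_w in weight_enum.items())

# Illustrative (not actual) low-weight estimates for a (128, 64) code:
print(truncated_union_bound({8: 300, 12: 10000}, K=64, N=128, ebno_db=2.5))
```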


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Takumi Murata ◽  
Hideki Ochiai

Successive cancellation list (SCL) decoding of polar codes is an effective approach that can significantly outperform the original successive cancellation (SC) decoding, provided that proper cyclic redundancy-check (CRC) codes are employed at the candidate selection stage. Previous studies on CRC-assisted polar codes mostly focus on improving the decoding algorithms and their implementation, and little attention has been paid to the structure of the CRC code itself. For CRC-concatenated polar codes, where the CRC code serves as the outer code, a longer CRC code reduces the information rate, whereas a shorter CRC code may weaken the error detection capability, thus degrading the frame error rate (FER) performance. Therefore, a CRC code of appropriate length should be employed in order to optimize the FER performance for a given signal-to-noise ratio (SNR) per information bit. In this paper, we investigate the effect of CRC codes on the FER performance of polar codes with list decoding, in terms of both the CRC code length and its generator polynomial. Both the original nonsystematic and systematic polar codes are considered, and we also demonstrate that the behavior of the CRC codes differs depending on whether the inner polar code is systematic or not.
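
To make the role of the CRC concrete, here is a minimal sketch of how a CRC, specified by its generator polynomial, is checked and used to pick the surviving path in CRC-aided list decoding. The function names, the example polynomial, and the convention that a smaller path metric means a more likely path are assumptions for illustration, not the paper's implementation.

```python
def crc_remainder(bits, poly):
    """GF(2) polynomial division: remainder of `bits` under the generator
    `poly`, given MSB-first (e.g. [1,0,0,0,0,0,1,1,1] for the CRC-8 polynomial 0x07)."""
    reg = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[-(len(poly) - 1):]

def select_crc_passing_path(candidates, poly):
    """CRC-aided selection among SCL candidates, each given as
    (path_metric, info_bits_with_crc): return the best-metric path whose
    CRC checks, falling back to the overall best path otherwise."""
    passing = [c for c in candidates if not any(crc_remainder(c[1], poly))]
    pool = passing if passing else candidates
    return min(pool, key=lambda c: c[0])  # assumes lower metric = more likely
```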


2021 ◽  
pp. 1-1
Author(s):  
Yanlong Zhao ◽  
Zhendong Yin ◽  
Zhilu Wu ◽  
Mingdong Xu

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1955
Author(s):  
Md Jubaer Hossain Pantho ◽  
Pankaj Bhowmik ◽  
Christophe Bobda

The astounding development of optical sensing and imaging technology, coupled with impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, convolutional neural networks (CNNs) are adopted to infer knowledge due to their success in automation, surveillance, and many other application domains. However, the overwhelming computational demand of convolution operations has limited their use in remote sensing edge devices. On these platforms, real-time processing remains challenging due to tight constraints on resources and power, and the transfer and processing of non-relevant image pixels act as a bottleneck for the entire system. This bottleneck can be overcome by designing a CNN inference architecture near the sensor, exploiting the high bandwidth available at the sensor interface. This paper presents an attention-based pixel processing architecture to facilitate CNN inference near the image sensor. We propose an efficient computation method that reduces dynamic power by decreasing the overall computation of the convolution operations. The proposed method reduces redundancies through a hierarchical optimization approach: it exploits the spatio-temporal redundancies found in the incoming feature maps and performs computations only on selected regions based on their relevance score. The proposed design addresses the mapping of computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and low power for CNN applications. While designing the model, we exploit concepts from biological vision systems to reduce computation and energy. We prototype the model on a Virtex UltraScale+ FPGA and implement it as an Application-Specific Integrated Circuit (ASIC) using the TSMC 90 nm technology library. The results show that the proposed architecture significantly reduces dynamic power consumption and achieves a high speedup, surpassing the computational capabilities of existing embedded processors.
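
The following is a rough sketch (not the authors' design) of the relevance-gated computation idea described above: a block of the incoming feature map is reconvolved only if it differs sufficiently from the previous frame; otherwise the cached output is reused. The block size, threshold, and plain cross-correlation loop are illustrative stand-ins for the optimized PE-array implementation.

```python
import numpy as np

def gated_conv2d(frame, prev_frame, prev_out, kernel, block=8, threshold=4.0):
    """Relevance-gated 2D cross-correlation with 'same' padding.
    frame, prev_frame, prev_out: (H, W) arrays; kernel: square (k, k) array."""
    H, W = frame.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(frame, pad)
    out = prev_out.copy()
    for by in range(0, H, block):
        for bx in range(0, W, block):
            # Relevance score: how much this block changed since the last frame.
            diff = np.abs(frame[by:by+block, bx:bx+block]
                          - prev_frame[by:by+block, bx:bx+block]).sum()
            if diff < threshold:          # low relevance: reuse cached output
                continue
            for y in range(by, min(by + block, H)):
                for x in range(bx, min(bx + block, W)):
                    out[y, x] = np.sum(padded[y:y+k, x:x+k] * kernel)
    return out
```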


2019 ◽  
Vol 23 (10) ◽  
pp. 1757-1760
Author(s):  
Jiahao Wang ◽  
Zhenyu Hu ◽  
Ning An ◽  
Dunfan Ye

Author(s):  
Mark M. Wilde

Because a quantum measurement generally disturbs the state of a quantum system, one might think that it should not be possible for a sender and receiver to communicate reliably when the receiver performs a large number of sequential measurements to determine the message of the sender. We show here that this intuition is not true, by demonstrating that a sequential decoding strategy works well even in the most general ‘one-shot’ regime, where we are given a single instance of a channel and wish to determine the maximal number of bits that can be communicated up to a small failure probability. This result follows by generalizing a non-commutative union bound to apply for a sequence of general measurements. We also demonstrate two ways in which a receiver can recover a state close to the original state after it has been decoded by a sequence of measurements that each succeed with high probability. The second of these methods will be useful in realizing an efficient decoder for fully quantum polar codes, should a method ever be found to realize an efficient decoder for classical-quantum polar codes.
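
For context, a commonly cited projective form of the non-commutative union bound (due to Sen), which the result above generalizes to sequences of general measurements, can be stated roughly as follows for a state ρ and projectors Π_1, …, Π_N applied in sequence:

```latex
% Non-commutative union bound (projective form, as usually attributed to Sen):
1 - \operatorname{Tr}\!\left\{ \Pi_N \cdots \Pi_1 \,\rho\, \Pi_1 \cdots \Pi_N \right\}
  \;\le\; 2\sqrt{\sum_{i=1}^{N} \operatorname{Tr}\{(I - \Pi_i)\,\rho\}}
```

Intuitively, if each measurement individually succeeds with high probability on ρ, the whole sequence succeeds with high probability despite the disturbance caused by earlier measurements.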

