Bootstrapped Low Complexity Iterative Decoding Algorithm for Low Density Parity Check (LDPC) Codes

Author(s):  
Albashir Mohamed ◽  
Maha Elsabrouty ◽  
Salwa El-Ramly
2018 ◽  
Vol 7 (03) ◽  
pp. 23781-23784
Author(s):  
Rajarshini Mishra

Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance approaching Shannon's limit. Good error-correcting performance enables efficient and reliable communication. However, an LDPC decoding algorithm needs to be executed efficiently to meet the cost, time, power, and bandwidth requirements of target applications. Quasi-cyclic low-density parity-check (QC-LDPC) codes are an important subclass of LDPC codes and are known as one of the most effective error-control methods. Quasi-cyclic codes possess a degree of regularity, and many important communication standards, such as DVB-S2 and 802.16e, use them. The proposed Optimized Min-Sum decoding algorithm performs very close to Sum-Product decoding while preserving the main features of Min-Sum decoding: low complexity and independence from noise-variance estimation errors. The proposed decoder is well suited to VLSI implementation and will be implemented on a Xilinx FPGA family.
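As a minimal illustration of the check-node update at the heart of Min-Sum decoding, the sketch below implements the standard normalized Min-Sum rule; the scaling factor `alpha` is an assumed illustrative value, not the specific optimization proposed in the paper:

```python
import numpy as np

def min_sum_check_update(llrs, alpha=0.8):
    """Normalized Min-Sum check-node update (generic sketch).

    llrs: incoming variable-to-check LLR messages at one check node.
    alpha: normalization factor (illustrative; the paper optimizes this step).
    Returns the outgoing check-to-variable messages, one per edge.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    mags = np.abs(llrs)
    total_sign = np.prod(signs)
    # Outgoing magnitude on each edge is the minimum over the OTHER
    # incoming magnitudes: min1 everywhere, min2 at the argmin position.
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.full_like(mags, min1)
    out[order[0]] = min2
    # Sign of each outgoing message excludes the edge's own incoming sign.
    return alpha * total_sign * signs * out
```

Because the update uses only comparisons and sign bookkeeping (no `tanh` products as in Sum-Product), it needs no estimate of the noise variance, which is the robustness property the abstract highlights.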


Author(s):  
Zhong-xun Wang ◽  
Yang Xi ◽  
Zhan-kai Bao

In nonbinary low-density parity-check (NB-LDPC) decoding, the iterative hard-reliability-based majority-logic decoding (IHRB-MLGD) algorithm has poor error-correction performance. The essential reason is that hard information is used in both the initialization and the iterative processes. To address the partial loss of information when reliabilities are assigned during initialization, the error-correction performance is improved by modifying the initial reliability assignment. The initialization is determined by the probability of a given number of erroneous bits occurring in a symbol and by the Hamming distance. In addition, the IHRB-MLGD algorithm uses hard decisions in the iterative decoding process. The improved algorithm adds soft-decision information to the iterations, which improves error-correction performance while only slightly increasing decoding complexity, and it refines the reliability-accumulation process, making the algorithm more stable. The simulation results indicate that the proposed algorithm has better decoding performance than the IHRB algorithm.
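The reliability-accumulation idea underlying this family of decoders can be sketched as follows; the array shapes, the `soft_weight` parameter, and the additive update rule are illustrative stand-ins for the paper's modified accumulation, not its exact formulas:

```python
import numpy as np

def mlgd_vote_update(reliability, votes, soft_weight=0.5):
    """One reliability-accumulation step in the spirit of (improved) IHRB-MLGD.

    reliability: (n, q) array, per-symbol reliability of each GF(q) element.
    votes:       (n, q) array, check-node vote counts for each field element.
    soft_weight: illustrative weight standing in for the soft-decision
                 information the improved algorithm injects (assumption).
    Returns the updated reliabilities and the per-symbol hard decisions.
    """
    reliability = reliability + soft_weight * votes
    # Majority-logic hard decision: most reliable field element per position.
    hard_decision = reliability.argmax(axis=1)
    return reliability, hard_decision
```

The decision is still majority-logic (an argmax per symbol), but weighting the accumulated votes with soft information is what lets reliabilities evolve more gradually and stably across iterations.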


2013 ◽  
Vol 710 ◽  
pp. 723-726
Author(s):  
Yuan Hua Liu ◽  
Mei Ling Zhang

A novel bit-flipping (BF) algorithm with low complexity for high-throughput decoding of low-density parity-check (LDPC) codes is presented. At each iteration, a novel threshold pattern is used to decide which code bits to flip, and the flipping error probability is effectively decreased. Compared with the weighted BF algorithm and its modifications, the modified BF algorithm has significantly lower complexity and decoding time. Simulations show that the proposed BF algorithm achieves excellent performance and fast convergence while maintaining significantly low complexity, thus facilitating high-throughput decoding.
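A minimal threshold-based bit-flipping loop might look like the sketch below; the fixed `threshold` argument is a simple stand-in for the paper's per-iteration threshold pattern:

```python
import numpy as np

def threshold_bit_flip(H, y, threshold, max_iter=50):
    """Minimal threshold bit-flipping decoder for a binary LDPC code (sketch).

    H: (m, n) parity-check matrix over GF(2).
    y: received hard-decision bits, shape (n,).
    threshold: flip every bit whose number of unsatisfied checks reaches
               this value (a stand-in for the paper's threshold pattern).
    Returns (decoded word, success flag).
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True  # all parity checks satisfied
        # Count of unsatisfied checks touching each bit.
        unsat = H.T @ syndrome
        flip = unsat >= threshold
        if not flip.any():
            flip = unsat == unsat.max()  # fall back: flip the worst bits
        x[flip] ^= 1
    return x, False
```

Each iteration needs only integer syndrome and counting operations, which is why BF-style decoders trade some coding gain for very high decoding throughput.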


2013 ◽  
Vol 340 ◽  
pp. 471-475
Author(s):  
Fei Zhong ◽  
Shu Xu Guo

To improve upon low-density parity-check (LDPC) codes by incorporating compressed sensing (CS) and information redundancy, a new joint decoding framework is presented. The proposed system exploits information redundancy through CS reconstruction during the iterative decoding process to correct the decoding of LDPC codes. The simulation results show that the presented algorithm can improve system decoding performance and achieves a clearly lower bit error ratio (BER) than traditional LDPC codes. In addition, different CS reconstruction algorithms within the proposed system are briefly compared, and the new design is shown to benefit from each of them.
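Orthogonal Matching Pursuit (OMP) is one standard CS reconstruction algorithm of the kind that could fill the reconstruction step in such a joint frame; the sketch below is a generic textbook OMP, not the paper's specific design:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit (generic sketch, sparsity >= 1).

    A: (m, n) sensing matrix; y: (m,) measurement vector.
    Greedily picks the column most correlated with the residual,
    then re-fits all selected coefficients by least squares.
    Returns an n-dimensional sparse estimate of the signal.
    """
    residual = y.astype(float)
    support = []
    for _ in range(sparsity):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In a joint decoder, such a reconstruction step would run between LDPC iterations to re-estimate unreliable symbols from the redundant measurements; swapping OMP for another reconstruction algorithm changes only this module.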

