Pipeline processing in low-density parity-check codes hardware decoder

2011 ◽  
Vol 59 (2) ◽  
pp. 149-155 ◽  
Author(s):  
W. Sułek

Low-Density Parity-Check (LDPC) codes are one of the best known error-correcting coding methods. This article concerns a hardware iterative decoder for an implementation-oriented subclass of LDPC codes, also known as Architecture-Aware LDPC codes. The decoder has been implemented as a synthesizable VHDL description. To achieve a high clock frequency of the hardware implementation, and consequently a high data throughput, a large number of pipeline registers has been used in the processing chain. However, the registers increase the processing-path delay, since the number of clock cycles required for data propagation grows; thus, in general, idle cycles must be inserted between decoding subiterations. In this paper we study the conditions under which idle cycles are necessary and provide a method for calculating the exact number of required idle cycles on the basis of the parity-check matrix of the code. We then propose a parity-check matrix optimization method that minimizes the total number of required idle cycles and hence maximizes the decoder throughput. The proposed matrix optimization, which sorts rows and columns, does not change the code properties. Results presented in the paper show that the decoder throughput can be significantly increased with the proposed optimization method.
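The abstract does not reproduce the paper's exact idle-cycle formula, so the following Python sketch only illustrates the general idea under a simplified layered-decoder model (the `pipeline_depth` parameter, the counting rule, and the toy matrix are assumptions for illustration): a layer must wait until every column it reads has left the pipeline, and reordering the rows of H can reduce the waiting.

```python
import numpy as np

def idle_cycles_needed(H, pipeline_depth):
    """Rough idle-cycle count for processing the rows (layers) of H in order.

    Simplified model (an assumption, not the paper's formula): a layer may
    start only after every column it reads has been written back by the last
    layer that used it; if that happened fewer than `pipeline_depth` layers
    ago, the difference is padded with idle cycles.
    """
    m, _ = H.shape
    last_user = {}                     # column index -> last layer that used it
    idle = 0
    for layer in range(m):
        cols = np.nonzero(H[layer])[0]
        wait = 0
        for c in cols:
            if c in last_user:
                gap = layer - last_user[c]          # layers since last update
                wait = max(wait, pipeline_depth - gap)
        idle += max(wait, 0)
        for c in cols:
            last_user[c] = layer
    return idle

# Toy parity-check matrix: rows 0/1 share a column, and so do rows 2/3.
H = np.array([[1, 1, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1]])
print(idle_cycles_needed(H, pipeline_depth=2))              # 2 idle cycles
print(idle_cycles_needed(H[[0, 2, 1, 3]], pipeline_depth=2))  # 0 after reordering rows
```

Interleaving the rows so that layers sharing columns are not adjacent is exactly the kind of gain a row/column sorting step can deliver without altering the code itself.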

2017 ◽  
Vol 1 (2) ◽  
pp. 88 ◽  
Author(s):  
Marco Baldi ◽  
Franco Chiaraluce

The authors address the problem of designing good LDPC codes for applications requiring variable, that is adaptive, rates. More precisely, the aim of the paper is twofold. On the one hand, we propose a deterministic (not random) procedure to construct good LDPC codes without constraints on the code dimension and rate. The method is based on the analysis and optimization of the local cycle lengths in the Tanner graph and gives the designer the chance to control the complexity of the designed codes. On the other hand, we present a novel puncturing strategy that acts directly on the parity-check matrix of the code, starting from the lowest rate needed, so that higher-rate codes can be designed without additional complexity in the co/decoding hardware. The efficiency of the proposed solution is tested through a number of numerical simulations. In particular, the puncturing strategy is applied to design codes with rates varying between 0.715 and 0.906. The designed codes are used in conjunction with M-QAM constellations through a pragmatic approach that nevertheless yields very promising results.
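As a rough illustration of the rate arithmetic behind such a puncturing scheme (the block sizes below are assumptions, not values from the paper): puncturing p of the n transmitted bits of a mother code with k information bits raises the effective rate from k/n to k/(n - p).

```python
import math

def bits_to_puncture(n, k, target_rate):
    """Number of transmitted bits to puncture so that k / (n - p) reaches target_rate."""
    return max(0, math.ceil(n - k / target_rate))

# Illustrative sizes only: mother code of rate k/n = 0.715.
n, k = 2000, 1430
p = bits_to_puncture(n, k, 0.906)
print(p, k / (n - p))   # ~422 punctured bits give an effective rate of about 0.906
```

So moving from rate 0.715 to 0.906 means leaving roughly a fifth of the transmitted bits unsent, while the encoder and decoder keep working on the same low-rate parity-check matrix.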


2017 ◽  
Vol 11 (22) ◽  
pp. 1065-1073
Author(s):  
Yenny Alexandra Avendano Martinez ◽  
Octavio Jose Salcedo Parra ◽  
Giovanny Mauricio Tarazona Bermudez

LDPC (Low-Density Parity-Check) codes are a family of algorithms for sending, receiving, and correcting frames transmitted over a noisy LAN environment. This article demonstrates the high performance of LDPC codes in noisy environments compared with the widely deployed CRC error-detection code; in this way the efficiency of LDPC is shown specifically over the 802.11n protocol.
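As a toy contrast between error detection and parity-check-based correction (the small single-error-correcting matrix below is purely illustrative and is not one of the 802.11n LDPC codes): a CRC can only flag a corrupted frame for retransmission, whereas a parity-check decoder can use the syndrome to locate and repair the error.

```python
import binascii
import numpy as np

# Small illustrative parity-check matrix with distinct, nonzero columns,
# so any single bit error can be located from the syndrome.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def crc_detects(sent: bytes, received: bytes) -> bool:
    """CRC-32 can only tell us the frame is corrupt; it cannot repair it."""
    return binascii.crc32(sent) != binascii.crc32(received)

def correct_single_error(r, H):
    """If exactly one bit was flipped, the syndrome H*r (mod 2) equals the
    column of H at the error position, so that bit can be flipped back."""
    s = H.dot(r) % 2
    if not s.any():
        return r                          # syndrome zero: accept as-is
    for j in range(H.shape[1]):
        if np.array_equal(H[:, j], s):
            r = r.copy()
            r[j] ^= 1                     # flip the located bit
            return r
    return r                              # more than one error: give up here

codeword = np.array([1, 0, 1, 1, 0, 1, 0])     # satisfies H * codeword = 0 (mod 2)
received = codeword.copy(); received[2] ^= 1   # single bit flip in the channel
print(crc_detects(b"payload", b"paylo4d"))     # True: corruption detected, not repaired
print(correct_single_error(received, H))       # recovers the original codeword
```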


2008 ◽  
Vol 17 (02) ◽  
pp. 333-351 ◽  
Author(s):  
K. M. S. SOYJAUDAH ◽  
P. C. CATHERINE

We introduce a recovery algorithm for low-density parity-check codes that provides substantial coding gain over the conventional method. Concisely, it consists of an inference procedure based on successive decoding rounds using different subsets of bit nodes from the bipartite graph representing the code. The technique also sheds light on certain characteristics of the sum–product algorithm and effectively copes with the problems of trapping sets, cycles, and other anomalies that adversely affect the performance of LDPC codes.
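The abstract does not give the exact inference procedure, so the sketch below only illustrates the general "retry with different bit-node subsets" idea under assumed details: failed decoding attempts are rerun with the channel information of a different subset of bit nodes erased, and any attempt whose hard decisions satisfy all parity checks is accepted.

```python
import numpy as np

def syndrome_ok(H, hard_bits):
    """True if all parity checks are satisfied."""
    return not (H.dot(hard_bits) % 2).any()

def recovery_decode(H, llr, decode, rounds=10, subset_frac=0.1, rng=None):
    """Retry-style wrapper (details assumed): rerun the inner decoder with the
    channel LLRs of a different random subset of bit nodes erased each time,
    in the hope of breaking trapping sets or harmful cycles."""
    rng = rng or np.random.default_rng()
    n = llr.size

    hard = decode(H, llr)
    if syndrome_ok(H, hard):
        return hard

    for _ in range(rounds):
        subset = rng.choice(n, size=max(1, int(subset_frac * n)), replace=False)
        trial_llr = llr.copy()
        trial_llr[subset] = 0.0        # erase these bit nodes' channel values
        hard = decode(H, trial_llr)
        if syndrome_ok(H, hard):
            return hard
    return hard                         # best effort: last attempt

# Placeholder inner decoder (hard decision only) so the sketch runs; a real
# sum-product / belief-propagation decoder would be plugged in here instead.
def hard_decision(H, llr):
    return (llr < 0).astype(int)
```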


2013 ◽  
Vol 710 ◽  
pp. 723-726
Author(s):  
Yuan Hua Liu ◽  
Mei Ling Zhang

A novel bit-flipping (BF) algorithm with low complexity for high-throughput decoding of low-density parity-check (LDPC) codes is presented. At each iteration, a novel threshold pattern is used to determine whether each code bit should be flipped, so that the flipping error probability is effectively decreased. Compared with the weighted BF algorithm and its modifications, the modified BF algorithm has significantly lower complexity and decoding time. Simulations show that the proposed BF algorithm achieves excellent performance and fast convergence while maintaining significantly low complexity, thus facilitating high-throughput decoding.
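The paper's specific threshold pattern is not reproduced in the abstract; the following sketch shows a generic threshold-based bit-flipping decoder of the kind described, with the flipping metric, threshold, and toy matrix chosen purely for illustration.

```python
import numpy as np

def threshold_bf_decode(H, hard_bits, max_iter=50, threshold=0):
    """Generic threshold bit-flipping sketch (not the paper's exact rule).
    Each iteration flips every bit whose unsatisfied-minus-satisfied check
    count exceeds `threshold`."""
    x = hard_bits.copy()
    m, n = H.shape
    degree = H.sum(axis=0)                         # checks touching each bit
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2                    # 1 = unsatisfied check
        if not syndrome.any():
            return x, True                         # valid codeword found
        unsat = H.T.dot(syndrome)                  # unsatisfied checks per bit
        metric = 2 * unsat - degree                # unsatisfied minus satisfied
        flip = metric > threshold
        if not flip.any():
            flip = metric == metric.max()          # flip worst bit(s) to keep moving
        x = (x + flip.astype(int)) % 2
    return x, False

# Toy example: all-zero codeword with a single flipped bit (illustrative only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])
received = np.array([0, 0, 0, 0, 0, 1])
print(threshold_bf_decode(H, received))            # recovers all-zero codeword
```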

