Irregular LDPC Codes
Recently Published Documents


TOTAL DOCUMENTS: 143 (FIVE YEARS: 9)

H-INDEX: 17 (FIVE YEARS: 2)

Author(s):  
А.В. Башкиров ◽  
И.В. Свиридова ◽  
Т.Д. Ижокина ◽  
Е.А. Зубкова ◽  
О.В. Свиридова ◽  
...  

The analytical approach to determining the optimal post-processing function for the minimum operation in the MIN-SUM algorithm, previously derived for regular low-density parity-check (LDPC) codes, is extended to irregular LDPC codes. In the irregular case, the optimal post-processing expression varies from one check node to another, as well as from one iteration to the next. For practical use, this optimal function must be approximated. Unlike the regular case, where a single post-processing function can be used throughout the decoding process without loss of bit-error performance, for irregular codes it is critical to vary the post-processing from one iteration to the next in order to achieve good performance. With this approach it was found that the bit-error performance of the belief-propagation algorithm corresponds to an improvement of 1 dB over the MIN-SUM algorithm without post-processing. First, an overview of the approach and the analytical framework for optimal post-processing is presented. Next, the optimal post-processing function for irregular codes is given and possible simplifications are discussed. Finally, simulation results and the benefits of the approximation are shown.
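The post-processing discussed above modifies the result of the minimum operation at each check node. As a rough illustration of where such a correction enters, the Python sketch below shows a min-sum check-node update in which the correction is a simple offset looked up by iteration and check-node degree; the offset table, its values, and the offset form itself are assumptions for illustration, not the optimal function derived in the paper.

```python
import numpy as np

# Hypothetical table of correction offsets, indexed by (iteration, check-node degree).
# The paper derives the post-processing analytically; a fixed offset is only a stand-in.
OFFSETS = {(0, 6): 0.5, (1, 6): 0.4}

def check_node_update(incoming, iteration, offsets=OFFSETS):
    """Min-sum check-node update with an iteration- and degree-dependent
    offset correction (a crude approximation of the post-processing step)."""
    incoming = np.asarray(incoming, dtype=float)
    degree = len(incoming)
    signs = np.where(incoming >= 0, 1.0, -1.0)
    mags = np.abs(incoming)
    total_sign = np.prod(signs)
    beta = offsets.get((iteration, degree), 0.0)

    out = np.empty_like(incoming)
    for i in range(degree):
        others = np.delete(mags, i)          # extrinsic: exclude message i itself
        m = max(others.min() - beta, 0.0)    # post-processing: offset, clipped at zero
        out[i] = total_sign * signs[i] * m   # product of the signs of the other messages
    return out
```

Because the correction is indexed by iteration (and degree), the decoder can change it from one iteration to the next, which is exactly the flexibility the abstract identifies as critical for irregular codes.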


2020 ◽  
Vol 2 (2) ◽  
pp. 1-5
Author(s):  
Shahnas P

Low-density parity-check (LDPC) codes have shown interesting results for transmitting embedded bit streams over noisy communication channels. A performance comparison of regular and irregular LDPC codes with SPIHT-coded images is carried out here. Different error-sensitive classes of image data are obtained by using the SPIHT algorithm as the image coder. Irregular LDPC codes map the more important classes of data onto a higher-protection class to provide more protection. The different protection classes of an irregular LDPC code improve the overall performance of data transmission against channel errors. Simulation results show the superiority of irregular over regular LDPC codes.
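The central idea is the mapping of the more error-sensitive SPIHT bit classes onto the better-protected positions of the irregular code. A minimal Python sketch of such an unequal-error-protection mapping follows; the function name, its inputs, and the simple degree-based notion of protection are assumptions for illustration, not the exact scheme used in the paper.

```python
import numpy as np

def uep_mapping(bit_class, column_degrees, k):
    """Map SPIHT bit classes to codeword positions of an irregular LDPC code
    so that the most error-sensitive bits land on the highest-degree
    (typically best-protected) variable nodes.

    bit_class      : importance class per information bit, length k (0 = most sensitive)
    column_degrees : variable-node degree of each codeword position
    k              : number of information bits
    """
    # Codeword positions ordered from highest to lowest variable-node degree.
    positions = np.argsort(-np.asarray(column_degrees), kind="stable")[:k]
    # Information bits ordered from most to least sensitive class.
    order = np.argsort(np.asarray(bit_class), kind="stable")
    mapping = np.empty(k, dtype=int)
    mapping[order] = positions
    return mapping  # mapping[i] = codeword position carrying information bit i
```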


2020 ◽  
Vol 113 (1) ◽  
pp. 453-468
Author(s):  
R. Mahalakshmi ◽  
P. V. Bhuvaneshwari ◽  
C. Tharini ◽  
Vidhyacharan Bhaskar

2020 ◽  
Vol 68 (3) ◽  
pp. 1329-1343 ◽  
Author(s):  
Michael Meidlinger ◽  
Gerald Matz ◽  
Andreas Burg

2018 ◽  
Vol 8 (10) ◽  
pp. 1884 ◽  
Author(s):  
Maximilian Stark ◽  
Jan Lewandowsky ◽  
Gerhard Bauch

In high-throughput applications, low-complexity and low-latency channel decoders are inevitable. Hence, for low-density parity-check (LDPC) codes, message passing decoding has to be implemented with coarse quantization—that is, the exchanged beliefs are quantized with a small number of bits. This can result in a significant performance degradation with respect to decoding with high-precision messages. Recently, so-called information-bottleneck decoders were proposed which leverage a machine learning framework (i.e., the information bottleneck method) to design coarse-precision decoders with error-correction performance close to high-precision belief-propagation decoding. In these decoders, all conventional arithmetic operations are replaced by look-up operations. Irregular LDPC codes for next-generation fiber optical communication systems are characterized by high code rates and large maximum node degrees. Consequently, the implementation complexity is mainly influenced by the memory required to store the look-up tables. In this paper, we show that the complexity of information-bottleneck decoders remains manageable for irregular LDPC codes if our proposed construction approach is deployed. Furthermore, we reveal that in order to design information bottleneck decoders for arbitrary degree distributions, an intermediate construction step which we call message alignment has to be included. Exemplary numerical simulations show that incorporating message alignment in the construction yields a 4-bit information bottleneck decoder which performs only 0.15 dB worse than a double-precision belief propagation decoder and outperforms a min-sum decoder.
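To make the memory argument concrete, the sketch below shows, for assumed 4-bit message labels and random placeholder tables, how a lookup-only node update is typically composed from pairwise lookups. It illustrates the general structure of lookup-table decoders, not the specific information-bottleneck construction or the message-alignment step proposed in the paper.

```python
import numpy as np

def lut_update(msg_a, msg_b, lut):
    """Two-input node update of a coarse-quantized (e.g. 4-bit) decoder:
    both messages are small integer labels and the 'arithmetic' is a single
    table lookup. The table itself would be designed offline, e.g. with the
    information bottleneck method."""
    return lut[msg_a, msg_b]

def node_update(msgs, luts):
    """Node update of arbitrary degree, realised by chaining pairwise lookups.
    The number of distinct tables, and hence the memory footprint, grows with
    the maximum node degree, which is why high-rate irregular codes with large
    degrees are the challenging case."""
    acc = msgs[0]
    for lut, m in zip(luts, msgs[1:]):
        acc = lut_update(acc, m, lut)
    return acc

# Example with assumed 4-bit messages (16 labels) and placeholder tables.
rng = np.random.default_rng(0)
luts = [rng.integers(0, 16, size=(16, 16)) for _ in range(3)]
print(node_update([3, 7, 12, 5], luts))
```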

