Efficient MAP-algorithm implementation on programmable architectures

2003 ◽  
Vol 1 ◽  
pp. 259-263 ◽  
Author(s):  
F. Kienle ◽  
H. Michel ◽  
F. Gilbert ◽  
N. Wehn

Abstract. Maximum-A-Posteriori (MAP) decoding algorithms are important HW/SW building blocks in advanced communication systems due to their ability to provide soft-output information, which can be efficiently exploited in iterative channel decoding schemes like Turbo-Codes. Multi-standard operation demands flexible implementations on programmable platforms. In this paper we analyze a quantized turbo decoder based on a Max-Log-MAP algorithm with Extrinsic Scaling Factor (ESF). Its communication performance approximates that of a turbo decoder with a Log-MAP algorithm while being less sensitive to quantization effects. We present turbo-decoder implementations on state-of-the-art DSPs and show that only a Max-Log-MAP implementation fulfills a throughput requirement of ~2 Mbit/s. The negligible overhead of the ESF implementation strengthens the case for Max-Log-MAP with ESF on programmable platforms.
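As a rough illustration of the trade-off discussed in this abstract, the Python sketch below contrasts the exact max* operator used by the Log-MAP algorithm with the Max-Log-MAP simplification, and shows where an extrinsic scaling factor would be applied. The function names and the ESF value of 0.75 are illustrative assumptions, not figures taken from the paper.

```python
import numpy as np

def max_star_exact(a, b):
    # Exact Jacobian logarithm used by the Log-MAP algorithm:
    # log(e^a + e^b) = max(a, b) + log(1 + e^-|a - b|)
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def max_star_maxlog(a, b):
    # Max-Log-MAP drops the correction term entirely.
    return np.maximum(a, b)

def scale_extrinsic(llr_extrinsic, esf=0.75):
    # Extrinsic Scaling Factor (ESF): damp the over-optimistic extrinsic
    # LLRs produced by Max-Log-MAP before they are passed to the other
    # component decoder.  esf = 0.75 is an assumed example value.
    return esf * llr_extrinsic
```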

Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 814
Author(s):  
Jun Li ◽  
Xiumin Wang ◽  
Jinlong He ◽  
Chen Su ◽  
Liang Shan

Turbo codes have been widely used in wireless communication systems due to their good error-correction performance. Under time division long term evolution (TD-LTE) of the 3rd generation partnership project (3GPP) wireless communication standard, the high-complexity Log maximum a posteriori (Log-MAP) decoding algorithm is usually approximated by a lookup-table Log-MAP (LUT-Log-MAP) algorithm or a Max-Log-MAP algorithm, but these two algorithms suffer from high complexity and a high bit error rate, respectively. In this paper, we propose a normalized Log-MAP (Nor-Log-MAP) decoding algorithm in which the max* function is approximated by a fixed normalization factor multiplied by the max function. Combining the Nor-Log-MAP algorithm with the LUT-Log-MAP algorithm yields a new LUT-Nor-Log-MAP algorithm, whose decoding performance is close to that of the LUT-Log-MAP algorithm. Based on the decoding method of the Nor-Log-MAP algorithm, we also put forward a normalization functional unit (NFU) for the computing unit of a soft-input soft-output (SISO) decoder. The simulation results show that the LUT-Nor-Log-MAP algorithm saves about 2.1% of logic resources compared with the LUT-Log-MAP algorithm. Compared with the Max-Log-MAP algorithm, the LUT-Nor-Log-MAP algorithm shows a gain of 0.25~0.5 dB in decoding performance. On the Cyclone IV platform, the designed turbo decoder achieves a throughput of 36 Mbit/s at a maximum clock frequency of 44 MHz.
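To make the two approximations of max* concrete, the following minimal Python sketch contrasts a lookup-table correction (LUT-Log-MAP) with the normalized-max approximation described in the abstract. The table size, step width, and the factor k = 0.9 are assumptions chosen for illustration, not the parameters derived in the paper.

```python
import numpy as np

# Illustrative 8-entry correction table for log(1 + e^-|a-b|), sampled at
# |a - b| = 0, 0.25, ..., 1.75 (values computed from the exact formula,
# not taken from the paper).
_LUT_STEP = 0.25
_LUT = np.log1p(np.exp(-_LUT_STEP * np.arange(8)))

def max_star_lut(a, b):
    # LUT-Log-MAP: read the correction term from a small table indexed
    # by the quantized magnitude of the difference |a - b|; beyond the
    # table range the correction is treated as zero.
    d = np.abs(np.asarray(a, dtype=float) - b)
    idx = np.minimum((d / _LUT_STEP).astype(int), len(_LUT) - 1)
    correction = np.where(d < _LUT_STEP * len(_LUT), _LUT[idx], 0.0)
    return np.maximum(a, b) + correction

def max_star_normalized(a, b, k=0.9):
    # Nor-Log-MAP as described in the abstract: approximate max* by a
    # fixed normalization factor times max(a, b).  k = 0.9 is an assumed
    # example value, not the factor used in the paper.
    return k * np.maximum(a, b)
```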


2014 ◽  
Vol 63 (3) ◽  
pp. 531-537 ◽  
Author(s):  
Maurizio Martina ◽  
Stylianos Papaharalabos ◽  
P. Takis Mathiopoulos ◽  
Guido Masera

Author(s):  
Santosh Gooru ◽  
Dr. S. Rajaram

Recent wireless communication standards such as 3GPP-LTE, WiMAX, DVB-SH and HSPA incorporate turbo codes for their excellent performance. This work provides an overview of this class of channel codes, which have been shown to be capable of performing close to the Shannon limit. It starts with a brief discussion of turbo encoding and then moves on to describe the form of iterative decoder most commonly used to decode turbo codes. Here, the turbo decoder uses the original MAP algorithm instead of the approximated Max-Log-MAP algorithm, thereby reducing the number of iterations needed to decode the transmitted information bits. The paper presents FPGA (Field Programmable Gate Array) implementation and simulation results for a turbo encoder and decoder structure conforming to the 3GPP-LTE standard.
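The iterative decoder structure mentioned in this abstract boils down to two component SISO decoders exchanging extrinsic information through an interleaver. The skeleton below sketches that loop in Python; `siso_decode`, `interleave` and `deinterleave` are placeholders for a MAP/Log-MAP component decoder and the LTE interleaver pair, not implementations from the paper, and the LLR sign convention for the hard decision is an assumption.

```python
import numpy as np

def turbo_decode(llr_sys, llr_par1, llr_par2, interleave, deinterleave,
                 siso_decode, n_iterations=8):
    """Skeleton of iterative turbo decoding.

    siso_decode(llr_sys, llr_par, llr_apriori) stands in for a MAP /
    Log-MAP component decoder returning extrinsic LLRs; interleave and
    deinterleave are the code's permutation and its inverse.
    """
    extrinsic_21 = np.zeros_like(llr_sys)   # a priori input to decoder 1
    for _ in range(n_iterations):
        # Decoder 1 works on the natural-order sequence.
        extrinsic_12 = siso_decode(llr_sys, llr_par1, extrinsic_21)
        # Decoder 2 works on the interleaved sequence.
        extrinsic_21 = deinterleave(
            siso_decode(interleave(llr_sys), llr_par2,
                        interleave(extrinsic_12)))
    # Final a posteriori LLRs and hard decisions (LLR < 0 -> bit 1).
    llr_app = llr_sys + extrinsic_12 + extrinsic_21
    return (llr_app < 0).astype(int)
```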


2021 ◽  
Author(s):  
Li Zhang ◽  
Weihong Fu ◽  
Fan Shi ◽  
Chunhua Zhou ◽  
Yongyuan Liu

Abstract. A decoder based on a long short-term memory (LSTM) neural network is proposed to address the high decoding delay caused by the poor parallelism of existing turbo decoding algorithms. The powerful parallel computing and feature-learning ability of neural networks can reduce the decoding delay and the bit error rate of turbo codes simultaneously. The proposed decoder follows the component-code structure that is unique to turbo codes. First, each component decoder is designed based on an LSTM network. Next, each layer of the component decoder is trained, and the trained weights are loaded into the turbo-code decoding neural network as initialization parameters. Then, the turbo-code decoding network is trained end to end. Finally, a complete turbo decoder is obtained. Simulation results show that the performance of the proposed decoder improves by 0.5–1.5 dB compared with the traditional serial decoding algorithm under white Gaussian noise and t-distributed noise. Furthermore, the results demonstrate that the proposed decoder can be used in communication systems with various turbo codes and that it alleviates the high delay of serial iterative decoding.
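A minimal sketch of one LSTM-based component decoder is given below, assuming PyTorch. The layer sizes, the bidirectional configuration, and the input layout (systematic, parity and a priori LLRs stacked per trellis step) are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class LSTMComponentDecoder(nn.Module):
    """Sketch of one LSTM-based component decoder (assumed architecture)."""

    def __init__(self, hidden_size=64, num_layers=2):
        super().__init__()
        # Each trellis step feeds three LLRs: systematic, parity, a priori.
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_size, 1)

    def forward(self, llrs):              # llrs: (batch, block_len, 3)
        features, _ = self.lstm(llrs)     # (batch, block_len, 2*hidden)
        return self.out(features).squeeze(-1)   # per-bit soft estimate

# Two such component decoders can be pre-trained individually, chained
# with an interleaver in between, and then fine-tuned end to end,
# mirroring the training procedure described in the abstract.
```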

