A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion

2009, Vol. 2009, pp. 1-11
Author(s): Johnny W. H. Kao, Stevan M. Berber, Abbas Bigdeli

A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was designed to compare the Bit Error Rate (BER) performance of the RNN decoder with a conventional decoder based on the Viterbi Algorithm (VA). The simulation results show that the novel algorithm achieves the same bit error rate as the VA decoder with lower decoding complexity. Most importantly, the algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data rate transmission. These characteristics, inherited from the neural network structure of the decoder and the iterative nature of the algorithm, allow it to outperform the conventional VA.
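The decoder above casts decoding as iterative minimisation of the squared distance between the received sequence and re-encoded bit estimates, updating one bit at a time in a neuron-like fashion. Below is a minimal sketch of that idea for a bipolar rate-1/2 code; the (7,5) generators, the initialisation, and the boundary handling are illustrative assumptions rather than the authors' exact formulation:

import numpy as np

# Rate-1/2 convolutional code over bipolar symbols {-1, +1}: multiplying
# +/-1 values plays the role of XOR on {0, 1} bits ((7,5) generators assumed).
def encode(x):
    """x: information symbols in {-1, +1}; returns an (len(x), 2) code array."""
    b = np.concatenate(([1.0, 1.0], x))          # encoder starts in the all-ones state
    c1 = b[2:] * b[1:-1] * b[:-2]                # g1 = 1 + D + D^2
    c2 = b[2:] * b[:-2]                          # g2 = 1 + D^2
    return np.stack([c1, c2], axis=1)

def decode(r, n_iter=10):
    """r: noisy (L, 2) received array; returns hard bipolar estimates of x."""
    L = len(r)
    b = np.ones(L + 2)
    b[2:] = np.where(r[:, 0] >= 0, 1.0, -1.0)    # crude initial estimate (assumption)
    for _ in range(n_iter):
        for t in range(2, L + 2):                # re-estimate one bit at a time
            i = t - 2
            # Correlate the received symbols that involve b[t] with the
            # partial products of the other, currently fixed, bits.
            s = r[i, 0] * b[t - 1] * b[t - 2] + r[i, 1] * b[t - 2]
            if i + 1 < L:
                s += r[i + 1, 0] * b[t + 1] * b[t - 1]
            if i + 2 < L:
                s += r[i + 2, 0] * b[t + 2] * b[t + 1] + r[i + 2, 1] * b[t + 2]
            b[t] = 1.0 if s >= 0 else -1.0       # sign update minimises the squared-distance cost
    return b[2:]

# Example: x = np.random.choice([-1.0, 1.0], 1000)
#          r = encode(x) + 0.5 * np.random.randn(1000, 2)
#          ber = np.mean(decode(r) != x)

Because every bit update only reads a small neighbourhood of received symbols, updates for well-separated positions can in principle run in parallel, which is the property the abstract highlights.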

In this paper, a neural network is used to reduce errors in turbo decoding. Turbo codes were one of the first successful attempts at obtaining error-correcting performance in the vicinity of the theoretical Shannon limit of -1.6 dB. Parallel concatenated encoding and iterative decoding are the two techniques used to construct turbo codes. Each iteration of turbo decoding lowers the Eb/No required to reach a desired bit error rate (BER), but the gain shrinks with every additional iteration. The output of the turbo encoder is transmitted through the channel, where noise is added. The noisy data is fed to the neural network, which is trained to reproduce the desired target, namely the encoded data. The turbo decoder then decodes the output of the neural network, and the network helps to reduce the number of errors. The bit error rate of the turbo decoder aided by the trained neural network is lower than that of the turbo decoder without it.
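A minimal sketch of this arrangement is given below, assuming PyTorch and a small fully connected denoiser; the block length, layer sizes, and noise model are placeholders rather than the paper's configuration:

import torch
import torch.nn as nn

# The network is trained to map noisy channel observations back to the clean
# turbo-encoder output (the "desired target" in the abstract); its output is
# then passed to an ordinary turbo decoder.
block_len = 384                                  # coded bits per block (assumed)
denoiser = nn.Sequential(
    nn.Linear(block_len, 512), nn.Tanh(),
    nn.Linear(512, block_len), nn.Tanh(),        # outputs constrained near +/-1
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(clean_bipolar, snr_db):
    """clean_bipolar: (batch, block_len) tensor of +/-1 encoder outputs."""
    sigma = 10.0 ** (-snr_db / 20.0)             # unit signal power assumed
    noisy = clean_bipolar + sigma * torch.randn_like(clean_bipolar)
    opt.zero_grad()
    loss = loss_fn(denoiser(noisy), clean_bipolar)
    loss.backward()
    opt.step()
    return loss.item()

At test time the denoiser output (or its sign) replaces the raw channel observations at the turbo decoder input, which is where the reported BER reduction comes from.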


2020, Vol. 0 (0)
Author(s): Hocine Fekih, Boubakar Seddik Bouazza, Keltoum Nouri

Recently, using iterative decoding algorithms to achieve a low bit error rate with spectrally efficient modulation has become a necessity for optical transmission. In this paper, we propose a coded modulation scheme based on a bit-interleaved circular recursive systematic convolutional (CRSC) code and 16-QAM modulation. The proposed system, considered as a serial concatenation of a channel encoder, a bit interleaver and an M-ary modulator, is flexible and easy to implement using a short code length. For a spectral efficiency of $\eta = 3\ \text{bit/s/Hz}$, the coding gain at a bit error rate of $10^{-6}$ is about 8 dB.
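The transmit chain (channel encoder, bit interleaver, 16-QAM mapper) can be sketched as below; the CRSC encoder itself is omitted and the Gray mapping shown is a common choice assumed for illustration, not necessarily the one used in the paper:

import numpy as np

GRAY_AXIS = {0b00: -3, 0b01: -1, 0b11: 1, 0b10: 3}   # per-axis Gray mapping

def map_16qam(bits):
    """bits: coded bits (length a multiple of 4) -> unit-energy 16-QAM symbols."""
    bits = np.asarray(bits, dtype=int).reshape(-1, 4)
    i = np.array([GRAY_AXIS[(b[0] << 1) | b[1]] for b in bits], dtype=float)
    q = np.array([GRAY_AXIS[(b[2] << 1) | b[3]] for b in bits], dtype=float)
    return (i + 1j * q) / np.sqrt(10.0)               # normalise so E[|s|^2] = 1

def bicm_transmit(coded_bits, interleaver):
    """Serial concatenation: coded bits -> bit interleaver -> 16-QAM mapper.
    interleaver: a permutation of range(len(coded_bits))."""
    interleaved = np.asarray(coded_bits, dtype=int)[np.asarray(interleaver)]
    return map_16qam(interleaved)

With 4 coded bits per symbol, the quoted spectral efficiency of 3 bit/s/Hz corresponds to an overall code rate of 3/4; in a standard BICM receiver, soft bit metrics computed from each symbol are deinterleaved and passed to the iterative decoder.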


Micromachines, 2021, Vol. 13 (1), pp. 31
Author(s): Qianwu Zhang, Zicong Wang, Shuaihang Duan, Bingyao Cao, Yating Wu, ...

In this paper, an improved end-to-end autoencoder for optical transceivers, based on reinforcement learning with decision trees, is proposed and experimentally demonstrated. The transmitter and receiver are treated as an asymmetrical autoencoder that combines a deep neural network with the AdaBoost algorithm. Experimental results show that 48 Gb/s transmission below the 7% hard-decision forward error correction (HD-FEC) threshold over 65 km of standard single-mode fiber (SSMF) is achieved with the proposed scheme. Moreover, we further experimentally study the tree depth and the number of decision trees, which are the two main factors affecting the bit error rate performance. Subsequent experiments showed that, at 48 Gb/s over fiber spans from 25 km to 75 km of SSMF, the effect of the number of decision trees on the bit error rate (BER) flattens out at around 30 trees, and the influence of tree depth on BER levels off at a depth of 5, which is defined as the optimal depth for the aforementioned fiber range. Compared with an autoencoder based on a fully connected neural network, our algorithm replaces multiplication operations with addition operations, reducing the number of multiplications in the training phase from about 10^8 to 10^7 while increasing the number of additions from about 10^6 to 10^8.
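The receiver side described above can be approximated with off-the-shelf components; the sketch below uses scikit-learn's AdaBoost ensemble of depth-5 decision trees with 30 estimators, the optimum reported in the abstract, while the feature construction and symbol labels are placeholders:

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

def build_receiver(tree_depth=5, n_trees=30):
    """AdaBoost ensemble of shallow decision trees acting as the symbol decider."""
    return AdaBoostClassifier(DecisionTreeClassifier(max_depth=tree_depth),
                              n_estimators=n_trees)

def frame_features(rx_samples, taps=8):
    """Group consecutive received samples into feature vectors (taps is assumed)."""
    n = (len(rx_samples) // taps) * taps
    return np.asarray(rx_samples[:n], dtype=float).reshape(-1, taps)

# Usage (placeholder data):
#   X = frame_features(received_waveform)       # features from the fiber output
#   y = transmitted_symbol_labels                # labels known from the transmitter
#   rx = build_receiver().fit(X, y)
#   decisions = rx.predict(frame_features(test_waveform))

Each tree decision is a short sequence of threshold comparisons rather than a chain of multiply-accumulates, which is the source of the complexity saving claimed relative to a fully connected receiver.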


1994, Vol. 1 (1), pp. 72-75
Author(s): J-M.P. Delavaux, Y.K. Park, V. Mizrahi, D.J. Digiovanni

2015, Vol. 719-720, pp. 750-755
Author(s): Xuan Feng Qiu, Hong Yu Zhao, Ping Zhi Fan

For high-data-rate transmission, parallel turbo decoding can guarantee high communication reliability while greatly reducing decoding latency. A challenge in the design of a parallel turbo decoder is the construction of collision-free interleavers that avoid memory access collisions during decoding. In this letter, a novel algorithm for generating collision-free S-random interleavers for turbo codes is proposed. The proposed algorithm obtains a larger spreading factor S and a better average code distance than existing collision-free row-column S-random interleavers.
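A minimal sketch of such a generator follows, combining the standard S-random acceptance test with a per-decoding-step memory-bank check; the bank assignment and restart strategy are assumptions for illustration, not the exact rules of the proposed algorithm:

import random

def collision_free_s_random(N, S, M, max_restarts=100):
    """N: block length, S: spreading factor, M: parallel decoder windows
    (N is assumed to be divisible by M). Returns a permutation of range(N)."""
    W = N // M                                   # sub-block processed by each worker
    for _ in range(max_restarts):
        remaining = list(range(N))
        random.shuffle(remaining)
        pi = []
        for i in range(N):
            for idx, cand in enumerate(remaining):
                # S-random test: positions within S of each other must map
                # to values more than S apart.
                if any(abs(cand - pi[j]) <= S for j in range(max(0, i - S), i)):
                    continue
                # Collision-free test: at decoding step i % W the M workers
                # access pi[step], pi[step+W], ...; these addresses must fall
                # in distinct memory banks (bank = address // W, assumed).
                step = i % W
                if cand // W in {pi[k] // W for k in range(step, i, W)}:
                    continue
                pi.append(cand)
                del remaining[idx]
                break
            else:                                # no admissible candidate: restart
                break
        if len(pi) == N:
            return pi
    raise RuntimeError("no interleaver found; reduce S or allow more restarts")

The spreading test controls the distance properties of the concatenated code, while the bank test is what lets M component decoders read the interleaved memory simultaneously without contention.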


Data mining is an interdisciplinary science which exploits different methods, including statistics, pattern recognition, machine learning, and databases, to extract the knowledge hidden in huge datasets. In this paper, we sought to develop a model for paying pregnancy-period wage compensation to Social Security Organization (SSO) clients by using data mining techniques. The SSO is a public insurance organization whose main mission is to cover salaried workers (mandatory) and self-employed people (optional). To develop the proposed model, 5931 samples were selected randomly from 11504 clients. The K-Means clustering algorithm was then employed to divide the data into cluster 1, consisting of 2732 samples, and cluster 2, consisting of 3199 samples. In each cluster, the data were divided into training and test sets with a ratio of 90 to 10, and a multi-layer perceptron (MLP) neural network was trained separately for each cluster. The tanh transfer function was used as the activation function in the hidden layers. Numerous tests were conducted to find the network structure with the lowest error rate; the best structure had two hidden layers, with 5 neurons in the first layer and 4 neurons in the second, so the neural network structure was in the 5-4-1 format. Finally, the best model was selected by using the error evaluation method, with the MAPE and R2 criteria employed to evaluate the proposed model. On the test data, the result was 0.96 for cluster 1 and 0.95 for cluster 2. The proposed method produced a lower error rate than the other existing models.
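A compact sketch of this pipeline using scikit-learn is shown below; the feature matrix X and target y are placeholders, and all hyperparameters other than the two clusters, the 90/10 split, and the 5-4-1 tanh architecture are assumptions:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_percentage_error

def fit_per_cluster(X, y, n_clusters=2, seed=0):
    """X: (n_samples, n_features) array of client attributes, y: compensation amount."""
    clusters = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X)
    models, scores = {}, {}
    for c in range(n_clusters):
        Xc, yc = X[clusters == c], y[clusters == c]
        X_tr, X_te, y_tr, y_te = train_test_split(
            Xc, yc, test_size=0.1, random_state=seed)      # 90/10 split
        mlp = MLPRegressor(hidden_layer_sizes=(5, 4),      # 5-4-1 structure
                           activation='tanh', max_iter=2000,
                           random_state=seed)
        mlp.fit(X_tr, y_tr)
        pred = mlp.predict(X_te)
        models[c] = mlp
        scores[c] = {"R2": r2_score(y_te, pred),
                     "MAPE": mean_absolute_percentage_error(y_te, pred)}
    return models, scores

Clustering first and fitting one small network per cluster keeps each model simple while letting the two client groups be modelled separately.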

