codeword length — Recently Published Documents

TOTAL DOCUMENTS: 43 (FIVE YEARS: 10)
H-INDEX: 4 (FIVE YEARS: 0)

Author(s):  
Hasan Aldiabat ◽  
Nedal Al-ababneh

In this paper, the bandwidth density of a misaligned free-space optical interconnect (FSOI) system, with and without coding, is considered under a fixed bit error rate. In particular, we study the effect of error correction codes of various codeword lengths on the bandwidth density and misalignment tolerance of the FSOI system in the presence of higher-order modes. Moreover, the paper demonstrates the use of the fill factor of the detector array as a design parameter for optimizing the bandwidth density of the communication link. The numerical results demonstrate that the bandwidth density improves significantly with coding, and that the improvement depends strongly on the codeword length and code rate used. In addition, the results clearly show the optimum fill factor values that achieve the maximum bandwidth density and misalignment tolerance of the system.


Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 65
Author(s):  
Jesús E. García ◽  
Verónica A. González-López ◽  
Gustavo H. Tasca ◽  
Karina Y. Yaginuma

In the framework of coding theory, under the assumption of a Markov process (Xt) on a finite alphabet A, the compressed representation of the data consists of a description of the model used to code the data and the encoded data. Given the model, Huffman's algorithm is optimal in the number of bits needed to encode the data. On the other hand, modeling (Xt) through a Partition Markov Model (PMM) reduces the number of transition probabilities needed to define the model. This paper shows how using a Huffman code with a PMM reduces the number of bits needed in this process. We prove that estimating a PMM allows the entropy of (Xt) to be estimated, providing an estimator of the minimum expected codeword length per symbol. We show the efficiency of the new methodology in a simulation study and on a real problem of compressing DNA sequences of SARS-CoV-2, where we obtain a reduction of at least 10.4% on the real data.
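The entropy bound this abstract invokes can be illustrated with a short sketch (not the paper's PMM construction; the alphabet and probabilities below are invented for illustration): for a memoryless distribution, the expected Huffman codeword length per symbol L satisfies H ≤ L < H + 1.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return {symbol: codeword length} for a Huffman code over `probs`."""
    # Heap entries: (probability, unique tiebreaker, {symbol: depth}).
    heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {'A': 0.5, 'C': 0.25, 'G': 0.15, 'T': 0.10}   # illustrative distribution
lengths = huffman_lengths(probs)
avg_len = sum(probs[s] * lengths[s] for s in probs)
entropy = -sum(p * log2(p) for p in probs.values())
print(f"entropy = {entropy:.3f} bits, expected codeword length = {avg_len:.3f} bits")
```

With these probabilities the code assigns lengths 1, 2, 3, 3 and an expected length of 1.75 bits, just above the entropy of about 1.743 bits.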


Author(s):  
R. Asokan ◽  
T. Vijayakumar

Noise can scramble a message in transit. This is true for both voicemails and digital communications transmitted to and from computer systems, where errors tend to occur during transmission. Computer memory is the most common place to use Hamming-code error correction. With extra parity/redundancy bits added, a Hamming code can detect and correct single-bit errors. Hamming coding is often used for short-distance data transmissions; when scaled to longer data lengths, the redundancy bits are interspersed in the data and extracted afterwards. The Hamming code approach can be quickly and easily adapted to any situation, and it is well suited to sending large data bitstreams since the ratio of overhead bits to data bits is much lower. This article investigates extended Hamming codes for product codes. The proposal particularly emphasises performance at low error rates, which is critical for multimedia wireless applications. It provides a foundation and a comprehensive set of methods for quantitatively evaluating this performance without the need for time-consuming simulations, and it offers fresh theoretical findings on the well-known approximation in which the bit error rate is roughly equal to the frame error rate times the ratio of the minimum distance to the codeword length. Moreover, the analytical method is applied to practical design considerations such as shortened and punctured codes, along with the calculation of payload and redundancy bits. Using the extended identity equation on the dual codes, decoding can be done at the first instance. Testing shows a redundancy-bit proportion of 43.48%, a large reduction achieved in this research work.
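The single-bit correction described here can be sketched with the classic (7,4) Hamming code (a minimal illustration, not the paper's extended product construction):

```python
import numpy as np

# (7,4) Hamming code in systematic form G = [I | P], H = [P^T | I]
# (one common convention for this code).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    """Map 4 data bits to a 7-bit codeword."""
    return (np.array(data4) @ G) % 2

def correct(received):
    """Correct at most one flipped bit using the syndrome."""
    syndrome = (H @ received) % 2
    if syndrome.any():
        for col in range(7):                       # find the matching H column
            if np.array_equal(H[:, col], syndrome):
                received = received.copy()
                received[col] ^= 1
                break
    return received

codeword = encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[2] ^= 1                                  # single-bit channel error
print(correct(corrupted).tolist() == codeword.tolist())  # True
```

Three redundancy bits per four data bits give the 7-bit codeword; any single-bit error produces a syndrome equal to the corresponding column of H, which pinpoints the flipped position.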


Author(s):  
Alireza Hasani ◽  
Lukasz Lopacinski ◽  
Rolf Kraemer

Layered decoding (LD) facilitates a partially parallel architecture for performing the belief propagation (BP) algorithm when decoding low-density parity-check (LDPC) codes. Such a schedule generally has lower implementation complexity than a fully parallel architecture and a higher convergence rate than both serial and parallel schedules, regardless of the codeword length or code rate. In this paper, we introduce a modified shuffling method that shuffles the rows of the parity-check matrix (PCM) of a quasi-cyclic LDPC (QC-LDPC) code, yielding a PCM in which each layer can be produced by circulating the layer above it one symbol to the right. The proposed shuffling scheme additionally guarantees that the columns of each layer of the shuffled PCM have either zero weight or single weight, a condition that plays a key role in further decreasing LD complexity. We show that, owing to these two properties, the number of occupied look-up tables (LUTs) on a field-programmable gate array (FPGA) is reduced by about 93% and the consumed on-chip power by nearly 80%, while the bit error rate (BER) performance is maintained. The only drawback of the shuffling is a degradation of decoding throughput, which is negligible at low $E_b/N_0$ values, down to a BER of 10⁻⁶.
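The layer-circulation property can be checked on a toy circulant matrix (an illustration of the structural idea only, not the paper's shuffling algorithm or an actual QC-LDPC PCM):

```python
import numpy as np

# In a circulant matrix, every row is the row above it circulated one
# position to the right -- the property the proposed shuffling establishes
# layer by layer in the shuffled PCM.
first_row = np.array([1, 0, 1, 0, 0, 0, 1, 0])   # arbitrary toy layer
pcm = np.array([np.roll(first_row, k) for k in range(len(first_row))])

# Each layer (row) equals the previous layer shifted one symbol right:
for k in range(1, pcm.shape[0]):
    assert np.array_equal(pcm[k], np.roll(pcm[k - 1], 1))
print("circulant layer property holds")
```

In hardware terms, this regularity means the routing for one layer can be reused for every other layer by a fixed rotation, which is what drives down the LUT count.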


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 983
Author(s):  
Jingjian Li ◽  
Wei Wang ◽  
Hong Mo ◽  
Mengting Zhao ◽  
Jianhua Chen

A distributed arithmetic coding algorithm based on source-symbol purging and a context model is proposed to solve the asymmetric Slepian–Wolf problem. The scheme makes better use of both the correlation between adjacent symbols in the source sequence and the correlation between the corresponding symbols of the source and side-information sequences to improve the coding performance. Since the encoder purges some symbols from the source sequence, a shorter codeword length can be obtained, while the purged symbols still serve as context for the subsequent symbols to be encoded. An improved calculation method for the posterior probability, based on the purging feature, is also proposed so that the decoder can exploit the correlation within the source sequence to improve decoding performance. In addition, the scheme achieves better error performance at the decoder by adding a forbidden symbol in the encoding process. The simulation results show that the encoding complexity and the minimum code rate required for lossless decoding are lower than those of traditional distributed arithmetic coding. When the internal correlation of the source is strong, the proposed scheme exhibits better decoding performance than other DSC schemes at the same code rate.
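The forbidden-symbol idea can be sketched with a bare interval-narrowing encoder (a toy illustration, not the paper's distributed scheme; the sequence and probabilities are invented): reserving probability mass eps for a symbol that is never encoded shrinks every interval by a factor (1 − eps), adding a controlled redundancy of −log2(1 − eps) bits per symbol that a decoder can exploit for error detection.

```python
from math import log2

def encode_interval(seq, p0, eps):
    """Narrow [low, high) over a binary source; eps is the forbidden-symbol mass."""
    low, high = 0.0, 1.0
    scale = 1.0 - eps                  # probability mass left for real symbols
    for bit in seq:
        width = high - low
        if bit == 0:
            high = low + width * p0 * scale
        else:
            low = low + width * p0 * scale
            high = low + width * (1 - p0) * scale
    return low, high

seq = [0, 1, 0, 0, 1, 0, 0, 0]
low_a, high_a = encode_interval(seq, p0=0.7, eps=0.0)    # plain coder
low_b, high_b = encode_interval(seq, p0=0.7, eps=0.05)   # with forbidden symbol
# Code length is about -log2(interval width); the forbidden symbol costs
# -log2(1 - eps) extra bits per symbol.
print(f"{-log2(high_a - low_a):.3f} vs {-log2(high_b - low_b):.3f} bits")
```

A valid bitstream can never land in the reserved slice of the interval, so a decoder that reaches the forbidden region knows a channel error has occurred.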


2021 ◽  
Vol 2(50) ◽  
Author(s):  
Ala Kobozeva ◽  
◽  
Arteom Sokolov ◽  

Today, steganographic systems with multiple access are of considerable importance. In such systems, the orthogonal Walsh-Hadamard transform is most often used for multiplexing and dividing channels, which creates a need for efficient coding of the Walsh-Hadamard transform coefficients for convenient subsequent embedding. The purpose of the research is to develop a theoretical basis for efficient coding of the embedded signal in steganographic systems with multiple access for an arbitrary number of users N, based on MC-CDMA technology. This purpose was fulfilled by forming the theoretical basis for constructing effective codes designed to encode the embedded signal in such systems. The most important results are the proposed and proven relations that determine both the possible values of the Walsh-Hadamard transform coefficients for a given number of divided channels and the probability of occurrence of those values, which allows the construction of effective codes to represent the embedded signal. For the case of N=4 divided channels, we propose a constant-amplitude code that achieves a smaller average codeword length than the Huffman code while also possessing error-correcting capabilities. The significance of the results lies in the possibility of using the developed theoretical basis to construct effective codes for encoding the embedded signal in steganographic systems with multiple access for an arbitrary number of divided channels N.
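That the transform coefficients take only a few discrete values can be checked directly (a sketch under the assumption of ±1 channel data, not the paper's derivation): for N = 4, every Walsh-Hadamard coefficient is a sum of four ±1 terms, so it can only be −4, −2, 0, 2, or 4.

```python
from itertools import product

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

N = 4                                   # number of divided channels
H = hadamard(N)
# Enumerate the transform coefficients over all +/-1 data vectors.
values = set()
for bits in product([-1, 1], repeat=N):
    for row in H:
        values.add(sum(b * h for b, h in zip(bits, row)))
print(sorted(values))  # → [-4, -2, 0, 2, 4]
```

A small, known coefficient alphabet like this is what makes short fixed-structure codes competitive with Huffman coding for the embedded signal.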


2019 ◽  
Vol 69 (3) ◽  
pp. 274-279
Author(s):  
Jimmy B. Tamakuwala

Most digital communication systems use forward error correction (FEC) together with an interleaver to achieve reliable communication over a noisy channel. To extract useful information from intercepted data in a non-cooperative context, algorithms are needed for blind identification of the FEC code and interleaver parameters. In this paper, a matrix-rank-based algebraic algorithm is presented for the joint and blind identification of block-interleaved convolutional code parameters, covering cases where the interleaving length is not necessarily an integer multiple of the codeword length. Simulations show that the code rate and block interleaver length are identified correctly with a probability of detection equal to 1 for bit error rates of 10⁻⁴ or less.
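The rank criterion behind such blind identification can be sketched with a toy (4,3) even-parity code standing in for the FEC (illustration only; the paper treats convolutional codes with block interleaving): when the intercepted stream is arranged into a matrix whose width is a multiple of the codeword length, the parity constraints align column-wise and the GF(2) rank drops below full.

```python
import random

def gf2_rank(rows):
    """Gaussian elimination over GF(2); each row is an int bitmask."""
    rank = 0
    rows = list(rows)
    while any(rows):
        pivot = max(rows)                       # row with the highest leading bit
        rows.remove(pivot)
        top = 1 << (pivot.bit_length() - 1)
        rows = [r ^ pivot if r & top else r for r in rows]
        rank += 1
    return rank

random.seed(1)
# Intercepted stream of (4,3) even-parity codewords (toy stand-in for the FEC).
stream = []
for _ in range(200):
    data = [random.randint(0, 1) for _ in range(3)]
    stream += data + [sum(data) % 2]

def rank_for_width(stream, n_col, n_rows=40):
    """Fill an n_rows x n_col matrix from the stream and return its GF(2) rank."""
    rows = [int("".join(map(str, stream[i * n_col:(i + 1) * n_col])), 2)
            for i in range(n_rows)]
    return gf2_rank(rows)

for n_col in range(3, 9):
    print(n_col, rank_for_width(stream, n_col))
# The rank falls short of n_col exactly when n_col is a multiple of the
# codeword length (4), exposing the code parameters.
```

Sweeping the matrix width and watching for rank deficiency is the basic mechanism; the paper's contribution is handling the harder case where the interleaver length does not divide evenly into codewords.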

