Upper Bound on Correcting Partial Random Errors

2013 ◽  
Vol 13 (3) ◽  
pp. 41-49
Author(s):  
Ankita Gaur ◽  
Bhu Dev Sharma

Abstract: Since coding has become a basic tool for practically all communication and electronic devices, it is important to carefully study the error patterns that actually occur. In non-binary cases, this allows the correction of only partial errors, rather than the full errors traditionally studied using the Hamming distance. The paper considers a class of distances, SK-distances, in terms of which partial errors can be defined. Examining the sufficient condition for the existence of a parity-check matrix with a given number of parity checks, the paper establishes an upper bound on the number of parity-check digits of a linear (n, k) code with minimum SK-distance at least d that corrects all partial random errors. The result generalizes the widely used Varshamov-Gilbert bound, which follows from it as a particular case.
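For context, the classical Varshamov-Gilbert condition that the paper generalizes can be sketched computationally. This is a minimal sketch of the Hamming-distance special case only (the SK-distance version is in the paper itself; the function name is illustrative):

```python
from math import comb

def varshamov_gilbert_ok(n, k, d, q=2):
    """Sufficient (Varshamov-Gilbert) condition for the existence of a
    linear [n, k] code over GF(q) with minimum Hamming distance >= d:
    sum_{i=0}^{d-2} C(n-1, i) * (q-1)^i < q^(n-k)."""
    lhs = sum(comb(n - 1, i) * (q - 1) ** i for i in range(d - 1))
    return lhs < q ** (n - k)

# The binary [7, 4] Hamming code (d = 3) satisfies the condition,
# while d = 4 with the same n, k does not:
print(varshamov_gilbert_ok(7, 4, 3), varshamov_gilbert_ok(7, 4, 4))
```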

2017 ◽  
Vol 09 (04) ◽  
pp. 1750051
Author(s):  
Vinod Tyagi ◽  
Ambika Tyagi

Byte correcting perfect codes are developed to correct burst errors within bytes. If a code is a byte correcting code, we say that it is [Formula: see text]-burst correcting, meaning that it corrects a single burst of length [Formula: see text] or less within a byte. A byte correcting code is such that if [Formula: see text]; [Formula: see text] denotes the set of syndromes obtained from the [Formula: see text]th byte of the parity-check matrix [Formula: see text] and [Formula: see text]; [Formula: see text] denotes the set of syndromes obtained from the [Formula: see text]th byte of the parity-check matrix [Formula: see text], then [Formula: see text]. In an [Formula: see text] code with [Formula: see text] bytes of size [Formula: see text], we have [Formula: see text]. Byte correcting codes are preferred where the information stored in all bytes is equally important. But there are cases where some parts of the message are more important than others: for example, if we have to transmit the message "Shift Battalion (Bn) from Location A to Location B" at a border location, then we will focus more on the information "shift", "Location [Formula: see text]" and "Location [Formula: see text]", i.e., the bytes containing this information will be more important than the others. In this situation, these bytes should have no possibility of error during transmission; in other words, they should be protected absolutely against any error. Keeping this in mind, we study the burst error correcting capabilities of byte-oriented codes in terms of the byte protection level of each byte. If a byte error pattern of length [Formula: see text] occurs in transmission, then all bytes of the received pattern whose burst protection level is [Formula: see text] or more will be decoded correctly, even though the code word as a whole may be decoded wrongly.
Taking the code length [Formula: see text] to be divided into [Formula: see text] bytes with the burst protection level of the [Formula: see text]th byte given as [Formula: see text]; [Formula: see text]; [Formula: see text], we construct linear codes that we call byte protecting burst (BPB) codes and investigate their byte protecting capabilities in this paper.
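The disjoint-syndrome condition described above can be checked mechanically for a concrete binary parity-check matrix. A minimal sketch (the function name and byte layout are illustrative, not from the paper):

```python
import numpy as np
from itertools import product

def byte_burst_syndromes(H, byte_starts, byte_len, l):
    """For each byte (a group of byte_len consecutive columns of the
    parity-check matrix H, starting at the given column indices), collect
    the syndromes of all binary bursts of length <= l confined to that
    byte.  A burst of length b starts and ends with a 1."""
    r, n = H.shape
    sets = []
    for start in byte_starts:
        syn = set()
        for b in range(1, l + 1):                  # burst length
            for pos in range(byte_len - b + 1):    # start inside the byte
                for pat in product([0, 1], repeat=b):
                    if pat[0] != 1 or pat[-1] != 1:
                        continue
                    e = np.zeros(n, dtype=int)
                    e[start + pos:start + pos + b] = pat
                    syn.add(tuple(H @ e % 2))
        sets.append(syn)
    return sets

# Single errors (l = 1) within two 2-column bytes of a toy H = I4:
sets = byte_burst_syndromes(np.eye(4, dtype=int), [0, 2], 2, 1)
print(sets[0].isdisjoint(sets[1]))  # True
```

The code is byte correcting for such bursts when the syndrome sets of distinct bytes are pairwise disjoint, so each syndrome points to a unique byte and burst pattern.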


2013 ◽  
Vol 2 (1) ◽  
pp. 143-150
Author(s):  
P.K. Das

Detecting and correcting errors is one of the main tasks in coding theory, and bounds are important in characterizing the error-detecting and -correcting capabilities of codes. Solid burst errors are common in several communication channels. This paper obtains lower and upper bounds on the number of parity-check digits required for linear codes capable of correcting any solid burst error of length b or less while simultaneously detecting any solid burst error of length s (> b) or less. An illustration of such a code is also provided.
Keywords: Parity check matrix, Syndromes, Solid burst errors, Standard array
DOI: 10.18495/comengapp.21.143150
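The syndrome-counting argument behind such bounds can be illustrated for the correction-only case: a code corrects all solid bursts of length b or less exactly when their syndromes are nonzero and pairwise distinct. A minimal binary sketch (helper names are illustrative, not from the paper):

```python
import numpy as np

def solid_bursts(n, b):
    """Yield every binary solid burst of length 1..b in a word of length n:
    a solid burst of length L is a run of L consecutive 1s (errors)."""
    for L in range(1, b + 1):
        for start in range(n - L + 1):
            e = np.zeros(n, dtype=int)
            e[start:start + L] = 1
            yield e

def corrects_solid_bursts(H, b):
    """A linear code with parity-check matrix H corrects all solid bursts
    of length <= b iff their syndromes are nonzero and pairwise distinct."""
    syn = [tuple(H @ e % 2) for e in solid_bursts(H.shape[1], b)]
    zero = (0,) * H.shape[0]
    return zero not in syn and len(set(syn)) == len(syn)

# The [7, 4] Hamming code corrects solid bursts of length 1 (single errors)
# but cannot handle length 2 as well: 7 + 6 = 13 bursts exceed the
# 2^3 - 1 = 7 nonzero syndromes its three parity checks provide.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
print(corrects_solid_bursts(H, 1), corrects_solid_bursts(H, 2))  # True False
```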


2021 ◽  
Vol 4 (9(112)) ◽  
pp. 46-53
Author(s):  
Viktor Durcek ◽  
Michal Kuba ◽  
Milan Dado

This paper investigates the construction of random-structure LDPC (low-density parity-check) codes using the Progressive Edge-Growth (PEG) algorithm and two proposed algorithms for removing short cycles (the CB1 and CB2 algorithms; CB stands for Cycle Break). Progressive Edge-Growth is an algorithm for the computer-based design of random-structure LDPC codes, whose role is to generate a Tanner graph (a bipartite graph representing the parity-check matrix of an error-correcting channel code) with as few short cycles as possible. Short cycles in the Tanner graphs of LDPC codes, especially the shortest ones with a length of 4 edges, can degrade the performance of the decoding algorithm, because after a certain number of decoding iterations the information sent through their edges is no longer independent. The main contribution of this paper is the unique approach to removing short cycles taken by the CB2 algorithm, which erases edges from the code's parity-check matrix without decreasing the minimum Hamming distance of the code. The two cycle-removing algorithms can be used to improve the error-correcting performance of PEG-generated (or any other) LDPC codes, and the achieved results are provided. These algorithms were used to create a PEG LDPC code that rivals the best-known PEG-generated LDPC code with similar parameters provided by one of the founders of LDPC codes. The methods for generating the mentioned error-correcting codes are described, along with simulations comparing the error-correcting performance of the original codes generated by the PEG algorithm, the PEG codes processed by either the CB1 or CB2 algorithm, and the external PEG code published by one of the founders of LDPC codes.
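The 4-cycle problem described above can be made concrete: in a binary parity-check matrix, check-node rows i and j lie on a length-4 cycle whenever they share 1s in two or more columns, which the off-diagonal entries of H·Hᵀ reveal directly. A minimal detection sketch (not the PEG, CB1 or CB2 algorithm itself):

```python
import numpy as np

def count_4cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check matrix H:
    rows i and j contribute one 4-cycle for every *pair* of columns in
    which both rows contain a 1, i.e. C(A[i, j], 2) cycles per row pair,
    where A = H @ H.T counts the shared columns."""
    A = H @ H.T
    total = 0
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            total += A[i, j] * (A[i, j] - 1) // 2
    return int(total)

# Rows 0 and 1 share columns 0 and 1, forming exactly one 4-cycle:
print(count_4cycles(np.array([[1, 1, 0],
                              [1, 1, 1]])))  # 1
```

A cycle-removing step would then target the edges (1-entries) counted here, ideally without lowering the code's minimum distance, as the CB2 algorithm aims to do.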


2021 ◽  
Author(s):  
Surdive Atamewoue Tsafack

This chapter presents some new perspectives in the field of coding theory. Notions of fuzzy sets and hyperstructures, considered here as non-classical structures, are used in the construction of linear codes, just as fields and rings are used classically. We study the properties of these classes of codes using well-known notions such as the orthogonal of a code, the generating matrix, the parity-check matrix and polynomials. In some cases, particularly for linear codes constructed over a Krasner hyperfield, we compare them with codes constructed over a finite field (called here classical structures), and we find that linear codes constructed over a Krasner hyperfield have more code words for the same parameters.
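For the classical finite-field case that the chapter compares against, the orthogonality relation between a generating matrix and a parity-check matrix can be verified directly. A minimal sketch using the binary [7, 4] Hamming code (an illustrative example, not taken from the chapter):

```python
import numpy as np

# Generator matrix G = [I | P] and parity-check matrix H = [P^T | I]
# of the binary [7, 4] Hamming code (the "classical structure" case).
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Every codeword is orthogonal to every row of H over GF(2),
# i.e. G @ H.T vanishes mod 2 -- H generates the dual (orthogonal) code.
print((G @ H.T % 2 == 0).all())  # True
```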

