On the Failing Cases of the Johnson Bound for Error-Correcting Codes

10.37236/779 ◽  
2008 ◽  
Vol 15 (1) ◽  
Author(s):  
Wolfgang Haas

A central problem in coding theory is to determine $A_q(n,2e+1)$, the maximal cardinality of a $q$-ary code of length $n$ correcting up to $e$ errors. When $e$ is fixed and $n$ is large, the best upper bound for $A(n,2e+1)$ (the binary case) is the well-known Johnson bound from 1962. This bound, however, simply reduces to the sphere-packing bound if a Steiner system $S(e+1,2e+1,n)$ exists. Although no such system is known for any $e\geq 5$, such systems may exist for a set of values of $n$ with positive density. Therefore, in these cases no non-trivial numerical upper bounds for $A(n,2e+1)$ are known. In this paper the author demonstrates a technique for upper-bounding $A_q(n,2e+1)$ which closes this gap in coding theory. The author extends his earlier work on the system of linear inequalities satisfied by the number of elements of certain codes lying in $k$-dimensional subspaces of the Hamming space. The method suffices to give the first proof that the difference between the sphere-packing bound and $A_q(n,2e+1)$ approaches infinity with increasing $n$ whenever $q$ and $e\geq 2$ are fixed. A similar result holds for $K_q(n,R)$, the minimal cardinality of a $q$-ary code of length $n$ and covering radius $R$. Moreover, the author presents a new bound for $A(n,3)$, giving for instance $A(19,3)\leq 26168$.
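For comparison, the sphere-packing bound mentioned above is straightforward to evaluate; a minimal sketch in Python (a direct computation, not the paper's method):

```python
from math import comb

def sphere_packing_bound(n: int, e: int, q: int = 2) -> int:
    """Hamming (sphere-packing) upper bound on A_q(n, 2e+1):
    q^n divided by the volume of a Hamming ball of radius e."""
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))
    return q ** n // ball

# Binary case n = 19, e = 1 (minimum distance 3):
print(sphere_packing_bound(19, 1))  # 26214
```

For $n=19$, $e=1$ this gives $26214$, which the paper's new bound sharpens to $A(19,3)\leq 26168$.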

Author(s):  
Issam Abderrahman Joundan ◽  
Said Nouh ◽  
Mohamed Azouazi ◽  
Abdelwahed Namir

<span>BCH codes represent an important class of cyclic error-correcting codes; their minimum distances are known only in some cases, and determining them remains an open NP-hard problem in coding theory, especially for large lengths. This paper presents an efficient scheme, ZSSMP (Zimmermann Special Stabilizer Multiplier Permutation), to find the true value of the minimum distance for many large BCH codes. The proposed method consists in searching for a minimum-weight codeword with the Zimmermann algorithm in the subcodes fixed by special stabilizer multiplier permutations. These few subcodes have very small dimensions compared to the dimension of the code itself, so the search for a codeword of globally minimum weight is simplified in terms of run-time complexity. ZSSMP is validated on all BCH codes of length 255, for which it gives the exact value of the minimum distance. For BCH codes of length 511, the proposed technique considerably outperforms the well-known scheme of Canteaut and Chabaud used to attack code-based public-key cryptosystems. ZSSMP is very fast and finds the smallest-weight codewords in a few seconds. By exploiting the efficiency and speed of ZSSMP, the true minimum distances, and consequently the error-correcting capability, of the whole set of 165 BCH codes of length up to 1023 are determined, except for the BCH(511,148) and BCH(511,259) codes. The comparison of ZSSMP with other powerful methods demonstrates its effectiveness in attacking the hardness of the minimum-weight search problem, at least for the codes studied in this paper.</span>
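The brute-force baseline that such schemes improve on enumerates all $2^k$ codewords, which is only feasible for small dimensions; a minimal sketch on the small [7,4] Hamming code (an illustration of the naive search, not the paper's ZSSMP method):

```python
import itertools

# Generator matrix of the [7,4,3] Hamming code (rows as bit-tuples)
G = [
    (1, 0, 0, 0, 0, 1, 1),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 1, 1, 0),
    (0, 0, 0, 1, 1, 1, 1),
]

def min_distance(G):
    """Exhaustive minimum-weight search over all 2^k codewords."""
    k, n = len(G), len(G[0])
    best = n
    for msg in itertools.product((0, 1), repeat=k):
        cw = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                cw = [a ^ b for a, b in zip(cw, row)]
        w = sum(cw)
        if 0 < w < best:
            best = w
    return best

print(min_distance(G))  # 3
```

The $2^k$ cost explains why, e.g., BCH(511,259) is out of reach for exhaustive search and dedicated schemes are needed.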


Author(s):  
Rohitkumar R Upadhyay

Abstract: Hamming codes are the first nontrivial family of error-correcting codes that can correct one error in a block of binary symbols. In this paper we extend the notion of error correction to error reduction and present several decoding methods with the goal of improving the error-reducing capabilities of Hamming codes. First, the error-reducing properties of Hamming codes with standard decoding are demonstrated and explored. We show a lower bound on the average number of errors present in a decoded message when two errors are introduced by the channel, for general Hamming codes. Other decoding algorithms are investigated experimentally, and it is found that these algorithms improve the error-reduction capabilities of Hamming codes beyond the aforementioned lower bound of standard decoding. Keywords: coding theory, Hamming codes, Hamming distance
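The two-error behaviour analysed above can be reproduced on the [7,4] Hamming code with standard syndrome decoding; a minimal sketch (the error positions are illustrative, not taken from the paper):

```python
# Parity-check matrix of the [7,4] Hamming code: column i is the
# binary representation of i+1, so the syndrome names the error position.
H_cols = [tuple((j >> b) & 1 for b in range(3)) for j in range(1, 8)]

def syndrome(word):
    s = [0, 0, 0]
    for i, bit in enumerate(word):
        if bit:
            s = [a ^ b for a, b in zip(s, H_cols[i])]
    return s

def decode(word):
    """Standard decoding: flip the single bit the syndrome points at."""
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]  # 0 means no error detected
    word = list(word)
    if pos:
        word[pos - 1] ^= 1
    return word

received = [0] * 7                 # the all-zero codeword was sent...
received[1] ^= 1; received[4] ^= 1  # ...and the channel flips two bits
decoded = decode(received)
print(sum(decoded))  # 3: standard decoding turned two errors into three
```

Because the decoder assumes a single error, two channel errors are "corrected" onto a third position, landing on a wrong codeword at distance 3; this is the miscorrection effect that error-reducing decoders aim to soften.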


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
You Gao ◽  
Gang Wang

The sphere-packing bound, Singleton bound, Wang-Xing-Safavi-Naini bound, Johnson bound, and Gilbert-Varshamov bound on the subspace codes $(n+l, M, d, (m,1))_q$ based on subspaces of type $(m,1)$ in the singular linear space $\mathbb{F}_q^{(n+l)}$ over the finite field $\mathbb{F}_q$ are presented. Then, we prove that codes based on subspaces of type $(m,1)$ in singular linear space attain the Wang-Xing-Safavi-Naini bound if and only if they are certain Steiner structures in $\mathbb{F}_q^{(n+l)}$.


2012 ◽  
Vol 85 (2) ◽  
Author(s):  
A. Ramezanpour ◽  
R. Zecchina

Author(s):  
Mohammed Ahmed Magzoub ◽  
Azlan Abd Aziz ◽  
Mohammed Ahmed Salem ◽  
Hadhrami Ab Ghani ◽  
Azlina Abdul Aziz ◽  
...  

Despite the rapid growth in market demand for wireless sensor networks (WSNs), they are far from secure or efficient: WSNs are vulnerable to malicious attacks and consume too much power. At the same time, security threats have increased significantly with the growth of the many applications that employ wireless sensor networks. Introducing physical layer security is therefore considered a promising way to mitigate these threats. This paper evaluates popular coding techniques such as Reed-Solomon (RS) codes and scrambled error-correcting codes, specifically in terms of the security gap. The security gap is defined as the difference between the signal-to-noise ratio (SNR) required by the legitimate receiver and that available to the eavesdropper. We investigate the security gap, energy efficiency, and bit error rate of RS and scrambled t-error-correcting codes for wireless sensor networks. Lastly, the energy efficiency of RS and Bose-Chaudhuri-Hocquenghem (BCH) codes is also studied. The simulation results show that the RS technique achieves a security gap similar to that of scrambled error-correcting codes. However, the analysis concludes that the computational complexity of RS is lower than that of the scrambled error-correcting codes. We also found that the BCH code is more energy-efficient than RS.
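As a rough illustration of the security-gap metric (not the paper's simulation setup), the following sketch computes the gap for an uncoded BPSK baseline over AWGN, assuming hypothetical reliability targets of BER ≤ 10⁻⁵ for the legitimate receiver and BER ≥ 0.49 for the eavesdropper:

```python
from math import erfc, sqrt

def ber_bpsk(snr_db: float) -> float:
    """Bit error rate of uncoded BPSK over AWGN at the given Eb/N0 in dB."""
    return 0.5 * erfc(sqrt(10 ** (snr_db / 10)))

def snr_for_ber(target: float, lo: float = -40.0, hi: float = 20.0) -> float:
    """Smallest Eb/N0 (dB) achieving BER <= target, found by bisection
    (BER decreases monotonically with SNR)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if ber_bpsk(mid) <= target:
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical targets: legitimate receiver needs BER <= 1e-5,
# while the eavesdropper must be kept at BER >= 0.49.
gap_db = snr_for_ber(1e-5) - snr_for_ber(0.49)
print(round(gap_db, 1))
```

The uncoded baseline leaves a gap of tens of dB; shrinking this gap is precisely what the coded and scrambled schemes compared in the paper aim to achieve.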


Author(s):  
Ibrahim A. A.

Finite fields are among the most widely used algebraic structures today, due to their applications in cryptography, coding theory, and error-correcting codes, among other areas. This paper reports the use of the extended Euclidean algorithm in computing the greatest common divisor (gcd) of Aunu binary polynomials of cardinality seven. Each class of the polynomials is permuted into pairs until all the succeeding classes are exhausted. The findings of this research reveal that the gcds of most of the pairs of the permuted classes are relatively prime. These results can be used further in constructing cryptographic architectures for the design of strong encryption schemes.
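The gcd computation described above can be sketched for generic binary polynomials over GF(2); a minimal version of the Euclidean step (the example polynomials are illustrative, not the Aunu polynomials studied in the paper, and the extended variant would additionally track Bézout coefficients):

```python
def poly_divmod(a: int, b: int):
    """Divide binary polynomials over GF(2); ints with bit i = coeff of x^i."""
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift      # record quotient term x^shift
        a ^= b << shift      # subtract (= XOR) b * x^shift
    return q, a

def poly_gcd(a: int, b: int) -> int:
    """Euclidean algorithm for polynomials over GF(2)."""
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return a

# x^3 + x + 1 (0b1011) and x^3 + x^2 + 1 (0b1101) are both irreducible,
# so their gcd is 1: the pair is relatively prime.
print(poly_gcd(0b1011, 0b1101))  # 1
```

A gcd of 1, as for the pair above, is what the abstract reports for most permuted pairs of the studied polynomials.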


2019 ◽  
Author(s):  
Cooper A. Smout ◽  
Matthew F. Tang ◽  
Marta I. Garrido ◽  
Jason B. Mattingley

Abstract: The human brain is thought to optimise the encoding of incoming sensory information through two principal mechanisms: prediction uses stored information to guide the interpretation of forthcoming sensory events, and attention prioritises these events according to their behavioural relevance. Despite the ubiquitous contributions of attention and prediction to various aspects of perception and cognition, it remains unknown how they interact to modulate information processing in the brain. A recent extension of predictive coding theory suggests that attention optimises the expected precision of predictions by modulating the synaptic gain of prediction error units. Since prediction errors code for the difference between predictions and sensory signals, this model would suggest that attention increases the selectivity for mismatch information in the neural response to a surprising stimulus. Alternative predictive coding models propose that attention increases the activity of prediction (or 'representation') neurons, and would therefore suggest that attention and prediction synergistically modulate selectivity for feature information in the brain. Here we applied multivariate forward encoding techniques to neural activity recorded via electroencephalography (EEG) as human observers performed a simple visual task, to test for the effect of attention on both mismatch and feature information in the neural response to surprising stimuli. Participants attended or ignored a periodic stream of gratings, the orientations of which could be either predictable, surprising, or unpredictable. We found that surprising stimuli evoked neural responses that were encoded according to the difference between predicted and observed stimulus features, and that attention facilitated the encoding of this type of information in the brain.
These findings advance our understanding of how attention and prediction modulate information processing in the brain, and support the theory that attention optimises precision expectations during hierarchical inference by increasing the gain of prediction errors.

