Asymptotic bounds for spherical codes

Author(s):  
Yurii Ivanovich Manin ◽  
Matilde Marcolli

The set of all error-correcting codes $C$ over a fixed finite alphabet $\mathbf{F}$ of cardinality $q$ determines a set of code points in the unit square $[0,1]^2$ with coordinates $(R(C), \delta(C)) :=$ (relative transmission rate, relative minimal distance). The central problem of the theory of such codes consists in simultaneously maximising the transmission rate of the code and the relative minimum Hamming distance between two different code words. The classical approach to this problem, explored in a vast literature, consists in inventing explicit constructions of "good codes" and comparing new classes of codes with earlier ones. A less classical approach studies the geometry of the whole set of code points $(R,\delta)$ (with $q$ fixed), at first independently of its computability properties, and only afterwards turning to problems of computability, analogies with statistical physics, etc. The main purpose of this article is to extend this latter strategy to the domain of spherical codes. Bibliography: 14 titles.
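For a concrete reading of these coordinates: if $C \subset \mathbf{F}^n$ is a block code of length $n$, the standard definitions are $R(C) = \log_q|C|/n$ and $\delta(C) = d(C)/n$, where $d(C)$ is the minimum Hamming distance between distinct codewords. A minimal Python sketch (the repetition-code example is ours, not from the article) computes the code point of an explicit code:

```python
from itertools import combinations
from math import log

def code_point(code, q):
    """Map a block code C over an alphabet of size q to its code point
    (R, delta) = (log_q|C| / n, d(C) / n), where d(C) is the minimum
    Hamming distance between distinct codewords."""
    n = len(code[0])
    d = min(sum(a != b for a, b in zip(u, v))
            for u, v in combinations(code, 2))
    return log(len(code), q) / n, d / n

# Example: the binary length-3 repetition code {000, 111}
print(code_point(["000", "111"], q=2))   # (0.333..., 1.0)
```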


2019 ◽  
pp. 372-380
Author(s):  
Yurii Gorbenko ◽  
Anastasiia Kiian ◽  
Andriy Pushkar’ov ◽  
Oleksandr Korneiko ◽  
Serhii Smirnov ◽  
...  

In this paper the basic principles of construction and operation of the McEliece and Niederreiter cryptosystems, which are based on error-correcting codes, are considered. A new hybrid cryptosystem that combines the encryption rules of the two above-mentioned schemes is proposed. The paper also presents analysis and comparative studies of the proposed scheme against the McEliece and Niederreiter cryptosystems with respect to security, the size of public and private keys, ciphertext length, and relative transmission rate, both analytically and with graphical illustrations. The comparative studies reveal that the hybrid cryptosystem retains the positive aspects of its predecessors and increases the relative transmission rate while preserving resistance to classical and quantum cryptanalysis. One disadvantage is an increase in decoding time caused by the additional information extracted as in the Niederreiter scheme, but the increase is not critical. Despite the demonstrated benefits, reducing the amount of key data remains an open problem for all such cryptosystems, since key sizes must still be increased to maintain security against quantum computers.
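For orientation, the two encryption rules being combined can be sketched in a few lines. This is a toy illustration with random stand-in matrices and parameters of our choosing; in the real schemes the public keys are scrambled generator and parity-check matrices of a structured code (e.g., a Goppa code), not random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, t = 7, 4, 1   # toy parameters; practical schemes use much larger codes

# Stand-ins for the public keys (G' = S*G*P and H' = M*H*P in the real schemes)
G_pub = rng.integers(0, 2, size=(k, n))
H_pub = rng.integers(0, 2, size=(n - k, n))

def random_weight_t_error(n, t):
    """Random binary vector with exactly t ones."""
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=t, replace=False)] = 1
    return e

# McEliece rule: encode the message with G' and mask it with a weight-t error.
m = rng.integers(0, 2, size=k)
c_mceliece = (m @ G_pub + random_weight_t_error(n, t)) % 2

# Niederreiter rule: the message is itself encoded as a weight-t error
# pattern, and the ciphertext is its syndrome under H'.
e = random_weight_t_error(n, t)
c_niederreiter = (H_pub @ e) % 2

print(c_mceliece, c_niederreiter)
```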



Author(s):  
Jaeho Jeong ◽  
Seong-Joon Park ◽  
Jae-Won Kim ◽  
Jong-Seon No ◽  
Ha Hyeon Jeon ◽  
...  

Abstract Motivation In DNA storage systems, there is a tradeoff between writing and reading costs. Increasing the code rate of error-correcting codes may save writing cost, but it requires more sequence reads for data retrieval. There is potentially a way to improve the sequencing and decoding processes so that the reading cost induced by this tradeoff is reduced without increasing the writing cost. In previous research, the clustering, alignment, and decoding processes were treated as separate stages, but we believe that using the information from all of these processes together may improve decoding performance. Actual DNA synthesis and sequencing experiments should be performed, because simulations cannot be relied on to cover all error possibilities in practical circumstances. Results For DNA storage systems using fountain codes and Reed-Solomon (RS) codes, we introduce several techniques to improve the decoding performance. We designed the decoding process around the cooperation of key components: Hamming-distance-based clustering, discarding of abnormal sequence reads, RS error correction as well as detection, and quality-score-based ordering of sequences. We synthesized 513.6 KB of data into DNA oligo pools and sequenced the data successfully with an Illumina MiSeq instrument. Compared to Erlich's research, the proposed decoding method additionally incorporates sequence reads with minor errors which had previously been discarded, and was thus able to make use of 10.6–11.9% more sequence reads from the same sequencing environment, resulting in a 6.5–8.9% reduction in the reading cost. Channel characteristics, including sequence coverage and read-length distributions, are provided as well. Availability The raw data files and the source codes of our experiments are available at: https://github.com/jhjeong0702/dna-storage.
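As an illustration of the first component, here is a minimal greedy sketch of Hamming-distance-based clustering of reads. This is our simplification for exposition; the paper's actual pipeline combines clustering with read discarding, RS decoding, and quality-score ordering:

```python
def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def cluster_reads(reads, threshold=3):
    """Greedy clustering: each read joins the first cluster whose
    representative (its first read) lies within `threshold`; otherwise
    it starts a new cluster. Reads of unequal length never merge."""
    clusters = []
    for r in reads:
        for c in clusters:
            if len(r) == len(c[0]) and hamming(r, c[0]) <= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

# Toy example: two noisy copies of one oligo and one unrelated read
reads = ["ACGTACGT", "ACGTACCT", "TTTTGGGG"]
print(cluster_reads(reads, threshold=2))
# [['ACGTACGT', 'ACGTACCT'], ['TTTTGGGG']]
```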



Author(s):  
Alfredo Braunstein ◽  
Marc Mézard

Methods and analyses from statistical physics are of use not only in studying the performance of algorithms, but also in developing efficient algorithms. Here, we consider survey propagation (SP), a new approach for solving typical instances of random constraint satisfaction problems. SP has proven successful in solving random k-satisfiability (k-SAT) and random graph q-coloring (q-COL) in the "hard SAT" region of parameter space [79, 395, 397, 412], relatively close to the SAT/UNSAT phase transition discussed in the previous chapter. In this chapter we discuss the SP equations and suggest a theoretical framework for the method [429] that applies to a wide class of discrete constraint satisfaction problems. We propose a way of deriving the equations that sheds light on the capabilities of the algorithm and illustrates its differences from other well-known iterative probabilistic methods. Our approach takes into account the clustered structure of the solution space described in chapter 3, and introduces an additional "joker" value that variables can be assigned. Within a cluster, a variable can be frozen to some value, meaning that the variable takes the same value in all solutions (satisfying assignments) within the cluster. Alternatively, it can be unfrozen, meaning that it fluctuates from solution to solution within the cluster. As we will discuss, the SP equations manage to describe these fluctuations by assigning joker values to unfrozen variables. The overall algorithmic strategy is iterative and decomposes into two elementary steps, as sketched below. The first step is to evaluate the marginal probabilities of frozen variables using the SP message-passing procedure. The second step, or decimation step, is to use this information to fix the values of some variables and simplify the problem. The notion of message passing will be illustrated throughout the chapter by comparison with a simpler procedure known as belief propagation (mentioned in chapter 3 in the context of error-correcting codes), in which no assumptions are made about the structure of the solution space. The chapter is organized as follows. In section 2 we provide the general formalism, defining constraint satisfaction problems as well as the key concepts of factor graphs and cavities, using the concrete examples of satisfiability and graph coloring.
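The two-step strategy can be made concrete in a few lines. In the sketch below (ours, not the book's), exact enumeration over satisfying assignments stands in for the SP message-passing estimate of the marginals, which is only feasible for toy instances; the decimation step then freezes the most polarized variable and simplifies the formula:

```python
from itertools import product

def satisfies(assign, clause):
    """A clause is a tuple of literals (variable, sign)."""
    return any(assign[v] == s for v, s in clause)

def marginals(clauses, variables):
    """Exact marginal P(x_v = True) over satisfying assignments.
    Brute-force stand-in for the SP/BP message-passing estimate."""
    vs = sorted(variables)
    counts = {v: 0 for v in vs}
    total = 0
    for bits in product([False, True], repeat=len(vs)):
        a = dict(zip(vs, bits))
        if all(satisfies(a, c) for c in clauses):
            total += 1
            for v in vs:
                counts[v] += a[v]
    return {v: counts[v] / total for v in vs}

def simplify(clauses, var, value):
    """Decimation: drop satisfied clauses, strip falsified literals."""
    return [tuple(l for l in c if l[0] != var)
            for c in clauses if (var, value) not in c]

def sp_style_decimation(clauses, variables):
    """Alternate marginal estimation and variable fixing."""
    assignment = {}
    while variables:
        m = marginals(clauses, variables)
        v = max(variables, key=lambda u: abs(m[u] - 0.5))  # most "frozen"
        assignment[v] = m[v] >= 0.5
        clauses = simplify(clauses, v, assignment[v])
        variables = variables - {v}
    return assignment

# Tiny 3-SAT instance: (x0 v -x1 v x2) & (-x0 v x1 v x2) & (x0 v x1 v -x2)
cls = [((0, True), (1, False), (2, True)),
       ((0, False), (1, True), (2, True)),
       ((0, True), (1, True), (2, False))]
print(sp_style_decimation(cls, {0, 1, 2}))
```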



Author(s):  
Rohitkumar R Upadhyay

Abstract: Hamming codes are the first nontrivial family of error-correcting codes: they can correct one error in a block of binary symbols. In this paper we extend the notion of error correction to error reduction and present several decoding methods with the goal of improving the error-reducing capabilities of Hamming codes. First, the error-reducing properties of Hamming codes with standard decoding are demonstrated and explored. We show a lower bound on the average number of errors present in a decoded message when two errors are introduced by the channel, for general Hamming codes. Other decoding algorithms are investigated experimentally, and it is found that they improve the error-reduction capabilities of Hamming codes beyond the aforementioned lower bound of standard decoding. Keywords: coding theory, hamming codes, hamming distance
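For reference, standard syndrome decoding of the [7,4] Hamming code is the baseline against which error-reducing decoders are measured. A self-contained sketch (the paper's experimental decoders are not reproduced here):

```python
import numpy as np

# Systematic [7,4] Hamming code: 4 data bits + 3 parity bits, corrects 1 error.
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])

def encode(data):
    """Encode 4 data bits into a 7-bit codeword (arithmetic mod 2)."""
    return (np.array(data) @ G) % 2

def decode(received):
    """Standard decoding: the syndrome equals the column of H at the
    error position; flip that bit, then read off the data bits."""
    r = np.array(received).copy()
    s = (H @ r) % 2
    if s.any():
        pos = next(i for i in range(7) if np.array_equal(H[:, i], s))
        r[pos] ^= 1
    return r[:4]   # systematic code: first four bits are the data

c = encode([1, 0, 1, 1])
c[5] ^= 1                      # introduce a single channel error
print(decode(c))               # recovers [1, 0, 1, 1]
```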



2019 ◽  
Vol 116 (12) ◽  
pp. 5451-5460 ◽  
Author(s):  
Jean Barbier ◽  
Florent Krzakala ◽  
Nicolas Macris ◽  
Léo Miolane ◽  
Lenka Zdeborová

Generalized linear models (GLMs) are used in high-dimensional machine learning, statistics, communications, and signal processing. In this paper we analyze GLMs when the data matrix is random, as relevant in problems such as compressed sensing, error-correcting codes, or benchmark models in neural networks. We evaluate the mutual information (or “free entropy”) from which we deduce the Bayes-optimal estimation and generalization errors. Our analysis applies to the high-dimensional limit where both the number of samples and the dimension are large and their ratio is fixed. Nonrigorous predictions for the optimal errors existed for special cases of GLMs, e.g., for the perceptron, in the field of statistical physics based on the so-called replica method. Our present paper rigorously establishes those decades-old conjectures and brings forward their algorithmic interpretation in terms of performance of the generalized approximate message-passing algorithm. Furthermore, we tightly characterize, for many learning problems, regions of parameters for which this algorithm achieves the optimal performance and locate the associated sharp phase transitions separating learnable and nonlearnable regions. We believe that this random version of GLMs can serve as a challenging benchmark for multipurpose algorithms.
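For concreteness, data from a random-design GLM of the type analyzed here can be generated in a few lines. This is a hedged sketch with sizes of our choosing; the sign channel gives the perceptron special case mentioned in the text:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 2000, 1000                 # the analysis fixes alpha = n/d as both grow
alpha = n / d

w_star = rng.normal(size=d)               # ground-truth signal
X = rng.normal(size=(n, d)) / np.sqrt(d)  # random i.i.d. Gaussian data matrix
y = np.sign(X @ w_star)                   # perceptron channel: phi(z) = sign(z)

# The paper characterizes the Bayes-optimal error of estimating w_star
# from (X, y) in this regime, and when generalized AMP attains it.
```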





2012 ◽  
Vol 182-183 ◽  
pp. 929-932
Author(s):  
Jie Yin ◽  
Shu Yang ◽  
Chong Pan

Numerical simulations of the transmission-enhancement phenomenon in a metal-film hole array were carried out using the commercial East FDTD software. At some wavelengths the relative transmission rate exceeds 1, and we also study the influence of the metal-film thickness, the hole size, and the array period on the transmission rate. The transmission-enhancement peak decreases with increasing silver-film thickness, grows with increasing aperture, and shifts as the hole period becomes larger.



2020 ◽  
Vol 81 (4-5) ◽  
pp. 1029-1057
Author(s):  
Cassius Manuel ◽  
Arndt von Haeseler

Abstract Models of sequence evolution typically assume that all sequences are possible. However, restriction enzymes that cut DNA at specific recognition sites provide an example where carrying a recognition site can be lethal. Motivated by this observation, we studied the set of strings over a finite alphabet with taboos, that is, with prohibited substrings. The taboo-set is referred to as $\mathbb{T}$ and any allowed string as a taboo-free string. We consider the so-called Hamming graph $\varGamma_n(\mathbb{T})$, whose vertices are taboo-free strings of length $n$ and whose edges connect two taboo-free strings if their Hamming distance equals one. Any (random) walk on this graph describes the evolution of a DNA sequence that avoids taboos. We describe the construction of the vertex set of $\varGamma_n(\mathbb{T})$. Then we state conditions under which $\varGamma_n(\mathbb{T})$ and its suffix subgraphs are connected. Moreover, we provide an algorithm that determines whether all these graphs are connected for an arbitrary $\mathbb{T}$. As an application of the algorithm, we show that about $87\%$ of the bacteria listed in REBASE have a taboo-set that induces connected taboo-free Hamming graphs, because they have fewer than four type II restriction enzymes. On the other hand, four properly chosen taboos are enough to disconnect one suffix subgraph, and consequently the connectivity of taboo-free Hamming graphs can change depending on the composition of restriction sites.
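A brute-force sketch of the construction (ours; feasible only for small $n$, whereas the paper's algorithm decides connectivity for arbitrary $\mathbb{T}$): build the vertices of $\varGamma_n(\mathbb{T})$, join strings at Hamming distance one, and check connectivity by breadth-first search.

```python
from collections import deque
from itertools import product

def hamming_graph(n, alphabet, taboos):
    """Vertices: taboo-free strings of length n (no taboo as a substring).
    Edges: pairs of strings at Hamming distance exactly 1."""
    strings = (''.join(p) for p in product(alphabet, repeat=n))
    vertices = [s for s in strings if not any(t in s for t in taboos)]
    edges = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return vertices, edges

def is_connected(vertices, edges):
    """Breadth-first search from an arbitrary vertex."""
    if not vertices:
        return True
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {vertices[0]}, deque([vertices[0]])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

# Example: DNA strings of length 4 avoiding the taboo "CG"
V, E = hamming_graph(4, 'ACGT', ['CG'])
print(len(V), is_connected(V, E))
```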


