An illegal user model of a communication channel with error correction codes

2021 ◽  
Vol 2131 (2) ◽  
pp. 022119
Author(s):  
N S Mogilevskaya ◽  
V V Dolgov

Abstract The situation in which an illegal user intercepts data from the communication channel between two legal users is considered. The data in the legal channel are protected from distortion by error correction codes. It is assumed that the intercept channel may be noisier than the legal channel, so the interceptor receives degraded data. The possibility of an illegal user applying special types of error-correction decoders to restore the damaged data is discussed. A model of the interceptor is constructed. The model includes a receiving block for the intercept channel as well as several auxiliary blocks: a memory device, a block that estimates the quality of communication in the legal channel, and databases of possible decoders and their parameters. Examples illustrating the potential capabilities of the interceptor are given. They demonstrate the difference in decoding quality between different decoders for LDPC codes and Reed-Muller codes, and show that an illegal user equipped with the various decoders described in the open literature can recover information of satisfactory quality even at a weak signal level in the intercept channel. The constructed model can be useful when developing methods of protection against intruders who set up illegitimate data interception channels.
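The effect described above — different decoders recovering different amounts of data from the same noisy tap — can be illustrated with a minimal sketch. The (3,1) repetition code, the binary symmetric channel, and the noise level are illustrative assumptions, not the paper's setup:

```python
import random

def transmit_bsc(bits, p, rng):
    """Flip each bit independently with probability p (binary symmetric channel)."""
    return [b ^ (rng.random() < p) for b in bits]

def encode_rep3(bits):
    """(3,1) repetition code: send each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_majority(received):
    """Strong decoder: majority vote over all three copies."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

def decode_naive(received):
    """Weak decoder: looks only at the first copy of each bit."""
    return [received[i] for i in range(0, len(received), 3)]

rng = random.Random(42)
message = [rng.randint(0, 1) for _ in range(10000)]
received = transmit_bsc(encode_rep3(message), 0.2, rng)  # noisy intercept channel

ber = lambda a, b: sum(x != y for x, y in zip(a, b)) / len(a)
ber_majority = ber(message, decode_majority(received))
ber_naive = ber(message, decode_naive(received))
assert ber_majority < ber_naive  # a better decoder recovers more from the same tap
```

With crossover probability 0.2, the naive decoder's bit error rate stays near 0.2 while majority voting drops it to roughly 3p²(1−p)+p³ ≈ 0.10 — the gap the interceptor model exploits by choosing its decoder.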

2013 ◽  
Vol 760-762 ◽  
pp. 96-100
Author(s):  
Zong Li Lai ◽  
Wen Tao Xu

As society develops, the dissemination of information plays an increasingly significant role. Continually reducing the error rate, enhancing the quality of communication, and building highly reliable, efficient, high-speed broadband communication systems is a demanding task. Forward error correction (FEC), a particular family of error correction codes, was introduced to protect data during transmission. In addition to a brief introduction to FEC, this article covers the categories of FEC codes and their applications, compares them, and describes the latest developments in these error correction algorithms.
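As a concrete FEC example (not drawn from the article itself), a minimal Hamming(7,4) encoder/decoder shows the basic mechanism: redundant parity bits let the receiver locate and correct a single flipped bit:

```python
# Hamming(7,4): 4 data bits + 3 parity bits; corrects any single-bit error.
def hamming74_encode(d):  # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # bit positions 1..7

def hamming74_decode(c):
    # Each syndrome bit re-checks one parity equation; the syndrome, read as
    # a binary number, is the 1-based position of the flipped bit (0 = none).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    c = list(c)
    if pos:
        c[pos - 1] ^= 1  # correct the single error
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                              # channel flips one bit
assert hamming74_decode(word) == [1, 0, 1, 1]
```

The same locate-and-flip principle, scaled up, underlies the block FEC families the article surveys.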


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jianguo Xie ◽  
Han Wu ◽  
Chao Xia ◽  
Peng Ding ◽  
Helun Song ◽  
...  

Abstract Semiconductor superlattice secure key distribution (SSL-SKD) has been experimentally demonstrated as a novel scheme for generating and agreeing on identical keys with unconditional security over a public channel alone. Error correction is introduced in the information reconciliation procedure to eliminate the inevitable differences between the analog systems in SSL-SKD. Nevertheless, error correction has been shown to be the performance bottleneck of information reconciliation because of its high computational complexity; hence, it determines the final secure key throughput of SSL-SKD. In this paper, several frequently used error correction codes, including BCH codes, LDPC codes, and Polar codes, are optimized separately to raise their performance and make them usable in practice. First, we employ multi-threading to support multi-codeword decoding for BCH and Polar codes and updated-value calculation for LDPC codes. Additionally, we construct lookup tables to eliminate redundant calculations, such as logarithm and antilogarithm tables for finite field computation. Our experimental results reveal that the proposed optimization methods significantly improve the efficiency of SSL-SKD: all three error correction codes reach throughput on the order of Mbps and provide a minimum secure key rate of 99%.
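The logarithm/antilogarithm lookup-table idea mentioned above can be sketched as follows. The field GF(2⁸) and the reduction polynomial 0x11B (with generator 3) are illustrative assumptions — the paper does not specify its finite-field parameters:

```python
# Log/antilog tables over GF(2^8), reduction polynomial 0x11B, generator 3
# (assumed parameters). Multiplication becomes two table lookups plus an
# integer addition instead of a bitwise polynomial product.
EXP = [0] * 512   # doubled length so EXP[a + b] needs no mod-255 reduction
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x ^= x << 1          # multiply by the generator 3 = x * 2 XOR x
    if x & 0x100:
        x ^= 0x11B       # reduce modulo the field polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """GF(2^8) product via lookup tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_mul_slow(a, b):
    """Reference bitwise (shift-and-reduce) multiplication, for checking."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

# Exhaustive check that the table-driven product matches the bitwise one.
assert all(gf_mul(a, b) == gf_mul_slow(a, b)
           for a in range(256) for b in range(256))
```

Trading the inner reduction loop for table lookups is exactly the kind of redundant-calculation elimination the paper applies to its BCH and Polar decoders.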


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Haiteng Zhang ◽  
Zhiqing Shao ◽  
Hong Zheng ◽  
Jie Zhai

In early service transactions, quality of service (QoS) information was published by the service provider and was not always true or credible, so the trustworthiness of the QoS information provided for a Web service needs verification. In this paper, factual QoS runtime data are collected by our WS-QoS measurement tool. Based on these objective data, an algorithm compares the offered and measured quality data of the service and yields a similarity value, and a reputation evaluation method then computes the reputation level of the Web service from that similarity. An initial implementation and an experiment with three example Web services show that this approach is feasible and that the resulting values can serve as references for subsequent consumers when selecting a service.
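A hypothetical sketch of the comparison step might look like this; the attribute names, weights, and reputation thresholds are invented for illustration and are not the paper's actual algorithm:

```python
# Offered vs. measured QoS vectors reduced to a similarity in [0, 1], then
# mapped to a discrete reputation level (all parameters are assumptions).
def similarity(offered, measured, weights):
    """Weighted relative closeness of measured QoS values to the offered ones."""
    score = 0.0
    for attr, w in weights.items():
        o, m = offered[attr], measured[attr]
        score += w * max(0.0, 1.0 - abs(o - m) / max(o, m))
    return score / sum(weights.values())

def reputation_level(sim):
    """Map a similarity value to a discrete reputation level."""
    if sim >= 0.9:
        return "high"
    if sim >= 0.7:
        return "medium"
    return "low"

offered  = {"response_ms": 100, "availability": 0.999}   # provider's claim
measured = {"response_ms": 120, "availability": 0.990}   # tool's observation
weights  = {"response_ms": 0.6, "availability": 0.4}
sim = similarity(offered, measured, weights)
```

A consumer would then prefer services whose measured behaviour stays close to the published claim, i.e. whose reputation level remains high over repeated measurements.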


2020 ◽  
Vol 226 ◽  
pp. 02006
Author(s):  
Jan Broulím ◽  
Alexander Ayriyan ◽  
Hovik Grigorian

Error correction plays a crucial role when transmitting data from a source to a destination through a noisy channel. It has found many applications in television broadcasting services, data transmission in radiation-harsh environments (e.g. space probes or physics experiments), and memory storage affected by Single Event Effects (SEE). Low-Density Parity-Check (LDPC) codes provide an important technique for correcting these errors. The error correction performance depends both on the decoding algorithm and on the LDPC code given by its parity-check matrix; a careful design of the parity-check matrix is therefore necessary. Moreover, the development of high-performance computing has made it possible to apply genetic optimization algorithms to the design of parity-check matrices. In this article, we present the application of a genetic optimization algorithm to produce error correcting codes with special properties, in particular resistance to burst errors. The results show the bounds of the correction capabilities for various code lengths and various redundancies of LDPC codes. This is particularly useful when designing systems that must operate under noise with the help of error correction codes.
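The basic objects involved can be sketched as follows: a toy parity-check matrix, the syndrome test that defines membership in the code, and a burst-error fitness function of the kind a genetic search might use. All of it is an illustrative assumption, not the authors' implementation:

```python
# A toy parity-check matrix H (illustrative, not from the paper);
# c is a codeword of the code iff every parity check H_i . c = 0 (mod 2).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(H, c):
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

def is_codeword(H, c):
    return not any(syndrome(H, c))

# In a genetic search over parity-check matrices, a fitness function might
# simulate burst errors and count how many bursts a given decoder repairs.
def burst_fitness(H, codewords, bursts, decode):
    repaired = 0
    for c in codewords:
        for start, length in bursts:
            noisy = list(c)
            for i in range(start, min(start + length, len(noisy))):
                noisy[i] ^= 1                 # contiguous run of bit flips
            repaired += decode(H, noisy) == c
    return repaired

assert is_codeword(H, [1, 1, 0, 0, 1, 1])
assert not is_codeword(H, [1, 1, 0, 0, 1, 0])
```

A genetic algorithm would mutate and recombine candidate matrices H, keeping those with the highest burst fitness — which is how the special burst-resistance property is bred into the code.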


2012 ◽  
Vol 26 (20) ◽  
pp. 1250118
Author(s):  
YUAN LI ◽  
MANTAO XU ◽  
YINKUO MENG ◽  
YING GUO

The graphical approach provides a direct way to construct error correction codes. Motivated by its good properties and its association with low-density parity-check (LDPC) codes, in this paper we present families of graphical quantum LDPC codes that contain no cycles of girth four. Because of the fast construction algorithm for graphical codes, the proposed quantum codes have lower encoding complexity.
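The girth-four condition has a simple matrix formulation: a Tanner graph contains a 4-cycle exactly when two rows of the parity-check matrix share ones in at least two columns. A minimal check, with toy matrices rather than the paper's codes:

```python
from itertools import combinations

def has_girth_four(H):
    """True if the Tanner graph of parity-check matrix H contains a 4-cycle,
    i.e. some pair of rows has ones in two or more common columns."""
    for r1, r2 in combinations(H, 2):
        shared = sum(a & b for a, b in zip(r1, r2))
        if shared >= 2:
            return True
    return False

H_bad = [[1, 1, 0, 0],
         [1, 1, 0, 1]]       # rows share columns 0 and 1 -> 4-cycle
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 1, 0]]      # any two rows overlap in at most one column
assert has_girth_four(H_bad) and not has_girth_four(H_good)
```

Avoiding 4-cycles matters because short cycles make the messages in iterative (belief-propagation) decoding correlated, degrading its performance.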


1998 ◽  
Vol 3 (5) ◽  
pp. 8-10
Author(s):  
Robert L. Knobler ◽  
Charles N. Brooks ◽  
Leon H. Ensalada ◽  
James B. Talmage ◽  
Christopher R. Brigham

Abstract The author of the two-part article about evaluating reflex sympathetic dystrophy (RSD) responds to criticisms that a percentage impairment score may not adequately reflect the disability of an individual with RSD. The author highlights the importance of recognizing the difference between impairment and disability in the AMA Guides to the Evaluation of Permanent Impairment (AMA Guides): impairment is the loss, loss of use, or derangement of any body part, system, or function; disability is a decrease in or the loss or absence of the capacity to meet personal, social, or occupational demands, or to meet statutory or regulatory requirements, because of an impairment. The disparity between impairment and disability can be encountered in diverse clinical scenarios. For example, a person's ability to resume occupational activities following a major cardiac event depends on medical, social, and psychological factors, but nonmedical factors appear to present the greatest impediment, and many persons do not resume work despite significant improvements in functional capacity. A key requirement according to the AMA Guides is objective documentation, and the author agrees that when physicians evaluate disability, more issues than the percentage loss of function should be considered. More study of the relationships among impairment, disability, and quality of life in patients with RSD is required.


2020 ◽  
Vol 7 (2) ◽  
pp. 34-41
Author(s):  
VLADIMIR NIKONOV ◽  
ANTON ZOBOV ◽  

The construction and selection of a suitable bijective function, that is, a substitution, is becoming an important applied task, particularly for building block encryption systems. Many articles have suggested different approaches to assessing the quality of a substitution, but most of them are computationally very complex. Solving this problem would significantly expand the range of methods for constructing and analyzing schemes in information protection systems. The purpose of this research is to find easily measurable characteristics of substitutions that allow their quality to be evaluated, as well as measures of the proximity of a particular substitution to a random one, or of its distance from it. To this end, several characteristics were proposed in this work, a difference characteristic and a polynomial characteristic; their mathematical expectations were found, as well as the variance of the difference characteristic. This allows a conclusion about quality to be drawn by comparing the value of the characteristic computed for a particular substitution with the calculated mathematical expectation. From a computational point of view, the results of the article are of particular interest because of the simplicity of the algorithm for quantifying the quality of a bijective substitution: calculating the difference characteristic amounts to a simple summation of integer terms over a fixed, small range. Such an operation, on both current and prospective hardware, maps directly onto the logic of a wide range of functional elements, especially when computations are implemented in the optical domain or on other carriers related to nanotechnology.
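The article's exact difference characteristic is not reproduced here, but the idea of comparing a computed characteristic against its expectation over random substitutions can be sketched with one plausible variant, D(s) = Σₓ ((s(x) − x) mod n). The formula and the Monte Carlo estimate of the expectation are illustrative assumptions:

```python
import random

def difference_characteristic(s):
    """One plausible difference-style characteristic of a substitution s on
    {0, ..., n-1}: the sum of (s(x) - x) mod n over all inputs x (assumed
    form, not necessarily the article's definition)."""
    n = len(s)
    return sum((s[x] - x) % n for x in range(n))

n = 16
identity = list(range(n))            # a structurally "bad" substitution: D = 0

# Empirical expectation of D over random permutations (Monte Carlo stand-in
# for the closed-form expectation derived in the article).
rng = random.Random(1)
samples = []
for _ in range(2000):
    p = list(range(n))
    rng.shuffle(p)
    samples.append(difference_characteristic(p))
mean = sum(samples) / len(samples)

# A substitution far from the random-permutation mean is flagged as
# structurally non-random; the identity sits at the extreme D = 0.
assert difference_characteristic(identity) == 0
assert mean > n
```

Note that the evaluation itself is just an integer summation over a small fixed range, matching the article's point about hardware-friendly simplicity.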


Author(s):  
V. Dumych ◽  

The purpose of the research: to improve the technology of growing flax in the Western region of Ukraine through the introduction of minimum-tillage systems, which will increase the yield of flax trust and seeds. Research methods: field, laboratory, visual, and comparative-calculation methods. Research results: The field experiments included the study of three tillage systems (traditional, conservation, and mulching) and determined their impact on growth, development, and the yields of flax trust and seeds. The traditional tillage system included the following operations: plowing with a reversible plow to a depth of 27 cm, cultivation with simultaneous harrowing, and pre-sowing tillage. The conservation system is based on deep non-moldboard loosening of the soil and included chiseling to a depth of 40 cm, disking to a depth of 15 cm, cultivation with simultaneous harrowing, and pre-sowing tillage. Under the mulching system, disking to a depth of 15 cm and cultivation with simultaneous harrowing and pre-sowing tillage with a combined unit were carried out. The following tillage implements and machines were used: disc harrow BDVP-3.6, reversible plow PON-5/4, chisel PCh-3, cultivator KPSP-4, and pre-sowing tillage unit LK-4. The SZ-3.6 ASTPA grain seeder was used for sowing long-fiber flax of the Kamenyar variety. Simultaneously with the sowing of the flax seeds, mineral fertilizer (nitroammophoska, 2 c/ha) was applied locally. Conservation tillage yielded flax trust at the level of 3.5 t/ha, which is 0.4 t/ha (12.9%) more than on the traditionally tilled area and 0.7 t/ha (25%) more than with mulching. On the conservation-tillage plot the seed yield was also the highest, at 0.64 t/ha; the difference from traditional and mulching tillage reaches 0.06 t/ha (10.3%) and 0.10 t/ha (18.5%), respectively. Conclusions.
Conservation tillage, based on non-moldboard loosening to a depth of 40 cm and disking to a depth of 15 cm, has a positive effect on plant growth and development and on the yield and quality of flax.

