error correcting codes
Recently Published Documents


TOTAL DOCUMENTS: 1466 (FIVE YEARS: 228)
H-INDEX: 57 (FIVE YEARS: 7)

2022 ◽  
Vol 22 (3) ◽  
pp. 1-25
Author(s):  
Mohammad Saidur Rahman ◽  
Ibrahim Khalil ◽  
Xun Yi ◽  
Mohammed Atiquzzaman ◽  
Elisa Bertino

Edge computing is an emerging technology for acquiring Internet-of-Things (IoT) data and provisioning different services in connected living. Artificial Intelligence (AI)-powered edge devices (edge-AI) facilitate intelligent IoT data acquisition and services through data analytics. However, data in edge networks are prone to several security threats, such as external and internal attacks and transmission errors. Attackers can inject false data during data acquisition or modify stored data in the edge data storage to hamper data analytics. Therefore, an edge-AI device must verify the authenticity of IoT data before using them in data analytics. This article presents an IoT data authenticity model for edge-AI in connected living using data hiding techniques. Our proposed data authenticity model securely hides the data source's identification number within the IoT data before sending them to edge devices. Edge-AI devices extract the hidden information to verify data authenticity. Existing data hiding approaches for biosignals cannot reconstruct the original IoT data after extracting the hidden message (i.e., they are lossy) and are therefore unusable for IoT data authenticity. We propose the first lossless IoT data hiding technique in this article, based on error-correcting codes (ECCs). We conduct several experiments to demonstrate the performance of our proposed method. Experimental results establish the lossless property of the proposed approach while maintaining other data hiding properties.
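The core ECC-based hiding primitive can be sketched with classic [7,4] Hamming matrix embedding: three message bits (e.g., part of a source identification number) are hidden in a seven-bit cover block by flipping at most one bit so that the block's syndrome equals the message. This is a generic illustration, not the paper's scheme; in particular, lossless recovery of the cover needs extra bookkeeping (such as preserving the original syndrome) that is omitted here.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i is the binary
# representation of i+1, so a syndrome directly names a bit position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def embed(cover, msg):
    """Flip at most one bit of the 7-bit cover so its syndrome equals msg."""
    s = H @ cover % 2
    diff = s ^ msg
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]  # syndrome read as a position
    stego = cover.copy()
    if pos:                                    # pos == 0: already matches
        stego[pos - 1] ^= 1
    return stego

def extract(stego):
    """The hidden 3-bit message is simply the block's syndrome."""
    return H @ stego % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 1, 0])
stego = embed(cover, msg)
assert (extract(stego) == msg).all()
assert (stego != cover).sum() <= 1             # at most one bit changed
```

Because at most one bit changes per seven-bit block, embedding distortion stays low; the paper's contribution is a variant of such ECC-based hiding that is fully reversible for IoT data.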


F1000Research ◽  
2022 ◽  
Vol 11 ◽  
pp. 7
Author(s):  
Chinnaiyan Senthilpari ◽  
Rosalind Deena ◽  
Lee Lini

Background: Low-density parity-check (LDPC) codes are more error-resistant than other forward error-correcting codes. Existing circuits suffer from high power dissipation, lower speed, and larger area. This work aimed to propose a circuit with better design and performance, even in the presence of channel noise. Methods: In this research, the multiplexer and demultiplexer were designed using pass transistor logic. The target parameters were low power dissipation, improved throughput, and negligible delay with a minimum area. Among the essential connecting circuits in a decoder architecture are the multiplexer (MUX) and demultiplexer (DEMUX) circuits, whose design contributes significantly to the performance of the decoder. The aim of this paper was the design of a 4 × 1 MUX to route the data bits received from the bit update blocks to the parallel adder circuits, and a 1 × 4 DEMUX to receive the input bits from the parallel adder and distribute the output to the bit update blocks in a layered-architecture LDPC decoder. The design uses pass transistor logic and reduces the number of transistors used. The proposed circuit was designed using the Mentor Graphics CAD tool for 180 nm technology. Results: Power dissipation, area, and delay were considered crucial parameters for a low-power decoder. The circuits were simulated using computer-aided design (CAD) tools, and the results showed a significantly low power dissipation of 7.06 nW and 5.16 nW for the multiplexer and demultiplexer, respectively. The delay was found to be 100.5 ns (MUX) and 80 ns (DEMUX). Conclusion: This decoder's potential use may be in low-power communication circuits such as handheld devices and Internet of Things (IoT) circuits.
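Functionally, the two routing blocks are simple to state; a behavioral sketch in Python (illustrating the routing logic only — the paper's contribution is the transistor-level pass-transistor realization, which this does not model):

```python
def mux4(inputs, sel):
    """4x1 MUX: route one of four bit-update outputs to the adder input."""
    assert 0 <= sel < 4
    return inputs[sel]

def demux4(bit, sel):
    """1x4 DEMUX: deliver the adder output to one of four bit-update blocks."""
    out = [0, 0, 0, 0]
    out[sel] = bit
    return out

print(mux4([0, 1, 1, 0], sel=2))   # 1
print(demux4(1, sel=3))            # [0, 0, 0, 1]
```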


2022 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
René B. Christensen ◽  
Carlos Munuera ◽  
Francisco R. F. Pereira ◽  
Diego Ruano

We study entanglement-assisted quantum error-correcting codes (EAQECCs) arising from classical one-point algebraic geometry codes from the Hermitian curve with respect to the Hermitian inner product. Their only unknown parameter is $c$, the number of required maximally entangled quantum states, since the Hermitian dual of an AG code is unknown. In this article, we present an efficient algorithmic approach for computing $c$ for this family of EAQECCs. As a result, this algorithm allows us to provide EAQECCs with excellent parameters over any field size.
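To illustrate the quantity being computed, here is a rough analogue in the simpler binary CSS setting: there, the number of required ebits for a construction built from a classical code with parity-check matrix H is c = rank(H·Hᵀ) over GF(2) (the Hermitian case studied in the paper replaces the transpose with the conjugate transpose over GF(q²), which this sketch does not cover). A minimal computation for the [7,4] Hamming code:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                  # eliminate the column
        rank += 1
    return rank

# Parity-check matrix of the [7,4] Hamming code.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

c = gf2_rank(H @ H.T)   # ebits needed by the CSS-style construction
print(c)                # 0
```

The result is 0: the Hamming code contains its dual, so this construction needs no entanglement, consistent with the standard [[7,1,3]] Steane code.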


2021 ◽  
Vol 3 (2) ◽  
Author(s):  
El Miloud Ar-Reyouchi ◽  
Salma Rattal ◽  
Kamal Ghoumid

Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 5
Author(s):  
Francisco Revson Fernandes Pereira ◽  
Stefano Mancini

A general framework describing the statistical discrimination of an ensemble of quantum channels is given by the name quantum reading. Several tools can be applied in quantum reading to reduce the error probability in distinguishing the ensemble of channels. Classical and quantum codes can be envisioned for this goal. The aim of this paper is to present a simple but fruitful protocol for this task using classical error-correcting codes. Three families of codes are considered: Reed–Solomon codes, BCH codes, and Reed–Muller codes. In conjunction with the use of codes, we also analyze the role of the receiver. In particular, heterodyne and Dolinar receivers are taken into consideration. The encoding and measurement schemes are connected by the probing step. As probes, we consider coherent states. In such a simple manner, interesting results are obtained. As we show, there is a threshold below which using codes surpass optimal and sophisticated schemes for any fixed rate and code. BCH codes in conjunction with Dolinar receiver turn out to be the optimal strategy for error mitigation in quantum reading.
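The benefit of wrapping receiver outcomes in a t-error-correcting block code can be gauged with a binomial tail. Below is a sketch assuming independent symbol misreads with probability p (a simplification of the heterodyne/Dolinar error models) and a single-error-correcting length-7 code; the specific numbers are illustrative, not the paper's results.

```python
from math import comb

def block_error(p, n=7, t=1):
    """Probability that a length-n block suffers more than t symbol errors,
    i.e., that a t-error-correcting code fails to decode it."""
    return 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

for p in (0.1, 0.01, 0.001):
    print(f"raw {p:>6}: coded block error {block_error(p):.2e}")
```

At p = 0.01 the block error is about 2.0e-3, roughly five times below the raw misread probability; at p = 0.1 it is about 1.5e-1, worse than raw. This crude crossover echoes the threshold behavior the abstract reports for coded quantum reading.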


Author(s):  
Rohitkumar R Upadhyay

Abstract: Hamming codes are the first nontrivial family of error-correcting codes: they can correct one error in a block of binary symbols. In this paper we extend the notion of error correction to error reduction and present several decoding methods with the goal of improving the error-reducing capabilities of Hamming codes. First, the error-reducing properties of Hamming codes with standard decoding are demonstrated and explored. We show a lower bound on the average number of errors present in a decoded message when two errors are introduced by the channel for general Hamming codes. Other decoding algorithms are investigated experimentally, and it is found that these algorithms improve the error-reduction capabilities of Hamming codes beyond the aforementioned lower bound of standard decoding. Keywords: coding theory, Hamming codes, Hamming distance
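The two-error behavior underlying such a lower bound is easy to reproduce. With the [7,4] Hamming code, standard syndrome decoding of a word carrying two channel errors always flips a third bit (the syndrome is the sum of two distinct nonzero columns, which names a third position), so the decoded word holds three errors:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i encodes i+1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def decode(word):
    """Standard Hamming decoding: flip the bit position named by the syndrome."""
    s = H @ word % 2
    pos = s[0] * 4 + s[1] * 2 + s[2]
    out = word.copy()
    if pos:
        out[pos - 1] ^= 1
    return out

codeword = np.zeros(7, dtype=int)   # the all-zero codeword
received = codeword.copy()
received[[0, 3]] ^= 1               # two channel errors at positions 1 and 4
decoded = decode(received)
print((decoded != codeword).sum())  # 3: standard decoding added an error
```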


Author(s):  
В’ячеслав Васильович Москаленко ◽  
Микола Олександрович Зарецький ◽  
Альона Сергіївна Москаленко ◽  
Артем Геннадійович Коробов ◽  
Ярослав Юрійович Ковальський

A semi-supervised machine learning method was developed for the classification analysis of defects on the surface of sewer pipes based on CCTV video inspection images. The object of the research is the process of defect detection on the surface of sewage pipes. The subject of the research is a machine learning method for the classification analysis of sewage pipe defects on video inspection images under conditions of a limited and unbalanced set of labeled training data. A five-stage algorithm for classifier training is proposed. In the first stage, contrastive training is performed using the instance-prototype contrast loss function, where the normalized Euclidean distance is used to measure the similarity of the encoded samples. The second stage considers two variants of regularized loss functions: a triplet NCA function and a contrast-center loss function. The regularizing component in the second stage of training penalizes the error of rounding the output feature vector to a discrete form and ensures that the information-bottleneck principle is implemented. The next stage is to calculate the binary code of each class to implement error-correcting codes, while considering the structure of the classes and the relationships between their features. The resulting prototype vector of each class is used as an image label for training with the cross-entropy loss function. The last stage of training optimizes the parameters of the decision rules using an information criterion, to account for the variance of the class distribution in binary Hamming space. A micro-averaged F1 metric, calculated on test data, is used to compare learning outcomes at different stages and between different approaches. The results obtained on the Sewer-ML open dataset confirm the suitability of the training method for practical use, with an F1 value of 0.977. The proposed method provides a 9% increase in the micro-averaged F1 metric compared to the results obtained using the traditional method.
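The class-codeword idea in the third stage is the classic error-correcting output codes (ECOC) scheme: each class gets a binary codeword, and a prediction is decoded to the nearest codeword in Hamming distance. A toy sketch with invented defect labels (the actual Sewer-ML classes and the learned codebooks differ):

```python
import numpy as np

# Toy ECOC codebook: hand-picked codewords with pairwise Hamming
# distance 4, so one flipped bit still decodes to the right class.
# The class names here are hypothetical examples, not the dataset's.
codebook = {
    "crack":   np.array([0, 0, 0, 0, 0, 0]),
    "root":    np.array([1, 1, 1, 1, 0, 0]),
    "deposit": np.array([1, 1, 0, 0, 1, 1]),
}

def decode(predicted_bits):
    """Assign the class whose codeword is nearest in Hamming distance."""
    return min(codebook, key=lambda c: (codebook[c] != predicted_bits).sum())

noisy = np.array([1, 1, 1, 0, 0, 0])   # "root" codeword with one bit flipped
print(decode(noisy))                   # root
```

The paper's refinement is to derive the codebook from the class structure itself rather than choosing it by hand.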


Doklady BGUIR ◽  
2021 ◽  
Vol 19 (7) ◽  
pp. 31-39
Author(s):  
A. A. Budzko ◽  
T. N. Dvornikova

The work is devoted to the development of circuits for fast Walsh transform processors of the serial-parallel type. Fast Walsh transform processors are designed for decoding error-correcting codes and for synchronization; their use can reduce the cost of calculating the instantaneous Walsh spectrum by almost 2 times. The class of processors for computing the instantaneous Walsh spectrum is called serial-parallel processors. Circuits of fast Walsh transform processors of the serial-parallel type have been developed, and a comparative analysis of the constructed graphs of the fast Walsh transform processors is carried out. A method and a processor for calculating the Walsh transform coefficients are proposed that increase the speed of the transformations performed. When calculating the conversion coefficients using processors of the parallel, serial, and serial-parallel types, it was found that processors of the serial-parallel type require 2(N–1) operations when calculating the instantaneous Walsh spectrum. The results obtained can be used in the design of discrete information processing devices and in telecommunication systems when coding signals for noise-immune transmission and decoding, which ensures the optimal number of operations, and therefore optimal hardware cost.
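As a software reference point, the Walsh spectrum can be computed with the in-place fast Walsh–Hadamard butterfly, which uses N·log₂N additions and subtractions; the paper's serial-parallel hardware reaches its 2(N−1)-operation schedule with a different organization not modeled here.

```python
def fwht(a):
    """Fast Walsh-Hadamard transform (natural order), log2(N) butterfly
    stages of N/2 add/subtract pairs each. N must be a power of two."""
    a = list(a)
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # [4, 2, 0, -2, 0, 2, 0, 2]
```

The first coefficient is the sum of the inputs (here 4), a quick sanity check on any Walsh spectrum implementation.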


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1494
Author(s):  
Christopher Hillar ◽  
Tenzin Chan ◽  
Rachel Taubman ◽  
David Rolnick

In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and compare them to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^Ω(n^(1−ϵ)) memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
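The attractor dynamics behind such memory storage can be illustrated with a minimal Hopfield network. This sketch uses the classical Hebbian outer-product rule rather than the paper's MEF objective, storing two orthogonal patterns and recovering one from a corrupted probe:

```python
import numpy as np

# Two orthogonal +-1 patterns on 8 neurons (rows of a Hadamard matrix).
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian outer-product weights; zeroing the diagonal removes self-coupling.
W = (np.outer(p1, p1) + np.outer(p2, p2)) / 8.0
np.fill_diagonal(W, 0)

def recall(state, steps=5):
    """Iterate the fixed-point dynamics toward a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state).astype(int)
    return state

probe = p1.copy()
probe[0] = -1                  # corrupt one neuron
print(recall(probe).tolist())  # [1, 1, 1, 1, -1, -1, -1, -1]: p1 recovered
```

With the Hebbian rule, exact recall scales only as roughly n/(2·log n) patterns; the 2^Ω(n^(1−ϵ)) counts in the abstract come from the clique/hypergraph construction, not from this rule.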

