A Single Error Correcting Code with One-Step Group Partitioned Decoding Based on Shared Majority-Vote

Electronics, 2020, Vol. 9(5), p. 709
Author(s): Abhishek Das, Nur A. Touba

Technology scaling has led to an increase in the density and capacity of on-chip caches, enabling higher throughput through more low-latency memory transfers. With the shrinking of SRAMs and the development of emerging technologies, e.g., STT-MRAM, for on-chip cache memories, the reliability of such memories becomes a major concern. Traditional error correcting codes, e.g., Hamming codes and orthogonal Latin square codes, suffer either from high decoding latency, which lowers overall throughput, or from high memory overhead. In this paper, a new single error correcting code based on shared majority voting logic is presented. The proposed codes trade off decoding latency to reduce the memory overhead incurred by orthogonal Latin square codes. A latency optimization technique is also proposed which lowers the decoding latency at the cost of a slight memory overhead. It is shown that the proposed codes require less redundancy than orthogonal Latin square codes and achieve lower decoding latency than Hamming codes. The proposed codes thus strike a balanced trade-off between memory overhead and decoding latency, which makes them highly suitable for on-chip cache memories with stringent throughput and memory overhead constraints.
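As a rough illustration of the one-step majority-vote principle (not the authors' exact construction), the sketch below implements a minimal cross-parity code in the spirit of orthogonal Latin square codes: each data bit in an m x m array belongs to exactly two orthogonal parity checks, and a bit is flipped when both of its checks fail, which corrects any single error in one parallel step. The parameters are illustrative.

```python
# Minimal sketch: one-step majority-vote single error correction over an
# m x m data array with row and column parities (a toy stand-in for OLS codes).
import numpy as np

def encode(data_bits, m):
    """data_bits: flat array of m*m bits -> codeword (data, row parities, col parities)."""
    d = np.asarray(data_bits, dtype=np.uint8).reshape(m, m)
    rows = (d.sum(axis=1) % 2).astype(np.uint8)   # one parity bit per row
    cols = (d.sum(axis=0) % 2).astype(np.uint8)   # one parity bit per column
    return np.concatenate([d.ravel(), rows, cols])

def decode(word, m):
    """Correct a single bit error by voting over the two orthogonal checks."""
    d = word[:m*m].reshape(m, m).copy()
    rows, cols = word[m*m:m*m+m], word[m*m+m:]
    row_syn = (d.sum(axis=1) + rows) % 2          # 1 where a row check fails
    col_syn = (d.sum(axis=0) + cols) % 2          # 1 where a column check fails
    # Flip d[i, j] iff BOTH of its checks fail (majority of its check group);
    # a single data error fails exactly one row and one column check.
    d ^= np.outer(row_syn, col_syn).astype(np.uint8)
    return d.ravel()

m = 4
data = np.random.randint(0, 2, m*m, dtype=np.uint8)
cw = encode(data, m)
cw[5] ^= 1                                        # inject a single-bit data error
assert np.array_equal(decode(cw, m), data)        # recovered in one decoding step
```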

2008, Vol. 17(05), pp. 773-783
Author(s): Heesung Lee, Euntai Kim

Error correcting codes (ECCs) are commonly used as a protection against soft errors. Single error correcting and double error detecting (SEC–DED) codes are generally used for this purpose, and such circuits are widely deployed in industry in all types of memory, including caches and embedded memory. In this paper, a new genetic design for ECC is proposed to perform SEC–DED in the memory check circuit. The design aims to find the ECC implementation that consumes minimal power. We formulate ECC design as a permutable optimization problem and employ special genetic operators appropriate for this formulation. Experiments are performed to demonstrate the performance of the proposed method.
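To make the permutation formulation concrete, here is a minimal sketch of a permutation-coded genetic algorithm with order crossover and swap mutation. The power proxy (high-activity data bits assigned to heavily loaded check-matrix columns cost more) and all parameters are toy assumptions, not the authors' power model.

```python
# Toy GA over permutations: assign data bits (with differing switching
# activity) to check-matrix column positions (with differing check fanout).
import random

random.seed(1)
N_COLS = 16
col_weight = [random.randint(1, 4) for _ in range(N_COLS)]   # checks per column (toy)
toggle = [random.random() for _ in range(N_COLS)]            # per-bit activity (toy)

def power(perm):
    """Toy power proxy: high-activity bits in heavy columns cost more."""
    return sum(toggle[b] * col_weight[j] for j, b in enumerate(perm))

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining genes in p2's order."""
    a, b = sorted(random.sample(range(N_COLS), 2))
    hole = set(p1[a:b])
    filler = [g for g in p2 if g not in hole]
    return filler[:a] + p1[a:b] + filler[a:]

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(N_COLS), 2)
        perm[i], perm[j] = perm[j], perm[i]      # swap two assignments
    return perm

pop = [random.sample(range(N_COLS), N_COLS) for _ in range(60)]
for _ in range(200):                             # generations
    pop.sort(key=power)
    elite = pop[:20]                             # truncation selection
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2)))
                   for _ in range(40)]
best = min(pop, key=power)
print("best assignment:", best, "power proxy:", round(power(best), 3))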


Mathematics, 2021, Vol. 9(7), p. 789
Author(s): Emanuele Bellini, Chiara Marcolla, Nadir Murru

In addition to their usefulness in proving one's identity electronically, identification protocols based on zero-knowledge proofs allow the design of secure cryptographic signature schemes by means of the Fiat–Shamir transform or other similar constructs. This approach has been followed by many cryptographers during the NIST (National Institute of Standards and Technology) standardization process for quantum-resistant signature schemes. NIST candidates include solutions in different settings, such as lattice-based, multivariate, and multiparty-computation-based constructions. While error-correcting codes may also be used, they do not provide very practical parameters, with a few exceptions. In this manuscript, we explored the possibility of using the error-correcting codes proposed by Stakhov in 2006 to design an identification protocol based on zero-knowledge proofs. We showed that this type of code offers a valid alternative in the error-correcting code setting for building such protocols and, consequently, quantum-resistant signature schemes.
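For readers unfamiliar with the Fiat–Shamir transform mentioned above, the sketch below shows the generic mechanics: the verifier's random challenge in an interactive sigma protocol is replaced by a hash of the commitment and the message, yielding a signature. For brevity it uses a Schnorr-style discrete-log protocol over a toy group rather than the code-based (Stakhov) protocol studied in the paper; the parameters are insecure toy values.

```python
# Toy Fiat-Shamir transform: sigma protocol -> signature scheme.
import hashlib, secrets

p, q, g = 2039, 1019, 4        # toy group: g generates the order-q subgroup of Z_p*

def H(*parts):
    h = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return int(h, 16) % q      # hash plays the role of the verifier's challenge

def keygen():
    x = secrets.randbelow(q)   # secret key
    return x, pow(g, x, p)     # (sk, pk = g^x)

def sign(x, msg):
    r = secrets.randbelow(q)   # prover's commitment randomness
    t = pow(g, r, p)           # commitment
    c = H(t, msg)              # Fiat-Shamir: challenge = hash(commitment, message)
    s = (r + c * x) % q        # response
    return c, s

def verify(y, msg, sig):
    c, s = sig
    t = (pow(g, s, p) * pow(y, q - c, p)) % p   # recompute commitment g^s * y^(-c)
    return c == H(t, msg)

sk, pk = keygen()
sig = sign(sk, "hello")
print(verify(pk, "hello", sig))     # True
print(verify(pk, "tampered", sig))  # False (except with negligible probability)
```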


2021, Vol. 20(3), pp. 1-25
Author(s): James Marshall, Robert Gifford, Gedare Bloom, Gabriel Parmer, Rahul Simha

Increased access to space has led to an increase in the usage of commodity processors in radiation environments. These processors are vulnerable to transient faults such as single event upsets that may cause bit-flips in processor components. Caches in particular are vulnerable due to their relatively large area, yet are often omitted from fault injection testing because many processors do not provide direct access to cache contents and simulators often do not fully model them. The performance benefits of caches make disabling them undesirable, and the presence of error correcting codes is insufficient to correct for increasingly common multiple bit upsets. This work explores building a program's cache profile by collecting cache usage information at instruction granularity via commonly available on-chip debugging interfaces. The profile provides a tighter bound than cache utilization for cache vulnerability estimates (50% for several benchmarks). This can be applied to reduce the number of fault injections required to characterize behavior by at least two-thirds for the benchmarks we examine. The profile enables future work in hardware fault injection for caches that avoids the biases of existing techniques.
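The sketch below illustrates, on an invented toy trace rather than the paper's measured data, why an instruction-granularity profile bounds vulnerability more tightly than utilization: a corrupted line can only affect the program while its contents are still going to be read, so only live line-cycles need fault injection.

```python
# Toy comparison of a utilization bound vs. a liveness-based profile bound.
CACHE_LINES = 4
TOTAL_CYCLES = 100

# (cycle, op, line): 'fill' loads a line, 'read' consumes it, 'evict' drops it.
trace = [
    (0,  "fill", 0), (10, "read", 0), (90, "read", 0),
    (5,  "fill", 1), (8,  "read", 1), (9,  "evict", 1),
    (20, "fill", 2), (25, "read", 2), (30, "evict", 2),
]

# Utilization bound: any touched line counts as vulnerable for the whole run.
touched = {line for _, _, line in trace}
utilization_bound = len(touched) * TOTAL_CYCLES

# Profile bound: a line is vulnerable only from its fill up to its last read.
events = {}
for cyc, op, line in sorted(trace):
    events.setdefault(line, []).append((cyc, op))

live_cycles = 0
for line, evs in events.items():
    start = last_read = None
    for cyc, op in evs:
        if op == "fill":
            start, last_read = cyc, None
        elif op == "read":
            last_read = cyc
        elif op == "evict" and start is not None:
            if last_read is not None:
                live_cycles += last_read - start
            start = None
    if start is not None and last_read is not None:   # never evicted
        live_cycles += last_read - start

print("utilization bound:", utilization_bound, "line-cycles")   # 300
print("profile bound:    ", live_cycles, "line-cycles")         # 98
```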


2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Aayush Bhat, Vyom Gupta, Savitoj Singh Aulakh, Renold S. Elsen

Purpose: The purpose of this paper is to implement generative design as an optimization technique to achieve a reasonable trade-off between weight and reliability for the control arm plate of a double-wishbone suspension assembly of a Formula Student race car.
Design/methodology/approach: The generative design methodology is applied to develop a low-weight alternative to a standard control arm plate design. A static stress simulation and a fatigue life study are developed to assess the response of the plate against the loading criteria and to ensure that the plate sustains the theoretically determined number of loading cycles.
Findings: The approach implemented provides a justifiable outcome for the weight versus factor-of-safety trade-off. In addition to optimal material distribution, the generative design methodology provides several design outcomes for different materials and fabrication techniques. This enables selection of the best possible outcome for several structural requirements.
Research limitations/implications: This technique can be used for applications with pre-defined constraints, such as packaging and loading, usually observed in load-bearing components developed in the automotive and aerospace sectors of the manufacturing industry.
Practical implications: Using this technique can provide an alternative to long periods spent in the design phase, because of its ability to generate several possible design outcomes in a fraction of the time.
Originality/value: The proposed research provides a means of developing optimized designs and techniques by which the design developed and chosen can be structurally analyzed.


Author(s): Levon Arsalanyan, Hayk Danoyan

The Nearest Neighbor search algorithm considered in this paper is the well-known Elias algorithm. It uses error-correcting codes to construct appropriate hash-coding schemas, which preprocess the data into lists. Each list is contained in a sphere centered at a codeword. The algorithm is first considered for the case of perfect codes, where the spheres, and consequently the lists, do not intersect. Since such codes exist only for a limited set of parameters, the algorithm is also considered for some generalizations of perfect codes, in which case the same data point may be contained in several lists. A formula for the time complexity of the algorithm is obtained for these cases using the coset weight structures of the mentioned codes.
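As a concrete miniature of the scheme (using the perfect Hamming(7,4) code and an invented data set; probing neighboring spheres, which exact queries near a sphere boundary require, is omitted):

```python
# Elias-style hashing: a perfect code partitions the Hamming cube into
# disjoint radius-1 spheres; each sphere's center (a codeword) indexes a list.
from collections import defaultdict

def syndrome(v):
    """Syndrome under the Hamming(7,4) check matrix whose column at
    position i (1-indexed) is the binary expansion of i."""
    s = 0
    for i, bit in enumerate(v, start=1):
        if bit:
            s ^= i
    return s

def sphere_center(v):
    """Nearest codeword: flip the bit named by the syndrome (perfect code)."""
    s = syndrome(v)
    if s:
        v = v[:s-1] + (v[s-1] ^ 1,) + v[s:]
    return v

# Preprocess: bucket every data point by the codeword of its sphere.
data = [(0,1,1,0,1,0,1), (1,1,1,1,1,1,1), (0,0,0,0,0,1,0), (1,0,1,0,1,0,1)]
buckets = defaultdict(list)
for point in data:
    buckets[sphere_center(point)].append(point)

def query(q):
    """Scan only the list of q's own sphere (its radius-1 neighborhood)."""
    candidates = buckets.get(sphere_center(q), [])
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min(candidates, key=lambda p: dist(p, q), default=None)

print(query((0,0,0,0,0,0,0)))   # nearest stored point within the query's sphere
```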


Author(s): Rohitkumar R Upadhyay

Abstract: Hamming codes are the first nontrivial family of error-correcting codes, able to correct one error in a block of binary symbols. In this paper we extend the notion of error correction to error reduction and present several decoding methods with the goal of improving the error-reducing capabilities of Hamming codes. First, the error-reducing properties of Hamming codes with standard decoding are demonstrated and explored. We show a lower bound on the average number of errors present in a decoded message when two errors are introduced by the channel, for general Hamming codes. Other decoding algorithms are investigated experimentally and found to improve the error-reduction capabilities of Hamming codes beyond the aforementioned lower bound of standard decoding.
Keywords: coding theory, Hamming codes, Hamming distance
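The standard-decoding baseline the paper starts from is easy to reproduce. The experiment below (our illustrative choice of the (7,4) code and trial count) injects exactly two channel errors into random Hamming codewords and measures the average number of errors after standard syndrome decoding: miscorrection flips a third bit, so the average rises from 2 to 3, which is the behavior error-reducing decoders aim to beat.

```python
# Double-error behavior of standard Hamming(7,4) syndrome decoding.
import itertools, random

random.seed(0)

def syndrome(v):
    s = 0
    for i, bit in enumerate(v, start=1):
        if bit:
            s ^= i
    return s

def decode(v):
    """Standard decoding: flip the single bit indicated by the syndrome."""
    v = list(v)
    s = syndrome(v)
    if s:
        v[s-1] ^= 1
    return v

codewords = [c for c in itertools.product((0, 1), repeat=7) if syndrome(c) == 0]

total_before = total_after = 0
for _ in range(10_000):
    c = list(random.choice(codewords))
    i, j = random.sample(range(7), 2)        # inject exactly two errors
    r = c[:]; r[i] ^= 1; r[j] ^= 1
    d = decode(r)
    total_before += 2
    total_after += sum(a != b for a, b in zip(c, d))

print("avg errors before decoding:", total_before / 10_000)   # 2.0
print("avg errors after decoding: ", total_after / 10_000)    # 3.0 with standard decoding
```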


1998, Vol. 57(3), pp. 367-376
Author(s): Chi-Kwong Li, Ingrid Nelson

We characterise all the perfect k-error correcting codes that can be defined on the graph associated with the Towers of Hanoi puzzle. In particular, a short proof of the existence of a perfect 1-error correcting code on such a graph is given.
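For small numbers of disks the existence result can be checked directly. The sketch below (our own encoding: a state lists the peg of each disk, smallest first) builds the Towers of Hanoi graph and backtracks for a set of vertices whose closed neighborhoods partition the vertex set, i.e., a perfect 1-error correcting code. For n = 2, for example, the code found is the three "all disks on one peg" states.

```python
# Search for a perfect 1-code on the Towers of Hanoi graph with n disks.
from itertools import product

def neighbors(state):
    """Legal single-disk moves: disk d may move iff no smaller disk sits on
    its peg, and only to a peg holding no smaller disk."""
    res = []
    for d in range(len(state)):
        if any(state[s] == state[d] for s in range(d)):
            continue
        for peg in range(3):
            if peg != state[d] and not any(state[s] == peg for s in range(d)):
                res.append(state[:d] + (peg,) + state[d+1:])
    return res

def find_perfect_code(n):
    verts = list(product(range(3), repeat=n))
    ball = {v: [v] + neighbors(v) for v in verts}    # closed neighborhoods

    dominated, code = set(), []
    def extend(idx):
        while idx < len(verts) and verts[idx] in dominated:
            idx += 1
        if idx == len(verts):
            return True                  # every vertex dominated exactly once
        v = verts[idx]
        for u in ball[v]:                # some codeword must cover v
            if all(w not in dominated for w in ball[u]):
                code.append(u)
                dominated.update(ball[u])
                if extend(idx):
                    return True
                code.pop()
                dominated.difference_update(ball[u])
        return False

    return code if extend(0) else None

for n in (1, 2, 3):
    code = find_perfect_code(n)
    print(n, "disks: perfect 1-code of size", len(code))
```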


Author(s): Sameh Monir El-Sayegh, Rana Al-Haj

Purpose: The purpose of this paper is to propose a new framework for time–cost trade-off that provides the optimum time–cost value taking the impact of float loss into account.
Design/methodology/approach: The stochastic framework uses Monte Carlo simulation to calculate the effect of float loss on risk, which is then translated into an added cost in the trade-off problem. Five examples from the literature are solved using the proposed framework to test its applicability.
Findings: The results confirmed the research hypothesis that the new optimum solution lies at a higher duration and cost but at a lower risk compared with traditional methods. In all five cases, the probability of finishing the project on time using the developed framework was better than with the classical deterministic optimization technique.
Originality/value: The objective of time–cost trade-off is to determine the optimum project duration corresponding to the minimum total cost. Time–cost trade-off techniques reduce the available float of noncritical activities and thus increase schedule risk. Existing deterministic optimization techniques do not consider the impact of float loss within the noncritical activities when the project duration is being crashed. The new framework allows project managers to exercise new trade-offs between time, cost and risk, which will ultimately improve the chances of achieving project objectives.
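A minimal Monte Carlo sketch of the framework's core idea follows; the three-activity network, triangular distributions, and cost figures are invented for illustration. Crashing the critical path saves overhead but consumes the parallel activity's float, so the probability of missing the committed date, priced as an expected penalty, pushes the optimum toward less crashing than the deterministic answer (which here would crash all five days).

```python
# Toy time-cost trade-off with float loss priced as expected lateness cost.
import random

random.seed(7)
TRIALS = 20_000
CRASH_COST = 100     # direct cost per crashed day (toy value)
OVERHEAD = 150       # indirect cost saved per day the schedule shortens (toy value)
PENALTY = 2_000      # cost of missing the committed completion date (toy value)

def duration(crash):
    """Critical path A -> C (deterministically 11 days) runs in parallel
    with activity B, whose float shrinks as A -> C is crashed."""
    a = random.triangular(5, 7, 6)      # triangular(low, high, mode)
    c = random.triangular(4, 6, 5)
    b = random.triangular(4.5, 9, 6)
    return max(a + c - crash, b)

for crash in range(6):
    committed = 12 - crash              # deterministic duration + 1 day contingency
    late = sum(duration(crash) > committed for _ in range(TRIALS)) / TRIALS
    cost = crash * (CRASH_COST - OVERHEAD) + late * PENALTY
    print(f"crash {crash}d: P(late)={late:.3f}, relative expected cost={cost:8.1f}")
```

Running this shows the expected cost bottoming out before maximum crashing: the deterministic saving of 50 per crashed day is eventually outweighed by the rising lateness risk as activity B's float disappears.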

