Fast, minimal decoding complexity, systematic (13, 8) single-error-correcting codes for on-chip DRAM applications

2001 ◽  
Vol 37 (7) ◽  
pp. 438 ◽  
Author(s):  
A. Kazéminéjad

Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 709 ◽  
Author(s):  
Abhishek Das ◽  
Nur A. Touba

Technology scaling has increased the density and capacity of on-chip caches, enabling higher throughput through more low-latency memory transfers. With shrinking SRAM cells and the adoption of emerging technologies such as STT-MRAM for on-chip cache memories, the reliability of these memories becomes a major concern. Traditional error correcting codes, e.g., Hamming codes and orthogonal Latin square codes, suffer either from high decoding latency, which lowers overall throughput, or from high memory overhead. In this paper, a new single error correcting code based on shared majority voting logic is presented. The proposed codes trade a small increase in decoding latency for a reduction in the memory overhead incurred by orthogonal Latin square codes. A latency optimization technique is also proposed which lowers the decoding latency at the cost of a slight memory overhead. The proposed codes are shown to require less redundancy than orthogonal Latin square codes while achieving lower decoding latency than Hamming codes. They thus strike a balanced trade-off between memory overhead and decoding latency, making them well suited to on-chip cache memories with stringent throughput and memory overhead constraints.
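To make the majority-voting idea concrete, below is a minimal Python sketch of one-step majority decoding on a toy two-dimensional parity code, in which each data bit is covered by two checks that intersect only in that bit, so the bit is corrected exactly when both of its checks fail. The geometry, code size, and decode rule here are illustrative assumptions, not the shared-voter construction proposed in the paper.

```python
# Toy illustration of one-step majority decoding (hypothetical example,
# not the paper's construction): 4 data bits in a 2x2 array, protected
# by 2 row parities and 2 column parities. Each data bit lies in exactly
# two checks that intersect only in that bit, so it is corrected iff
# both checks fail -- a tiny majority vote needing one gate level after
# syndrome generation.

def encode(data):  # data: list of 4 bits, row-major 2x2
    d = data
    rows = [d[0] ^ d[1], d[2] ^ d[3]]   # row parities
    cols = [d[0] ^ d[2], d[1] ^ d[3]]   # column parities
    return d + rows + cols              # 8-bit codeword

def decode(word):
    d, rows, cols = word[:4], word[4:6], word[6:8]
    row_fail = [rows[r] ^ d[2*r] ^ d[2*r + 1] for r in range(2)]
    col_fail = [cols[c] ^ d[c] ^ d[c + 2] for c in range(2)]
    # Majority vote per data bit: flip iff both covering checks fail.
    return [d[2*r + c] ^ (row_fail[r] & col_fail[c])
            for r in range(2) for c in range(2)]

codeword = encode([1, 0, 1, 1])
codeword[2] ^= 1                        # inject a single-bit error
assert decode(codeword) == [1, 0, 1, 1]
```

A single error in a check bit fails only one check, so no data bit is flipped; sharing voter logic across bits, as the paper proposes, is what reduces the redundancy relative to a plain orthogonal Latin square arrangement.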


2021 ◽  
Vol 20 (3) ◽  
pp. 1-25 ◽  
Author(s):  
James Marshall ◽  
Robert Gifford ◽  
Gedare Bloom ◽  
Gabriel Parmer ◽  
Rahul Simha

Increased access to space has led to growing use of commodity processors in radiation environments. These processors are vulnerable to transient faults such as single event upsets that may cause bit-flips in processor components. Caches in particular are vulnerable due to their relatively large area, yet are often omitted from fault injection testing because many processors do not provide direct access to cache contents and because simulators often do not model caches fully. The performance benefits of caches make disabling them undesirable, and the presence of error correcting codes is insufficient to correct increasingly common multiple bit upsets. This work explores building a program’s cache profile by collecting cache usage information at instruction granularity via commonly available on-chip debugging interfaces. The profile provides a tighter bound on cache vulnerability than raw cache utilization (by 50% for several benchmarks). This can be applied to reduce the number of fault injections required to characterize behavior by at least two-thirds for the benchmarks we examine. The profile enables future work in hardware fault injection for caches that avoids the biases of existing techniques.
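As a rough illustration of the profiling idea, the sketch below credits a cache line with vulnerable time only from each fill to its last use before eviction, which is what makes such a profile tighter than raw residency or utilization. The trace format, direct-mapped geometry, and read-only accesses are simplifying assumptions; the paper gathers its usage information through on-chip debug interfaces rather than a software model like this.

```python
# Hedged sketch of instruction-granularity vulnerability profiling
# (assumed trace format and a simplified direct-mapped cache, not the
# paper's debug-interface flow): a line is counted as vulnerable only
# while it holds data that will still be consumed, i.e., from each fill
# to the last hit on that fill, not for its full residency.

LINES, LINE_BYTES = 64, 32   # assumed toy cache geometry

def vulnerable_time(trace):
    """trace: list of (time, address) read accesses, in program order."""
    fill_time, last_use, tags = {}, {}, {}
    vulnerable = 0
    for t, addr in trace:
        idx = (addr // LINE_BYTES) % LINES
        tag = addr // (LINE_BYTES * LINES)
        if tags.get(idx) != tag:            # miss: line (re)filled
            if idx in fill_time:            # close the previous interval
                vulnerable += last_use[idx] - fill_time[idx]
            tags[idx], fill_time[idx] = tag, t
        last_use[idx] = t                   # hit, or the fill itself
    for idx in fill_time:                   # close intervals still open
        vulnerable += last_use[idx] - fill_time[idx]
    return vulnerable
```

Restricting injection times to these vulnerable intervals, rather than sampling uniformly over the whole run, is one way a profile like this can shrink a fault injection campaign.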


2006 ◽  
Vol 42 (1) ◽  
pp. 67-72 ◽  
Author(s):  
Simon Litsyn ◽  
Beniamin Mounits

1970 ◽  
Vol 16 (6) ◽  
pp. 717-719 ◽  
Author(s):  
N. Sloane ◽  
D. Whitehead

1996 ◽  
Vol 42 (4) ◽  
pp. 1261-1262 ◽  
Author(s):  
P.R.J. Ostergard ◽  
M.K. Kaikkonen

2008 ◽  
Vol 17 (5) ◽  
pp. 773-783 ◽  
Author(s):  
Heesung Lee ◽  
Euntai Kim

Error correcting codes (ECCs) are commonly used as protection against soft errors. Single error correcting and double error detecting (SEC–DED) codes are generally used for this purpose, and such circuits are deployed widely in industry in all types of memory, including caches and embedded memory. In this paper, a new genetic design method for ECC is proposed to perform SEC–DED in the memory check circuit. The design aims to find the ECC implementation that consumes minimal power. We formulate the ECC design as a permutable optimization problem and employ special genetic operators appropriate for this formulation. Experiments are performed to demonstrate the performance of the proposed method.
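To illustrate what a permutation-encoded genetic search over an ECC check circuit might look like, the sketch below assigns data bits with assumed toggle rates to parity-check columns of differing weight, using order crossover and swap mutation on the permutation. The cost model, column weights, and toggle rates are hypothetical stand-ins, not the paper's power estimator.

```python
import random

# Hedged sketch of a permutation-encoded GA (hypothetical cost model):
# a candidate solution is a permutation assigning each data bit to one
# column of a fixed SEC-DED parity-check matrix; fitness is the
# toggle-rate-weighted sum of column weights, a crude proxy for the
# switching activity of the XOR check trees.

COL_WEIGHTS = [3, 3, 3, 3, 5, 5, 5, 5]                    # ones per H column (assumed)
TOGGLE_RATE = [0.9, 0.7, 0.6, 0.5, 0.4, 0.2, 0.1, 0.05]   # per data bit (assumed)

def cost(perm):  # perm[i] = column assigned to data bit i
    return sum(TOGGLE_RATE[i] * COL_WEIGHTS[c] for i, c in enumerate(perm))

def order_crossover(a, b):
    """Copy a slice from parent a, fill the rest in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def swap_mutation(p, rate=0.2):
    p = p[:]
    if random.random() < rate:
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

def evolve(pop_size=30, generations=200):
    pop = [random.sample(range(8), 8) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                 # elitist selection
        elite = pop[:pop_size // 2]
        pop = elite + [swap_mutation(order_crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```

Permutation encoding keeps every candidate a valid one-to-one assignment, which is why such a formulation calls for order-preserving crossover and swap-style mutation rather than the generic bit-string operators.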

