Concurrent Average Memory Access Time

Computer ◽  
2014 ◽  
Vol 47 (5) ◽  
pp. 74-80 ◽  
Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1454
Author(s):  
Yoshihiro Sugiura ◽  
Toru Tanzawa

This paper describes how pre-emphasis (PE) pulses can reduce memory access time, even in non-volatile random-access memory. Optimum PE pulse widths and the resultant minimum word-line (WL) delay times are investigated as a function of column address. The impact of process variation in the WL time constant, the cell current, and the resistance of the decoding path on the optimum PE pulses is discussed. Optimum PE pulse widths and the resultant minimum WL delay times are modeled with fitting curves as a function of the column address of the accessed memory cell, which lets designers set the optimum timing for WL and bit-line (BL) operations and thereby reduce the average memory access time.
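The timing scheme the abstract describes can be sketched numerically. The fitting curves below are purely illustrative stand-ins (toy linear fits with made-up coefficients, not values from the paper) for the column-address-dependent models the authors propose:

```python
# Hypothetical fitting curves for optimum PE pulse width and minimum WL delay
# as a function of column address. Coefficients are illustrative placeholders,
# not values from the paper.

def optimum_pe_width(col, a=0.8, b=0.02):
    """Optimum pre-emphasis pulse width (ns) vs. column address (toy linear fit)."""
    return a + b * col

def min_wl_delay(col, c=1.5, d=0.01):
    """Resultant minimum word-line delay (ns) vs. column address (toy linear fit)."""
    return c + d * col

def schedule_bl_sense(col, margin=0.2):
    """Earliest safe bit-line sensing time: WL settled plus a timing margin."""
    return min_wl_delay(col) + margin

for col in (0, 128, 255):
    print(f"col={col:3d}  PE width={optimum_pe_width(col):.2f} ns  "
          f"BL sense at t={schedule_bl_sense(col):.2f} ns")
```

The point of such a model is that cells far from the WL driver (higher column addresses) need longer PE pulses and settle later, so BL sensing can be scheduled per column address rather than for the worst case.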


Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 176
Author(s):  
Wei Zhu ◽  
Xiaoyang Zeng

Applications have different preferences for caches, sometimes even across the different running phases of a single application. Caches with fixed parameters may therefore compromise system performance. To solve this problem, we propose a real-time adaptive reconfigurable cache based on the decision tree algorithm, which can optimize the average memory access time of the cache without modifying the cache coherence protocol. By monitoring the application's running state, the cache associativity is periodically tuned to the optimum determined by the decision tree model. This paper implements the proposed decision-tree-based adaptive reconfigurable cache in the GEM5 simulator and designs the key modules in Verilog HDL. The simulation results show that the proposed cache reduces the average memory access time compared with other adaptive algorithms.
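The tuning loop described above can be sketched as follows. The hand-written decision tree is a stand-in for the trained model in the paper, and the feature names (miss rate, conflict fraction) and thresholds are assumptions for illustration only:

```python
# Hedged sketch of periodic associativity tuning: a tiny hand-written decision
# tree (stand-in for the paper's trained model) maps monitored cache statistics
# to a set-associativity. Features and thresholds are illustrative assumptions.

def choose_associativity(miss_rate, conflict_fraction):
    """Toy decision tree: pick a set-associativity from monitored stats."""
    if miss_rate < 0.02:            # few misses overall: low associativity suffices
        return 2
    if conflict_fraction > 0.5:     # misses dominated by conflicts: go wider
        return 16 if miss_rate > 0.10 else 8
    return 4                        # capacity/compulsory dominated: mid-range

def tune_intervals(samples):
    """Per-interval reconfiguration: one associativity decision per phase."""
    return [choose_associativity(m, c) for m, c in samples]

# Four hypothetical program phases: (miss_rate, conflict_fraction) per interval.
phases = [(0.01, 0.1), (0.12, 0.7), (0.05, 0.6), (0.04, 0.2)]
print(tune_intervals(phases))  # -> [2, 16, 8, 4]
```

Because only the associativity (way-enable) changes between intervals, the tag/data arrays and the coherence protocol are untouched, which matches the paper's claim of tuning without protocol modification.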


Author(s):  
Amanat Ali ◽  
Amir Athar Khan ◽  
Sanawer Alam ◽  
N. R. Kidwai

A reduced-memory set-partitioned embedded block (SPECK) image coder is proposed in this paper. The original SPECK algorithm requires large run-time memory because it uses linked lists to keep track of set partitions and encoded information, which makes it unsuitable for memory-constrained portable devices. The proposed coder replaces the linked lists with fixed-length markers, using neither linked lists nor state tables, and thereby completely eliminates dynamic memory allocation during execution. This also reduces memory access time, making it faster than the original SPECK while retaining the embedded property and achieving coding efficiency close to that of the original SPECK.
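The memory-layout idea (not the full SPECK coder) can be illustrated in a few lines: instead of a dynamically growing linked list of sets, each coefficient block keeps a small fixed-width marker in a flat array allocated once up front. The marker codes below are assumed names for illustration, not the paper's exact encoding:

```python
# Hedged sketch of marker-based state tracking: one fixed-width marker per
# coefficient block in a statically sized array, so no dynamic allocation
# happens at run time. Marker codes are illustrative assumptions.

MIP, MNP, MSP = 0, 1, 2   # insignificant / newly significant / previously significant

def init_markers(num_blocks):
    """Fixed-length state: one byte-wide marker per block, allocated once."""
    return bytearray([MIP]) * num_blocks

def mark_significant(markers, idx):
    """A block found significant in the current pass is tagged in place."""
    markers[idx] = MNP

def promote(markers):
    """After a coding pass, newly significant entries become previously significant."""
    for i, m in enumerate(markers):
        if m == MNP:
            markers[i] = MSP

markers = init_markers(8)
mark_significant(markers, 3)
promote(markers)
print(list(markers))  # -> [0, 0, 0, 2, 0, 0, 0, 0]; array size never changes
```

Because every state update is an in-place write to a pre-allocated array, the coder's memory footprint is known at compile time, which is what makes this approach attractive for portable devices.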


2005 ◽  
Vol 14 (03) ◽  
pp. 605-617 ◽  
Author(s):  
SUNG WOO CHUNG ◽  
HYONG-SHIK KIM ◽  
CHU SHIK JHON

In scalable CC-NUMA multiprocessors, it is crucial to reduce the average memory access time. For applications where the second-level (L2) cache is large enough, we propose a split L2 cache to utilize the surplus space. The split L2 cache is composed of a traditional LRU cache and an RVC (Remote Victim Cache), which stores only data from the remote memory address range. It thus reduces the average L2 cache miss time by keeping remote blocks that would otherwise be discarded. Though the split cache does not reduce the miss rate, it is observed to reduce the total execution time by up to 27%. It even outperforms an LRU cache of double the size.
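A back-of-the-envelope model shows why keeping remote victims helps even when the miss rate is unchanged: an RVC hit converts an expensive remote fetch into a cheap local one. The latencies and rates below are illustrative assumptions, not figures from the paper:

```python
# Hedged model of average L2 miss service time with and without a Remote
# Victim Cache (RVC). Latencies (cycles) and rates are illustrative only.

def avg_l2_miss_time(remote_frac, rvc_hit_rate,
                     local_penalty=100, remote_penalty=400, rvc_penalty=20):
    """Average L2 miss service time (cycles) without and with the RVC."""
    without_rvc = ((1 - remote_frac) * local_penalty
                   + remote_frac * remote_penalty)
    # With the RVC, a fraction of remote misses hit in the victim cache and
    # pay a short local penalty instead of a full remote fetch.
    with_rvc = ((1 - remote_frac) * local_penalty
                + remote_frac * (rvc_hit_rate * rvc_penalty
                                 + (1 - rvc_hit_rate) * remote_penalty))
    return without_rvc, with_rvc

base, split = avg_l2_miss_time(remote_frac=0.4, rvc_hit_rate=0.5)
print(f"baseline {base:.0f} cycles, with RVC {split:.0f} cycles")
```

With these assumed numbers, the same number of misses is served in noticeably fewer cycles on average, which is consistent with the paper's observation that execution time drops even though the miss rate does not.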

