Reducing main memory access latency through SDRAM address mapping techniques and access reordering mechanisms

2006 ◽  
Author(s):  
Jun Shao

Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 438
Author(s):  
Rongshan Wei ◽  
Chenjia Li ◽  
Chuandong Chen ◽  
Guangyu Sun ◽  
Minghua He

Specialized accelerator architectures have achieved great success and are a major trend in computer architecture development. However, because an accelerator's memory access patterns are relatively complicated, its memory access performance is relatively poor, which limits the overall performance improvement of hardware accelerators. Moreover, memory controllers for hardware accelerators have been scarcely researched. We consider a dedicated accelerator memory controller essential for improving memory access performance. To this end, we propose NNAMC, a dynamic random access memory (DRAM) controller for neural network accelerators that monitors the accelerator's memory access stream and steers it to the bank with the optimal address mapping scheme based on the access characteristics. NNAMC includes a stream access prediction unit (SAPU) that analyzes in hardware the type of data stream the accelerator issues, and it designs the address mapping for different banks using a bank partitioning model (BPM). The image mapping method and hardware architecture were analyzed on a practical neural network accelerator. In the experiments, NNAMC achieved significantly lower access latency than the competing address mapping schemes, increased the row-buffer hit ratio by 13.68% on average (up to 26.17%), reduced system access latency by 26.3% on average (up to 37.68%), and lowered the hardware cost. We also confirmed that NNAMC adapts efficiently to different network parameters.
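The abstract hinges on choosing a per-bank DRAM address mapping that fits the observed access streams. As a minimal sketch of why the mapping choice matters, the C program below compares a plain bank-index mapping with a textbook XOR (permutation-based) bank mapping for two interleaved data streams; the field widths, the 1 MiB stream separation, and both mappings are illustrative assumptions, not the NNAMC/BPM design itself.

```c
/*
 * Illustrative sketch (not the paper's NNAMC scheme): how the DRAM address
 * mapping affects row-buffer locality. Field widths are hypothetical:
 * 10 column bits, 3 bank bits, 16 row bits.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct { unsigned column, bank, row; } dram_addr_t;
typedef dram_addr_t (*map_fn)(uint32_t);

/* Plain mapping: bank index taken directly from address bits 10..12. */
static dram_addr_t map_plain(uint32_t a) {
    dram_addr_t d = { a & 0x3FF, (a >> 10) & 0x7, (a >> 13) & 0xFFFF };
    return d;
}

/* XOR (permutation-based) mapping: higher address bits are XORed into the
 * bank index, so streams far apart in memory land in different banks. */
static dram_addr_t map_xor(uint32_t a) {
    dram_addr_t d = { a & 0x3FF, ((a >> 10) ^ (a >> 20)) & 0x7, (a >> 13) & 0xFFFF };
    return d;
}

/* Interleave two sequential streams 1 MiB apart (e.g., weights vs. outputs)
 * and count how often an access finds its row already open in the bank. */
static double row_hit_rate(map_fn map) {
    uint32_t open_row[8] = {0};
    int valid[8] = {0};
    int hits = 0, total = 0;
    for (uint32_t i = 0; i < 1024; i++) {
        uint32_t streams[2] = { i * 64u, 0x100000u + i * 64u };
        for (int s = 0; s < 2; s++) {
            dram_addr_t d = map(streams[s]);
            if (valid[d.bank] && open_row[d.bank] == d.row) hits++;
            open_row[d.bank] = d.row;
            valid[d.bank] = 1;
            total++;
        }
    }
    return (double)hits / total;
}

int main(void) {
    printf("plain bank mapping : %.2f row-buffer hit rate\n", row_hit_rate(map_plain));
    printf("XOR bank mapping   : %.2f row-buffer hit rate\n", row_hit_rate(map_xor));
    return 0;
}
```

With the plain mapping the two streams thrash the same banks and essentially never hit the open row; the XOR mapping places them in disjoint banks and the hit rate rises to roughly 15/16, which is the kind of effect a stream-aware, per-bank mapping aims to exploit.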


2013 ◽  
Vol 41 (3) ◽  
pp. 380-391 ◽  
Author(s):  
Young Hoon Son ◽  
O. Seongil ◽  
Yuhwan Ro ◽  
Jae W. Lee ◽  
Jung Ho Ahn

2013 ◽  
Vol 427-429 ◽  
pp. 2531-2535 ◽  
Author(s):  
Feng Dong Sun ◽  
Quan Guo ◽  
Lan Wang

In a main memory database, the bottleneck is not disk I/O but the fact that the CPU clock speed is faster than the memory speed. To achieve high performance in a main memory database, a good approach is to design new index structures that improve memory access speed. This chapter first presents a T-tree index structure and its algorithms for main memory databases, and then presents two results on optimizing the T-tree index: the T-tail tree and the TTB-tree. Our results indicate that the T-tree provides good overall performance in main memory.
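As context for the abstract above, here is a minimal C sketch of the general T-tree idea: a binary tree whose nodes each hold a small sorted array of keys, so a lookup first narrows down to one node and then binary-searches inside it. The node capacity and integer keys are assumptions for illustration; the T-tail and TTB-tree variants from the paper are not modeled.

```c
/* Minimal T-tree node and lookup, assuming integer keys. */
#include <stddef.h>
#include <stdio.h>

#define TTREE_CAPACITY 8   /* keys per node; real systems size this to the cache */

typedef struct ttree_node {
    int keys[TTREE_CAPACITY];      /* sorted keys stored in this node              */
    int nkeys;                     /* number of keys currently stored              */
    struct ttree_node *left;       /* subtree with keys smaller than keys[0]       */
    struct ttree_node *right;      /* subtree with keys larger than keys[nkeys-1]  */
} ttree_node;

/* Returns 1 if key is present, 0 otherwise. */
int ttree_search(const ttree_node *n, int key) {
    while (n != NULL && n->nkeys > 0) {
        if (key < n->keys[0]) {                      /* below this node's range  */
            n = n->left;
        } else if (key > n->keys[n->nkeys - 1]) {    /* above this node's range  */
            n = n->right;
        } else {                                     /* bounded: search in node  */
            int lo = 0, hi = n->nkeys - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if (n->keys[mid] == key) return 1;
                if (n->keys[mid] < key) lo = mid + 1; else hi = mid - 1;
            }
            return 0;
        }
    }
    return 0;
}

int main(void) {
    ttree_node leaf = { {5, 9, 12}, 3, NULL, NULL };
    ttree_node root = { {20, 30, 40}, 3, &leaf, NULL };
    printf("%d %d\n", ttree_search(&root, 9), ttree_search(&root, 25));  /* 1 0 */
    return 0;
}
```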


Author(s):  
Anshita Garg

This is a research-based project, and the basic motivation behind it is learning and implementing algorithms that reduce time and space complexity. In the first part of the project, we reduce the time taken to search a given record by using a B/B+ tree rather than traditional indexing and sequential access. Disk access times are much slower than main memory access times: typical seek times and rotational delays are on the order of 5 to 6 milliseconds, and typical data transfer rates are in the range of 5 to 10 million bytes per second, so main memory access is likely to be at least 4 or 5 orders of magnitude faster than disk access on any given system. The objective, therefore, is to minimize the number of disk accesses, and this project is concerned with techniques for achieving that objective, i.e., techniques for arranging data on the disk so that any required piece of data, say some specific record, can be located in as few I/Os as possible. In the second part of the project, dynamic programming problems were solved in four variations: Recursion, Recursion with Storage, Iteration with Storage, and Iteration with Smaller Storage. The problems solved in these four variations are Fibonacci, Count Maze Path, Count Board Path, and Longest Common Subsequence. Each variation improves on the previous one, so time and space complexity are reduced significantly as we move from Recursion to Iteration with Smaller Storage; the sketch below illustrates this progression.
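As a concrete illustration of the four variations named above, here is a short C sketch using Fibonacci (the other problems follow the same progression from plain recursion to constant-space iteration); the 64-bit result type and the choice of n are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* 1. Recursion: exponential time, no extra storage. */
uint64_t fib_rec(int n) {
    if (n < 2) return (uint64_t)n;
    return fib_rec(n - 1) + fib_rec(n - 2);
}

/* 2. Recursion with storage (memoization): linear time, O(n) space. */
static uint64_t fib_memo_helper(int n, uint64_t *memo) {
    if (n < 2) return (uint64_t)n;
    if (memo[n] != 0) return memo[n];
    return memo[n] = fib_memo_helper(n - 1, memo) + fib_memo_helper(n - 2, memo);
}
uint64_t fib_memo(int n) {
    uint64_t memo[94] = {0};   /* fib(93) is the largest value that fits in 64 bits */
    return fib_memo_helper(n, memo);
}

/* 3. Iteration with storage (tabulation): linear time, O(n) space. */
uint64_t fib_table(int n) {
    uint64_t dp[94] = {0};
    dp[1] = 1;
    for (int i = 2; i <= n; i++) dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}

/* 4. Iteration with smaller storage: linear time, O(1) space. */
uint64_t fib_small(int n) {
    uint64_t a = 0, b = 1;
    for (int i = 0; i < n; i++) { uint64_t next = a + b; a = b; b = next; }
    return a;
}

int main(void) {
    int n = 30;
    printf("%llu %llu %llu %llu\n",
           (unsigned long long)fib_rec(n), (unsigned long long)fib_memo(n),
           (unsigned long long)fib_table(n), (unsigned long long)fib_small(n));
    return 0;
}
```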


2016 ◽  
Vol 4 (1) ◽  
pp. 1-4
Author(s):  
Aman Agarwal ◽  
Arjun J Anil ◽  
Rahul Nair ◽  
K. Sivasankaran

Direct Memory Access (DMA) is a method of transferring data between peripherals and memory without using the CPU. It is designed to improve system performance by allowing external devices to transfer information directly to and from system memory. An asynchronous type of DMA is generally used, as it responds directly to its inputs. The DMA controller issues signals to the peripheral device and to main memory to execute read and write commands. In this paper, a DMA controller was designed using Verilog HDL and simulated in Cadence NC Launch. The design was synthesized using low-power constraints. Through this design, we decreased the power consumption to 69%.
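For readers unfamiliar with how software drives such a controller, here is a hedged C sketch of programming a hypothetical memory-mapped DMA engine from the CPU side: the CPU writes source, destination, and length registers, sets a start bit, and the controller then moves the data on its own. The register map, base address, and bit definitions are invented for illustration and are not the register interface of the design described in the paper.

```c
#include <stdint.h>

/* Hypothetical memory-mapped register layout of a simple DMA controller. */
#define DMA_BASE     0x40001000u
#define DMA_SRC      (*(volatile uint32_t *)(DMA_BASE + 0x00))  /* source address      */
#define DMA_DST      (*(volatile uint32_t *)(DMA_BASE + 0x04))  /* destination address */
#define DMA_LEN      (*(volatile uint32_t *)(DMA_BASE + 0x08))  /* transfer length     */
#define DMA_CTRL     (*(volatile uint32_t *)(DMA_BASE + 0x0C))  /* control register    */
#define DMA_STATUS   (*(volatile uint32_t *)(DMA_BASE + 0x10))  /* status register     */
#define DMA_CTRL_START   (1u << 0)
#define DMA_STATUS_DONE  (1u << 0)

/* Ask the controller to copy len bytes from src to dst, then wait for completion.
 * The CPU only sets up the transfer; the DMA controller moves the data itself. */
void dma_copy(uint32_t src, uint32_t dst, uint32_t len) {
    DMA_SRC  = src;
    DMA_DST  = dst;
    DMA_LEN  = len;
    DMA_CTRL = DMA_CTRL_START;                       /* kick off the transfer        */
    while ((DMA_STATUS & DMA_STATUS_DONE) == 0)
        ;                                            /* poll until the transfer ends */
}
```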

