memory request
Recently Published Documents

TOTAL DOCUMENTS: 10 (five years: 4)
H-INDEX: 2 (five years: 0)

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Severine P. Parois ◽  
Susan D. Eicher ◽  
Stephen R. Lindemann ◽  
Jeremy N. Marchant

Abstract
The influence of feed supplements on behavior and memory has recently been studied in livestock. The objectives of this study were to evaluate the effects of a synbiotic on an episodic-like memory test (SOR: Spontaneous Object Recognition), a working memory test (BARR: Fence barrier task), a long-term memory test (TMAZE: Spatial T-maze task), and on gut microbiota composition. Eighteen female piglets were supplemented from 1 to 28 days of age with a synbiotic (SYN), while 17 served as controls (CTL). Feces were collected on days 16, 33 and 41 for 16S rRNA gene composition analyses. In the SOR, SYN piglets interacted more quickly with the novel object than CTL piglets. In the BARR, SYN piglets covered shorter distances to complete the test in trial 3. In the TMAZE, SYN piglets were quicker to succeed on specific days and tended to try the newly rewarded arm earlier during the reversal stage. Differences in microbiota composition between treatments were absent on D16, showed a tendency on D33, and were significant on D41. The synbiotic supplement may confer memory advantages in different cognitive tasks, regardless of the nature of the reward and the memory request. The difference in memory abilities can potentially be explained by differences in microbiota composition.


2020 ◽  
Vol 76 (4) ◽  
pp. 3129-3154
Author(s):  
Juan Fang ◽  
Mengxuan Wang ◽  
Zelin Wei

Abstract
Multiple CPUs and GPUs are integrated on the same chip and share memory, so access requests from different cores interfere with one another. Memory requests from the GPU seriously degrade CPU memory access performance; requests from multiple CPU applications are also intertwined when accessing memory, which greatly affects their performance; and the difference in access latency between GPU cores increases the average memory access latency. To address these problems in the shared memory of heterogeneous multi-core systems, we propose a step-by-step memory scheduling strategy that improves system performance. When the memory controller receives a memory request, the strategy first places it in a queue created according to the request source, isolating CPU requests from GPU requests and thereby preventing GPU requests from interfering with CPU requests. Then, for the CPU request queue, a dynamic bank partitioning strategy maps each application to a different bank set according to its memory access characteristics, eliminating memory request interference among CPU applications without affecting bank-level parallelism. Finally, for the GPU request queue, a criticality metric is introduced to measure the difference in memory access latency between cores; building on the first-ready, first-come first-served (FR-FCFS) policy, we implement criticality-aware memory scheduling to balance the locality and criticality of application accesses.
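The abstract above outlines three steps: source-based queue isolation, dynamic bank partitioning for CPU applications, and criticality-aware FR-FCFS for GPU requests. The Python sketch below only illustrates that structure under simplifying assumptions; it is not the authors' implementation, and the names (MemRequest, StepScheduler), the round-robin bank partition, and the row-hit/criticality ordering are illustrative.

```python
# Sketch only: a simplified view of the step-by-step scheduling idea.
from collections import deque
from dataclasses import dataclass

@dataclass
class MemRequest:
    source: str          # "CPU" or "GPU"
    core_id: int         # issuing CPU application or GPU core
    bank: int
    row: int
    arrival: int
    criticality: int = 0  # used only for GPU requests

class StepScheduler:
    def __init__(self, num_banks, num_cpu_apps):
        # Step 1: separate queues so GPU traffic cannot delay CPU traffic.
        self.cpu_queue = deque()
        self.gpu_queue = deque()
        # Step 2 (simplified): a round-robin bank partition per CPU application;
        # the paper instead derives the mapping dynamically from each
        # application's memory characteristics.
        self.cpu_banks = {app: [b for b in range(num_banks) if b % num_cpu_apps == app]
                          for app in range(num_cpu_apps)}
        self.open_row = {}  # bank -> currently open row

    def enqueue(self, req):
        if req.source == "CPU":
            # Remap the bank into the issuing application's private bank set,
            # so CPU applications do not contend for the same banks.
            banks = self.cpu_banks[req.core_id]
            req.bank = banks[req.bank % len(banks)]
            self.cpu_queue.append(req)
        else:
            self.gpu_queue.append(req)

    def pick_gpu(self):
        # Step 3 (simplified): criticality-aware FR-FCFS -- prefer row hits,
        # then higher criticality, then older requests.
        if not self.gpu_queue:
            return None
        best = max(self.gpu_queue,
                   key=lambda r: (self.open_row.get(r.bank) == r.row,
                                  r.criticality,
                                  -r.arrival))
        self.gpu_queue.remove(best)
        self.open_row[best.bank] = best.row
        return best
```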


Author(s):  
Roberto Cavicchioli ◽  
Nicola Capodieci ◽  
Marco Solieri ◽  
Marko Bertogna ◽  
Paolo Valente ◽  
...  
Keyword(s):  

2018 ◽  
Vol 27 (5) ◽  
pp. 985-994 ◽  
Author(s):  
Jun Zhang ◽  
Yanxiang He ◽  
Fanfan Shen ◽  
Qing'an Li ◽  
Hai Tan
Keyword(s):  

2015 ◽  
Vol 2015 ◽  
pp. 1-10
Author(s):  
Jianliang Ma ◽  
Jinglei Meng ◽  
Tianzhou Chen ◽  
Minghui Wu

Ultra-high thread-level parallelism in modern GPUs usually issues numerous memory requests simultaneously, so plenty of memory requests are always waiting at each bank of the shared LLC (the L2 in this paper) and of global memory. For global memory, various schedulers have already been developed to adjust the request sequence, but little work has focused on the service order at the shared LLC. We measured that many GPU applications always queue at the LLC banks for service, which provides an opportunity to optimize the service order at the LLC. By adjusting the service order of GPU memory requests, we can improve the schedulability of the SMs, so this paper proposes a critical-aware shared LLC request scheduling algorithm (CaLRS). How the priority of a memory request is represented is central to CaLRS: we represent the criticality of each warp by the number of its memory requests that have not been serviced when a request arrives at the shared LLC bank. Experiments show that the proposed scheme effectively boosts SM schedulability by promoting the scheduling priority of memory requests with high criticality, and thereby indirectly improves GPU performance.
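CaLRS hinges on the per-warp count of unserviced requests described above. The Python sketch below shows how that bookkeeping might look at a single LLC bank; it is not the paper's implementation, the class and method names (LLCBankQueue, arrive, service) are illustrative, and the priority is frozen at the moment a request arrives rather than updated afterwards.

```python
# Sketch only: per-bank queue ordered by warp criticality at arrival time.
import heapq
import itertools

class LLCBankQueue:
    def __init__(self):
        self._heap = []                  # entries: (-criticality, seq, warp_id, addr)
        self._seq = itertools.count()    # FIFO tie-breaker among equal criticality
        self._outstanding = {}           # warp_id -> unserviced requests at this bank

    def arrive(self, warp_id, addr):
        # Criticality of the warp = number of its requests still unserviced
        # when this request reaches the bank (the metric from the abstract).
        self._outstanding[warp_id] = self._outstanding.get(warp_id, 0) + 1
        crit = self._outstanding[warp_id]
        heapq.heappush(self._heap, (-crit, next(self._seq), warp_id, addr))

    def service(self):
        # Serve the highest-criticality request first, oldest on ties.
        if not self._heap:
            return None
        _, _, warp_id, addr = heapq.heappop(self._heap)
        self._outstanding[warp_id] -= 1
        return warp_id, addr
```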


2011 ◽  
Vol 2011 ◽  
pp. 1-21 ◽  
Author(s):  
Field Cady ◽  
Yi Zhuang ◽  
Mor Harchol-Balter

We provide a stochastic analysis of hard disk performance, including a closed form solution for the average access time of a memory request. The model we use covers a wide range of types and applications of disks, and in particular it captures modern innovations like zone bit recording. The derivation is based on an analytical technique we call “shuffling”, which greatly simplifies the analysis relative to previous work and provides a simple, easy-to-use formula for the average access time. Our analysis can predict performance of single disks for a wide range of disk types and workloads. Furthermore, it can predict the performance benefits of several optimizations, including short stroking and mirroring, which are common in disk arrays.
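The closed-form expression derived via “shuffling” is not reproduced in the abstract. For background only, the classical decomposition that such models refine is sketched below in LaTeX; it is not the paper's formula, and it assumes the target sector is uniformly distributed around the track.

```latex
% Background only: expected access time split into seek, rotational latency,
% and transfer time; not the closed form derived in the paper above.
\[
  \mathbb{E}[T_{\mathrm{access}}]
    = \mathbb{E}[T_{\mathrm{seek}}]
    + \mathbb{E}[T_{\mathrm{rot}}]
    + \mathbb{E}[T_{\mathrm{xfer}}],
  \qquad
  \mathbb{E}[T_{\mathrm{rot}}] = \tfrac{1}{2}\,T_{\mathrm{rev}}.
\]
% With zone bit recording, E[T_xfer] depends on the zone: outer tracks hold
% more sectors per revolution, so they transfer data faster.
```

Optimizations such as short stroking act on individual terms; restricting requests to a subset of cylinders, for example, reduces the expected seek time.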

