spatial locality
Recently Published Documents


TOTAL DOCUMENTS: 132 (five years: 28)

H-INDEX: 15 (five years: 3)

2022 ◽  
Vol 21 (1) ◽  
pp. 1-22
Author(s):  
Dongsuk Shin ◽  
Hakbeom Jang ◽  
Kiseok Oh ◽  
Jae W. Lee

A long battery life is a first-class design objective for mobile devices, and main memory accounts for a major portion of total energy consumption. Moreover, memory energy consumption is expected to increase further with ever-growing demands for bandwidth and capacity. A hybrid memory system with both DRAM and PCM is an attractive solution that provides additional capacity and reduces standby energy. Although PCM provides much greater density than DRAM, its longer access latency and limited write endurance make it challenging to use for main memory. To address this challenge, this article introduces CAMP, a novel DRAM cache architecture for mobile platforms with PCM-based main memory. A DRAM cache in this environment must filter most of the writes to PCM to increase its lifetime, and it must deliver high efficiency even at the relatively small capacity that mobile platforms can afford. To this end, CAMP divides the DRAM space into two regions: a page cache that exploits spatial locality in a bandwidth-efficient manner and a dirty block buffer that maximally filters writes. CAMP improves performance and energy-delay product by 29.2% and 45.2%, respectively, over a baseline PCM-oblivious DRAM cache, while increasing PCM lifetime by 2.7×. It also improves performance and energy-delay product by 29.3% and 41.5%, respectively, over a state-of-the-art design with a dirty block buffer, while increasing PCM lifetime by 2.5×.
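A minimal sketch of the dirty-block-buffer idea this abstract describes: a small buffer absorbs and coalesces repeated writes so that far fewer reach PCM. All names, sizes, and the LRU policy here are illustrative assumptions, not the paper's actual design.

```python
# Toy write filter: dirty blocks stay in a small LRU buffer and are
# written to PCM only on eviction, so repeated writes to hot blocks
# collapse into a single PCM write.
from collections import OrderedDict

class DirtyBlockBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block address -> dirty data
        self.pcm_writes = 0           # writes that actually reached PCM

    def write(self, addr, data):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)    # coalesce a repeated write
        elif len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict LRU block to PCM
            self.pcm_writes += 1
        self.blocks[addr] = data

    def flush(self):
        self.pcm_writes += len(self.blocks)
        self.blocks.clear()

buf = DirtyBlockBuffer(capacity=4)
for addr in [0, 1, 0, 1, 0, 1]:   # hot blocks: writes coalesce in the buffer
    buf.write(addr, b"x")
buf.flush()
print(buf.pcm_writes)  # 2: six writes collapsed into two PCM writes
```

The write-coalescing effect is exactly why such a buffer extends PCM lifetime: endurance is consumed per PCM write, not per CPU store.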


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lu Wang ◽  
Nan Xu ◽  
Jiangdian Song

Abstract

Background: Current intra-tumoral heterogeneous feature extraction in radiology is limited to the use of a single slice or the region of interest within a few context-associated slices, and the decoding of intra-tumoral spatial heterogeneity using whole tumor samples is rare. We aim to propose a mathematical model of space-filling-curve-based spatial correspondence mapping to interpret intra-tumoral spatial locality and heterogeneity.

Methods: A Hilbert curve-based approach was employed to decode and visualize intra-tumoral spatial heterogeneity by expanding the tumor volume to a two-dimensional (2D) matrix in voxels while preserving the spatial locality of neighboring voxels. The proposed method was validated using three-dimensional (3D) volumes constructed from lung nodules in the LIDC-IDRI dataset, regular axial plane images, and 3D blocks.

Results: Dimensionality reduction of the Hilbert volume with a single regular axial plane image showed a sparse and scattered pixel distribution on the corresponding 2D matrix. However, for 3D blocks and the lung tumors inside the volume, the dimensionality reduction to the 2D matrix yielded regular and concentrated squares and rectangles. For classification of lung nodules from the LIDC-IDRI dataset into benign and malignant masses, an Inception-V4 network achieved higher accuracy on the Hilbert matrix images than on the original CT images of the test dataset (85.54% vs. 73.22%, p < 0.001).

Conclusions: Our study indicates that Hilbert curve-based spatial correspondence mapping is promising for decoding the intra-tumoral spatial heterogeneity of partial or whole tumor samples on radiological images. This spatial-locality-preserving approach to voxel expansion enables existing radiomics and convolutional neural networks to filter structured and spatially correlated high-dimensional intra-tumoral heterogeneity.
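The voxel-expansion step hinges on the locality-preserving property of the Hilbert curve. A minimal sketch (2D for brevity; the paper traverses 3D volumes), using the classic iterative index-to-coordinate mapping: consecutive curve indices always land on grid-adjacent cells, which is exactly the property that keeps neighboring voxels near each other after expansion.

```python
def d2xy(n, d):
    """Map Hilbert-curve index d to (x, y) on an n x n grid (n a power of 2).
    Classic iterative construction; consecutive d land on adjacent cells."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                              # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

coords = [d2xy(8, d) for d in range(64)]
# spatial locality is preserved: every step moves to a 4-neighbor
assert all(abs(x1 - x2) + abs(y1 - y2) == 1
           for (x1, y1), (x2, y2) in zip(coords, coords[1:]))
```

Walking the 3D analogue of this curve through a tumor volume and writing the visited voxels row-by-row into a 2D matrix gives the "Hilbert matrix image" the abstract evaluates.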


2021 ◽  
Vol 11 (14) ◽  
pp. 6623
Author(s):  
Chi-Hsiu Su ◽  
Chin-Hsien Wu

Compared with traditional hard-disk drives (HDDs), solid-state drives (SSDs) built on NAND flash memory have become the popular storage devices. However, when the free space in NAND flash memory runs low, garbage collection is triggered to reclaim space. Garbage collection involves a large amount of data copying and time-consuming erase operations, which degrade the performance of NAND flash memory. Therefore, DRAM is usually added to NAND flash memory as a cache for frequently used data. Typical cache methods mainly exploit temporal and spatial locality to keep frequently used data in the cache as long as possible. In addition, we find that beyond temporal/spatial locality, there are also associations between accessed data. We therefore suggest that a cache policy should consider not only temporal/spatial locality but also the association relationships between accessed data to improve the cache hit ratio. In this paper, we propose a cache policy based on request association analysis for reliable NAND-based storage systems. Experimental results show that the cache hit ratio of the proposed method increases significantly compared with typical cache methods.
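One way to picture "request association analysis" is a table counting which block tends to be requested right after each block, used to prefetch the likely successor. This is a toy sketch of that general idea, not the paper's actual policy; the table structure and prefetch rule are assumptions.

```python
# LRU cache augmented with a successor-frequency table: on each access,
# record (previous block -> current block), and prefetch the block most
# often seen after the current one.
from collections import defaultdict, OrderedDict

class AssociationCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()                     # block -> present
        self.successors = defaultdict(lambda: defaultdict(int))
        self.prev = None
        self.hits = self.accesses = 0

    def _insert(self, blk):
        if blk in self.cache:
            self.cache.move_to_end(blk)
            return
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)             # evict LRU block
        self.cache[blk] = True

    def access(self, blk):
        self.accesses += 1
        if blk in self.cache:
            self.hits += 1
        self._insert(blk)
        if self.prev is not None:
            self.successors[self.prev][blk] += 1       # learn the association
        if self.successors[blk]:
            likely = max(self.successors[blk], key=self.successors[blk].get)
            self._insert(likely)                       # prefetch likely successor
        self.prev = blk

cache = AssociationCache(capacity=2)
for blk in [1, 2, 3] * 5:
    cache.access(blk)
# a plain 2-entry LRU never hits on this cyclic trace; once the
# associations are learned, every later access hits
```

The cyclic trace is the worst case for pure LRU, which illustrates why exploiting inter-request associations can raise the hit ratio beyond what locality alone provides.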


Author(s):  
Jing Yan ◽  
Yujuan Tan ◽  
Zhulin Ma ◽  
Jingcheng Liu ◽  
Xianzhang Chen ◽  
...  

The translation lookaside buffer (TLB) is critical to the performance of modern multi-level memory systems. However, due to its limited size, the TLB's address coverage is limited. Adopting a two-level exclusive TLB hierarchy can increase coverage [M. Swanson, L. Stoller and J. Carter, Increasing TLB reach using superpages backed by shadow memory, 25th Annual Int. Symp. Computer Architecture (1998); H.P. Chang, T. Heo, J. Jeong and J. Huh, Hybrid TLB coalescing: Improving TLB translation coverage under diverse fragmented memory allocations, ACM SIGARCH Comput. Arch. News 45 (2017) 444–456] and thus improve memory performance. However, after analyzing existing two-level exclusive TLBs, we find that a large number of "dead" entries (entries with no further use) reside in the last-level TLB (LLT) for a long time, occupying much of its space and lowering the TLB hit rate. Based on this observation, we propose exploiting temporal and spatial locality to predict and identify dead entries in the exclusive LLT and remove them as early as possible, leaving room for more valid entries and increasing TLB hit rates. Extensive experiments show that our method increases the average hit rate by 8.67% (up to a maximum of 19.95%) and reduces total latency by an average of 9.82% (up to 24.41%).
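A crude way to approximate dead-entry prediction is a reuse window: an entry not re-referenced within some number of accesses is declared dead and reclaimed before normal LRU eviction would reach it. This sketch illustrates that mechanism only; the window threshold and single-level structure are assumptions, not the paper's predictor.

```python
# Toy last-level TLB: entries untouched for more than `dead_after`
# lookups are reaped as "dead", freeing slots for live translations.
class DeadEntryTLB:
    def __init__(self, capacity, dead_after):
        self.capacity = capacity
        self.dead_after = dead_after
        self.entries = {}   # virtual page number -> last-access timestamp
        self.clock = 0
        self.hits = self.lookups = 0

    def lookup(self, vpn):
        self.clock += 1
        self.lookups += 1
        # reap entries predicted dead (no reuse within the window)
        for v in [v for v, t in self.entries.items()
                  if self.clock - t > self.dead_after]:
            del self.entries[v]
        if vpn in self.entries:
            self.hits += 1
        elif len(self.entries) >= self.capacity:
            lru = min(self.entries, key=self.entries.get)
            del self.entries[lru]   # evict least recently used survivor
        self.entries[vpn] = self.clock

tlb = DeadEntryTLB(capacity=4, dead_after=3)
for vpn in [1, 1, 2, 3, 4, 1]:
    tlb.lookup(vpn)
# the second access to page 1 hits; by clock 6 page 1 has gone unused
# for 4 lookups, so it was reaped as dead and misses again
```

The trade-off the abstract quantifies is visible even here: reaping too eagerly evicts entries that would have been reused, while reaping too lazily leaves dead entries squatting on capacity.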


2021 ◽  
Vol 13 (9) ◽  
pp. 1661
Author(s):  
Rasha S. Gargees ◽  
Grant J. Scott

In the era of big data, where massive amounts of remotely sensed imagery can be obtained from various satellites while the surface of the Earth changes rapidly, new techniques for large-scale change detection are necessary to facilitate timely and effective human understanding of natural and human-made phenomena. In this research, we propose a chip-based change detection method that uses deep neural networks to extract visual features. These features are transformed into deep orthogonal visual features that are then clustered based on land cover characteristics. The resulting chip cluster memberships allow change analysis at arbitrary levels of detail and can also support agglomerations over irregular geospatial extents. The proposed method naturally supports cross-resolution temporal scenes without requiring normalization of pixel resolution across scenes or pixel-level coregistration. This is achieved with configurable spatial-locality comparisons between years, where the aperture of a unit of measure can be a single chip, a small neighborhood of chips, or a large irregular geospatial region. The performance of our proposed method has been validated using various quantitative and statistical metrics, in addition to visual geo-maps and change percentages. The results show that our proposed method efficiently detects change over large-scale areas.
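The chip-level comparison reduces to a simple test: assign each chip's feature vector to the nearest land-cover cluster in each year, and flag the chips whose membership changed. This is a toy illustration of that comparison only; the feature vectors, centroids, and two-cluster setup are made-up assumptions (the paper derives features from deep networks and clusters them).

```python
# Flag changed chips by comparing nearest-centroid cluster labels
# between two years.
import math

def nearest_cluster(feature, centroids):
    return min(range(len(centroids)),
               key=lambda i: math.dist(feature, centroids[i]))

centroids = [(0.0, 0.0), (1.0, 1.0)]           # e.g. water vs. vegetation
year_a = [(0.1, 0.0), (0.9, 1.1), (0.2, 0.1)]  # per-chip features, year 1
year_b = [(0.1, 0.1), (0.9, 1.0), (0.9, 0.8)]  # same chips, year 2

labels_a = [nearest_cluster(f, centroids) for f in year_a]
labels_b = [nearest_cluster(f, centroids) for f in year_b]
changed = [i for i, (a, b) in enumerate(zip(labels_a, labels_b)) if a != b]
print(changed)  # chip 2 moved from cluster 0 to cluster 1
```

Because the comparison happens at the chip level rather than the pixel level, the two years' scenes never need matching resolutions or pixel-accurate coregistration, which is the flexibility the abstract emphasizes.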


2021 ◽  
Vol 103 (15) ◽  
Author(s):  
Minjae Kim ◽  
Hu Miao ◽  
Sangkook Choi ◽  
Manuel Zingl ◽  
Antoine Georges ◽  
...  

2021 ◽  
Vol 18 (3) ◽  
pp. 1-23
Author(s):  
Wim Heirman ◽  
Stijn Eyerman ◽  
Kristof Du Bois ◽  
Ibrahim Hur

Sparse memory accesses, which are scattered accesses to single elements of a large data structure, are a challenge for current processor architectures. Their lack of spatial and temporal locality and their irregularity make caches and traditional stream prefetchers ineffective. Furthermore, performing standard caching and prefetching on sparse accesses wastes precious memory bandwidth and thrashes caches, deteriorating performance for regular accesses. Bypassing prefetchers and caches for sparse accesses, and fetching only a single element (e.g., 8 B) from main memory (a subline access), can solve these issues. Deciding which accesses to handle as sparse and which as regular cached accesses is a challenging task with a large potential impact on performance. Not only does treating sparse accesses as regular accesses reduce performance; failing to cache accesses that do have locality also hurts performance by significantly increasing their latency and bandwidth consumption. Furthermore, this decision depends on the dynamic environment, such as input set characteristics and system load, making a static decision by the programmer or compiler suboptimal. We propose the Instruction Spatial Locality Estimator (ISLE), a hardware detector that finds instructions that access isolated words in a sea of unused data. These sparse accesses are dynamically converted into uncached subline accesses, while regular accesses remain cached. ISLE does not require modifying source code or binaries, and it adapts automatically to a changing environment (input data, available bandwidth, etc.). We apply ISLE to a graph analytics processor running sparse graph workloads and show that ISLE outperforms no subline accesses, manual sublining, and prior work on detecting sparse accesses.
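The core signal such a detector needs is per-instruction: of each cache line an instruction's loads touch, how many distinct words does it actually use? A sketch of that measurement (thresholds, line/word sizes, and the averaging rule are illustrative assumptions, not ISLE's actual hardware design):

```python
# Per-PC spatial-locality estimate: count distinct words touched per
# cache line; a PC whose average falls below the threshold is treated
# as sparse and would be steered to uncached subline accesses.
from collections import defaultdict

LINE_BYTES, WORD_BYTES = 64, 8

class SpatialLocalityEstimator:
    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.words = defaultdict(lambda: defaultdict(set))  # pc -> line -> words

    def record(self, pc, addr):
        line, word = addr // LINE_BYTES, (addr % LINE_BYTES) // WORD_BYTES
        self.words[pc][line].add(word)

    def is_sparse(self, pc):
        lines = self.words[pc]
        if not lines:
            return False
        avg_words = sum(len(w) for w in lines.values()) / len(lines)
        return avg_words < self.threshold

est = SpatialLocalityEstimator()
for addr in range(0, 256, 8):        # streaming load: touches every word
    est.record(pc=0x400, addr=addr)
for addr in (0, 512, 4096, 9000):    # gather load: one word per line
    est.record(pc=0x404, addr=addr)
print(est.is_sparse(0x400), est.is_sparse(0x404))  # False True
```

Keying the estimate by instruction address is what lets the same hardware cache a matrix's row pointers normally while sublining the irregular neighbor-list gathers of a graph workload.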


2021 ◽  
Author(s):  
Chun-Hsiang Tang ◽  
Christina W. Tsai

Abstract

Most time series in nature are nonlinear and nonstationary, particularly those affected by climate change. Taiwan, too, has experienced frequent drought events in recent years. However, droughts are natural disasters that arrive without clear warnings and whose influences are cumulative, so detecting and analyzing the drought phenomenon remains difficult. To deal with this problem, this study introduces Multi-dimensional Ensemble Empirical Mode Decomposition (MEEMD) to analyze temperature and rainfall data from 1975–2018; MEEMD is a powerful method developed for the time-frequency analysis of nonlinear, nonstationary time series. The method can analyze both the spatial locality and the temporal locality of signals and decompose multi-dimensional time series into several Intrinsic Mode Functions (IMFs). From the set of IMFs, the meaningful instantaneous frequencies and the trend of the signals can be observed. Considering stochastic and deterministic influences, and to enhance accuracy, this study also regroups the IMFs into two components, stochastic and deterministic, by the coefficient of auto-correlation.

This study discusses the influences of temperature and precipitation on drought events. Furthermore, to reduce the significant impact of drought events, it also attempts to forecast short-term drought occurrences via an artificial neural network. Based on the CMIP5 model, the study also investigates the trend and variability of drought events and warming under different climatic scenarios.

Keywords: Multi-dimensional Ensemble Empirical Mode Decomposition (MEEMD), Intrinsic Mode Function (IMF), Drought
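The IMF regrouping step can be sketched simply: label each IMF stochastic or deterministic by its lag-1 autocorrelation, then sum each group back into a component. The 0.5 threshold and this particular autocorrelation criterion are assumptions for illustration; the study's exact coefficient and cutoff are not stated here.

```python
# Split IMFs into stochastic (noise-like, low lag-1 autocorrelation)
# and deterministic (persistent, high lag-1 autocorrelation) components.
def lag1_autocorr(x):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    return cov / var if var else 0.0

def split_imfs(imfs, threshold=0.5):
    stochastic = [f for f in imfs if lag1_autocorr(f) < threshold]
    deterministic = [f for f in imfs if lag1_autocorr(f) >= threshold]
    n = len(imfs[0])
    combine = lambda group: [sum(f[i] for f in group) for i in range(n)]
    return combine(stochastic), combine(deterministic)

noisy = [1, -1, 1, -1, 1, -1, 1, -1]   # fast oscillation: negative lag-1 autocorr
trend = [0, 1, 2, 3, 4, 5, 6, 7]       # slow trend: high lag-1 autocorr
s, d = split_imfs([noisy, trend])
# s recovers the oscillatory IMF, d recovers the trend IMF
```

Reconstructing the two components separately lets the subsequent neural-network forecast treat the persistent (predictable) part differently from the noise-like part.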

