access latency
Recently Published Documents


TOTAL DOCUMENTS: 102 (FIVE YEARS: 16)
H-INDEX: 10 (FIVE YEARS: 1)

2021 ◽  
Vol 18 (9) ◽  
pp. 249-264
Author(s):  
Yang Liang ◽  
Zhigang Hu ◽  
Xinyu Zhang ◽  
Hui Xiao

2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Lincan Li ◽  
Chiew Foong Kwong ◽  
Qianyu Liu ◽  
Pushpendu Kar ◽  
Saeid Pourroostaei Ardakani

Mobile edge caching is an emerging approach to managing high mobile data traffic in fifth-generation wireless networks: it reduces content access latency and offloads data traffic from backhaul links. This paper proposes a novel cooperative caching policy based on long short-term memory (LSTM) neural networks that accounts for both the characteristics of the heterogeneous network layers and user moving speed. Specifically, LSTM is applied to predict content popularity, and a size-weighted content popularity metric is used to balance the predicted popularity of an item against its size. We also consider the moving speeds of mobile users and introduce a two-level caching architecture consisting of several small base stations (SBSs) and macro base stations (MBSs). Because fast-moving users hand over frequently among SBSs, their requests would distort the content popularity distribution observed at any single SBS; fast-moving users are therefore served by MBSs regardless of which SBS coverage area they are in. SBSs serve low-speed users, and SBSs in the same cluster can communicate with one another. Simulation results show that, compared with common caching methods such as least frequently used (LFU) and least recently used (LRU), the proposed policy achieves at least 8.9% lower average content access latency and at least a 6.8% higher offloading ratio.
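
As a minimal sketch of the size-weighted popularity and speed-based assignment ideas above (the weighting function, the speed threshold, and the greedy placement are illustrative assumptions, not the authors' exact formulation):

```python
# Hypothetical sketch: size-weighted popularity caching with speed-based
# user assignment. All names and thresholds are assumptions for illustration.

def size_weighted_popularity(predicted_popularity: float, size: float) -> float:
    """Balance predicted popularity against content size: of two equally
    popular items, the smaller one scores higher, so more items fit."""
    return predicted_popularity / size

def assign_serving_station(user_speed_kmh: float, threshold_kmh: float = 60.0) -> str:
    """Fast-moving users are pinned to the MBS so their requests do not
    distort the popularity distribution observed at any single SBS."""
    return "MBS" if user_speed_kmh >= threshold_kmh else "SBS"

def cache_decision(catalog: list[dict], cache_capacity: float) -> list[str]:
    """Greedy placement: cache the highest size-weighted popularity items
    that still fit (a simplified, assumed policy)."""
    ranked = sorted(catalog,
                    key=lambda c: size_weighted_popularity(c["pop"], c["size"]),
                    reverse=True)
    cached, used = [], 0.0
    for item in ranked:
        if used + item["size"] <= cache_capacity:
            cached.append(item["id"])
            used += item["size"]
    return cached

if __name__ == "__main__":
    catalog = [
        {"id": "a", "pop": 0.9, "size": 4.0},
        {"id": "b", "pop": 0.5, "size": 1.0},
        {"id": "c", "pop": 0.4, "size": 0.5},
    ]
    print(cache_decision(catalog, cache_capacity=2.0))  # -> ['c', 'b']
    print(assign_serving_station(80.0))                 # -> 'MBS'
```

In the paper's scheme the popularity input would come from the LSTM predictor rather than a fixed value, but the admission logic it feeds is of this shape.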


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 438
Author(s):  
Rongshan Wei ◽  
Chenjia Li ◽  
Chuandong Chen ◽  
Guangyu Sun ◽  
Minghua He

Specialized accelerator architectures have achieved great success and are a major trend in computer architecture development. However, because an accelerator's memory access patterns are relatively complicated, memory access performance is often poor, limiting the overall performance gains of hardware accelerators. Moreover, memory controllers tailored to hardware accelerators have scarcely been researched. We argue that a dedicated memory controller is essential for improving accelerator memory access performance. To this end, we propose NNAMC, a dynamic random access memory (DRAM) controller for neural network accelerators that monitors the accelerator's memory access stream and steers it to the bank whose address mapping scheme best matches its access characteristics. NNAMC includes a stream access prediction unit (SAPU) that analyzes, in hardware, the type of data stream the accelerator is accessing, and it designs per-bank address mappings using a bank partitioning model (BPM). The mapping method and hardware architecture were analyzed on a practical neural network accelerator. In experiments, NNAMC achieved significantly lower access latency than competing address mapping schemes, increasing the row buffer hit ratio by 13.68% on average (up to 26.17%), reducing system access latency by 26.3% on average (up to 37.68%), and lowering hardware cost. We also confirmed that NNAMC adapts efficiently to different network parameters.
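
The benefit of stream-dependent address mapping can be illustrated with a small sketch; the bit widths and field orders below are assumptions for illustration, not NNAMC's actual configuration:

```python
# Hypothetical sketch of two DRAM address mapping schemes. Different access
# streams favor different mappings, which is the intuition behind steering
# each stream to a bank with a matching scheme.

ROW_BITS, BANK_BITS, COL_BITS = 14, 3, 10  # assumed geometry

def extract(addr: int, lo: int, width: int) -> int:
    """Return `width` bits of `addr` starting at bit position `lo`."""
    return (addr >> lo) & ((1 << width) - 1)

def map_row_bank_col(addr: int) -> tuple[int, int, int]:
    """Row:Bank:Column layout - a sequential stream stays inside one row,
    maximizing row-buffer hits."""
    col = extract(addr, 0, COL_BITS)
    bank = extract(addr, COL_BITS, BANK_BITS)
    row = extract(addr, COL_BITS + BANK_BITS, ROW_BITS)
    return row, bank, col

def map_bank_interleaved(addr: int) -> tuple[int, int, int]:
    """Row:Column:Bank layout - consecutive accesses rotate across banks,
    which hides precharge/activate latency for strided streams."""
    bank = extract(addr, 0, BANK_BITS)
    col = extract(addr, BANK_BITS, COL_BITS)
    row = extract(addr, BANK_BITS + COL_BITS, ROW_BITS)
    return row, bank, col

if __name__ == "__main__":
    # The same stream decodes very differently under the two schemes.
    for addr in range(0, 4096, 1024):
        print(hex(addr), map_row_bank_col(addr), map_bank_interleaved(addr))
```

A controller like the one described would pick between such mappings per bank, based on what its prediction unit observes about the stream.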


2021 ◽  
Vol 13 (1) ◽  
pp. 16
Author(s):  
Wei Li ◽  
Peng Sun ◽  
Rui Han

Information-centric networks (ICNs) have received wide interest from researchers, and in-network caching is one of their defining characteristics. Because cache nodes have limited space and Internet traffic is huge, the management and placement of content are essential. This paper focuses on coordinating two cache metrics, user access latency and network resource utilization, and proposes a hybrid scheme called the path segmentation-based hybrid caching scheme (PSBC). Each data transmission path is temporarily divided into a user-edge area and a non-edge area. The user-edge area adopts a heuristic caching scheme to reduce user access latency, whereas the non-edge area performs content migration and placement optimization to improve network resource utilization. Simulation results show that the proposed method improves both the cache hit ratio and access latency.
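
A minimal sketch of the path-segmentation idea follows; the segment depth, popularity threshold, and placement heuristics are assumptions standing in for PSBC's actual policies, which are more involved:

```python
# Hypothetical sketch: split a delivery path into user-edge and non-edge
# areas and apply different placement rules to each.

def segment_path(path: list[str], edge_depth: int = 2) -> tuple[list[str], list[str]]:
    """Split a path (source ... user) so the `edge_depth` nodes nearest
    the user form the user-edge area; the rest is the non-edge area."""
    return path[:-edge_depth], path[-edge_depth:]

def place_content(popularity: float, path: list[str],
                  edge_depth: int = 2, edge_threshold: float = 0.5) -> list[str]:
    """Edge nodes cache hot content greedily to cut user access latency;
    colder content gets a single copy in the non-edge area, nearer the
    source, to avoid redundant copies and save network resources."""
    non_edge, edge = segment_path(path, edge_depth)
    if popularity >= edge_threshold:
        return list(edge)          # hot: replicate close to users
    return non_edge[:1]            # cold: one copy near the source

if __name__ == "__main__":
    path = ["core1", "core2", "edge1", "edge2"]
    print(place_content(0.8, path))  # -> ['edge1', 'edge2']
    print(place_content(0.2, path))  # -> ['core1']
```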


Author(s):  
Byungmin Ahn ◽  
Taewhan Kim

A new algorithm is presented for extracting common kernels and convolutions to maximally eliminate the redundant operations among the convolutions in binary- and ternary-weight convolutional neural networks. Precisely, we propose (1) a new common kernel extraction algorithm that overcomes the local and limited exploration of common kernel candidates in the existing method, and subsequently apply (2) a new concept of common convolution extraction to maximally eliminate redundancy in the convolution operations. In addition, our algorithm can (3) be tuned to minimize the number of resulting kernels, thereby reducing the total memory access latency for kernels. Experimental results on ternary-weight VGG-16 demonstrate that our convolution optimization algorithm is highly effective, reducing the total number of operations across all convolutions by [Formula: see text] and the total number of execution cycles on the hardware platform by 22.4%, while using [Formula: see text] fewer kernels than convolutions built from the common kernels extracted by the state-of-the-art algorithm.
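
The common-kernel idea can be made concrete with a small sketch; the pairwise-overlap extraction below is a simplified stand-in for the paper's algorithm, which explores candidates more globally:

```python
# Hypothetical sketch: a "common kernel" keeps the weights where two
# ternary kernels (-1/0/+1) agree. Its partial sums can be computed once
# and reused by both convolutions, eliminating redundant operations.
import numpy as np

def common_kernel(k1: np.ndarray, k2: np.ndarray) -> np.ndarray:
    """Weights shared by both kernels; zero where they differ."""
    return np.where(k1 == k2, k1, 0)

def residual(kernel: np.ndarray, common: np.ndarray) -> np.ndarray:
    """Residual kernel such that kernel == common + residual."""
    return kernel - common

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k1 = rng.integers(-1, 2, size=(3, 3))
    k2 = rng.integers(-1, 2, size=(3, 3))
    c = common_kernel(k1, k2)
    r1, r2 = residual(k1, c), residual(k2, c)
    # By linearity, conv(x, k1) == conv(x, c) + conv(x, r1), and likewise
    # for k2: the convolution with `c` is computed once and shared.
    assert np.array_equal(k1, c + r1) and np.array_equal(k2, c + r2)
    print("weights shared by both kernels:", int(np.count_nonzero(c)))
```

The more weights the extracted common kernels cover, the more add/subtract operations the shared convolutions eliminate, which is what the proposed extraction is tuned to maximize.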


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1487
Author(s):  
Ayodele Periola ◽  
Akintunde Alonge ◽  
Kingsley Ogudo

Data from sensor-bearing satellites requires processing at terrestrial data centres, which use water for cooling, at the cost of high data-transfer latency. The reliance of terrestrial data centres on water increases their water footprint and limits the availability of water for other applications. Data centres with low data-transfer latency and reduced reliance on Earth's water resources are therefore required. This paper proposes space habitat data centres (SHDCs) that offer low-latency data transfer and use asteroid water to address these challenges. The paper investigates the feasibility of accessing asteroid water and the reduction in computing platform access latency. Results show that the mean asteroid water access period is 319.39 days. Using SHDCs instead of non-space computing platforms reduces access latency by 11.9–33.6% and increases accessible computing resources by 46.7–77% on average.


2020 ◽  
Vol 19 (9) ◽  
pp. 5924-5937
Author(s):  
Jian Jiao ◽  
Liang Xu ◽  
Shaohua Wu ◽  
Ye Wang ◽  
Rongxing Lu ◽  
...  


