low latency
Recently Published Documents

Total documents: 3746 (five years: 1476)
H-index: 60 (five years: 22)

2022 · Vol 22 (1) · pp. 1-21
Author(s): Cosmin Avasalcai, Christos Tsigkanos, Schahram Dustdar

Edge computing offers the possibility of deploying applications at the edge of the network. To take advantage of the distributed resources of available devices, applications are often structured as microservices, frequently with stringent requirements of low latency and high availability. However, the decentralized edge system that an application is intended for is characterized by high volatility, because the devices making up the system may be unreliable or leave the network unexpectedly. This makes it challenging to deploy an application and to assure that it continues to operate under volatility. We propose an adaptive framework capable of deploying and efficiently maintaining a microservice-based application at runtime by tackling two intertwined problems: (i) finding a microservice placement across device hosts and (ii) deriving invocation paths that serve it. Our objective is to maintain correct functionality by satisfying given requirements on end-to-end latency and availability in a volatile edge environment. We evaluate our solution quantitatively by considering performance and failure recovery.
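To make the two intertwined problems concrete, the following is a minimal, purely illustrative Python sketch of latency- and availability-aware placement: microservices are greedily assigned to edge devices while running estimates of end-to-end latency and availability stay within the stated requirements. All names and numbers are hypothetical, and this greedy pass is not the adaptive framework proposed in the paper.

```python
# Illustrative sketch only: a naive greedy placement that respects per-device
# capacity plus end-to-end latency and availability requirements. The paper's
# framework solves placement and invocation-path derivation jointly and
# adaptively; this only shows the flavour of the constraints involved.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cpu: float            # available CPU capacity (arbitrary units)
    latency_ms: float     # estimated latency contribution of this host
    availability: float   # probability the device stays reachable

@dataclass
class Microservice:
    name: str
    cpu_demand: float

def greedy_place(services, devices, max_latency_ms, min_availability):
    """Place each microservice on the first device that keeps the running
    latency and availability estimates within the requirements."""
    placement, total_latency, total_availability = {}, 0.0, 1.0
    for svc in services:
        for dev in sorted(devices, key=lambda d: d.latency_ms):
            if (dev.cpu >= svc.cpu_demand
                    and total_latency + dev.latency_ms <= max_latency_ms
                    and total_availability * dev.availability >= min_availability):
                dev.cpu -= svc.cpu_demand
                placement[svc.name] = dev.name
                total_latency += dev.latency_ms
                total_availability *= dev.availability
                break
        else:
            return None  # no feasible placement under these requirements
    return placement

devices = [Device("edge-a", 4, 5, 0.99), Device("edge-b", 2, 12, 0.95)]
services = [Microservice("auth", 1), Microservice("inference", 2)]
print(greedy_place(services, devices, max_latency_ms=30, min_availability=0.9))
```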


2022 · Vol 15 (3) · pp. 1-32
Author(s): Naif Tarafdar, Giuseppe Di Guglielmo, Philip C. Harris, Jeffrey D. Krupa, Vladimir Loncar, ...

AIgean, pronounced like the sea, is an open framework to build and deploy machine learning (ML) algorithms on a heterogeneous cluster of devices (CPUs and FPGAs). We leverage two open-source projects: Galapagos, for multi-FPGA deployment, and hls4ml, for generating ML kernels synthesizable using Vivado HLS. AIgean provides a full end-to-end multi-FPGA/CPU implementation of a neural network. The user supplies a high-level neural network description, and our tool flow is responsible for synthesizing the individual layers, partitioning layers across different nodes, and providing the bridging and routing required for these layers to communicate. If the user is an expert in a particular domain and would like to tinker with the implementation details of the neural network, we define a flexible implementation stack for ML that includes the layers of Algorithms, Cluster Deployment & Communication, and Hardware. This allows the user to modify specific layers of abstraction without having to worry about components outside their area of expertise, highlighting the modularity of AIgean. We demonstrate the effectiveness of AIgean with two use cases: an autoencoder, and ResNet-50 running across 10 and 12 FPGAs. AIgean leverages the FPGA's strength in low-latency computing, as our implementations target batch-1 inference.
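As a concrete illustration of the kernel-generation step the abstract attributes to hls4ml, the sketch below follows hls4ml's documented Keras conversion flow to turn a small dense network into a Vivado HLS project. It shows kernel generation only; AIgean's layer partitioning and Galapagos-based multi-FPGA bridging are separate steps not reproduced here, and exact API signatures may vary between hls4ml versions.

```python
# Sketch of the hls4ml side of such a flow: convert a small Keras model into a
# Vivado HLS project. The output directory name is hypothetical.
import hls4ml
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

# Derive a per-model precision/reuse configuration, then emit an HLS project.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_dense_prj",
    backend="Vivado",
)
hls_model.compile()               # builds a C-simulation library for validation
# hls_model.build(csim=False, synth=True)  # run Vivado HLS synthesis (slow)
```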


2022 · Vol 19 (1) · pp. 1-21
Author(s): Daeyeal Lee, Bill Lin, Chung-Kuan Cheng

SMART NoCs achieve ultra-low latency by enabling single-cycle multiple-hop transmission via bypass channels. However, contention along bypass channels can seriously degrade the performance of SMART NoCs by breaking the bypass paths. Therefore, contention-free task mapping and scheduling are essential for optimal system performance. In this article, we propose an SMT (Satisfiability Modulo Theories)-based framework to find optimal contention-free task mappings with minimum application schedule lengths on 2D/3D SMART NoCs with mixed dimension-order routing. On top of SMT's fast reasoning capability for conditional constraints, we develop efficient search-space reduction techniques to achieve practical scalability. Experiments demonstrate that our SMT framework achieves 10× higher scalability than ILP (Integer Linear Programming), with 931.1× (ranging from 2.2× to 1532.1×) and 1237.1× (ranging from 4× to 4373.8×) faster average runtimes for finding optimum solutions on 2D and 3D SMART NoCs, respectively. Our 2D and 3D extensions of the SMT framework with mixed dimension-order routing also maintain the improved scalability with the extended and diversified routing paths, resulting in reduced application schedule lengths across various application benchmarks.
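For readers unfamiliar with SMT-based mapping, the sketch below uses the z3-solver Python bindings to express a toy version of the problem: place a few tasks on NoC nodes and schedule them so that tasks sharing a node never overlap, minimising the schedule length. The durations and mesh size are invented, and the paper's actual constraint model (bypass-channel contention, mixed dimension-order routing, search-space reduction techniques) is much richer than this example.

```python
# Toy SMT formulation with z3: assign each task a node and a start time,
# forbid overlapping tasks on the same node, and minimise the makespan.
from z3 import Int, Optimize, Or, sat

durations = {"t0": 3, "t1": 2, "t2": 4}   # hypothetical task durations (cycles)
num_nodes = 4                              # e.g., a 2x2 mesh

opt = Optimize()
node = {t: Int(f"node_{t}") for t in durations}
start = {t: Int(f"start_{t}") for t in durations}
makespan = Int("makespan")

for t, d in durations.items():
    opt.add(node[t] >= 0, node[t] < num_nodes)
    opt.add(start[t] >= 0, start[t] + d <= makespan)

# No two tasks mapped to the same node may overlap in time.
tasks = list(durations)
for i in range(len(tasks)):
    for j in range(i + 1, len(tasks)):
        a, b = tasks[i], tasks[j]
        opt.add(Or(node[a] != node[b],
                   start[a] + durations[a] <= start[b],
                   start[b] + durations[b] <= start[a]))

opt.minimize(makespan)
if opt.check() == sat:
    m = opt.model()
    for t in tasks:
        print(t, "node", m[node[t]], "start", m[start[t]])
    print("schedule length:", m[makespan])
```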


2022 · Vol 71 (2) · pp. 436-449
Author(s): Bo Zhang, Zeming Cheng, Massoud Pedram

2022 · Vol 2022 · pp. 1-10
Author(s): ZuoXun Hou

Aiming at the problems of low success rate, high delay, and high communication cost in distance English teaching resource sharing, this paper proposes a method for distance English teaching resource sharing based on the Internet O2O model. Building on a model of distance English teaching resource sharing, the paper designs four processes: query, reply, resource substitution, and resource-sharing optimization. Experimental results show that the proposed method achieves a high resource-sharing success rate, low latency, low communication cost, and high transmission efficiency, and is therefore an effective method.
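Since the abstract only names the four processes, the following Python fragment is a purely hypothetical sketch of how a query, reply, and resource-substitution flow could fit together; none of the functions or data structures come from the paper.

```python
# Hypothetical illustration of a query/reply flow with resource substitution;
# the paper does not specify its implementation, so everything here is invented.
def query_peers(peers, resource_id):
    """Ask each peer whether it holds the requested resource; collect replies."""
    return [p for p in peers if resource_id in p["resources"]]

def share_resource(peers, resource_id, substitutes):
    holders = query_peers(peers, resource_id)
    if holders:
        return holders[0]["resources"][resource_id]       # direct reply
    for alt in substitutes.get(resource_id, []):          # resource substitution
        holders = query_peers(peers, alt)
        if holders:
            return holders[0]["resources"][alt]
    return None                                           # sharing failed

peers = [{"name": "node-1", "resources": {"lesson-3.mp4": "<blob>"}}]
print(share_resource(peers, "lesson-3-hd.mp4", {"lesson-3-hd.mp4": ["lesson-3.mp4"]}))
```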


Electronics · 2022 · Vol 11 (1) · pp. 146
Author(s): Issa Elfergani, Abubakar Sadiq Hussaini, Jonathan Rodriguez, Raed A. Abd-Alhameed

Fifth-generation networks will support significantly faster mobile broadband speeds, low latency, and reliable communications, as well as enabling the full potential of the Internet of Things (IoT) [...]


2022 · pp. 687-739
Author(s): Jing Xu, Yanan Lin, Bin Liang, Jia Shen
