Community-based Placement of Registries to Speed up Application Deployment on Edge Computing

Author(s):  
Luis Augusto Dias Knob ◽  
Francescomaria Faticanti ◽  
Tiago Ferreto ◽  
Domenico Siracusa


Author(s):  
Bo Li ◽  
Qiang He ◽  
Guangming Cui ◽  
Xiaoyu Xia ◽  
Feifei Chen ◽  
...  

Author(s):  
Nguyen Ngoc Tho

CBT took root in the West some decades ago and is currently on a rapid rise thanks to the promotion of internet-based informatics. CBT tourists have shaped several driving forces and motivations that speed up the promotion and completion of CBT worldwide. Along with the boom of high technology and the incredible pressure it places on the human mind, post-modernism (PM) has been shaped by the strong demand to liberalize human thinking and to diversify lifestyles under the mutual interaction between ecological and cultural resources. PM started first in arts and literature, then gradually influenced business and tourism, and hence impacts CBT. The reconciliation of CBT and PM gives birth to discerning CBT, advanced by discerning travelers who definitely care about self-participation, self-experience, and self-discovery during their journeys, as well as about the request for cooperation, co-control, and co-responsibility among tourists, state agents, and local communities during the services. Discerning CBT travelers partially promote commoners' awareness of, and engagement in, advancing their standard of living and civilizing their lifestyles. Discerning CBT is surely not meant to replace popular CBT as a whole but to enrich the diversity of modern tourism, as it meets a concrete part of the varied demands of tourists and plays an increasingly important role in standardizing human life.


2020 ◽  
Vol 26 (3) ◽  
pp. 42-53
Author(s):  
Vuk Vranjkovic ◽  
Rastislav Struharik

In this paper, a hardware accelerator for sparse support vector machines (SVM) is proposed. We believe that the proposed accelerator is the first of its kind. The accelerator is designed for use in field-programmable gate array (FPGA) systems. Additionally, a novel algorithm for the pruning of SVM models is developed. The pruned SVM model has a smaller memory footprint and can be processed faster than dense SVM models. In systems with memory-throughput, compute, or power constraints, such as edge computing, this can be a big advantage. Experiments on several standard datasets are conducted, the aim of which is to compare the efficiency of the proposed architecture and the developed algorithm with existing solutions. The results reveal that the proposed hardware architecture and SVM pruning algorithm have superior characteristics compared to previous work in the field. A memory reduction of 3% to 85% is achieved, with a speed-up ranging from 1.17× to 7.92×.
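[Editorial note: the abstract does not detail the pruning criterion. The minimal sketch below, assuming a trained scikit-learn binary SVC with an RBF kernel, illustrates the general idea of SVM pruning: support vectors with negligible dual coefficients are dropped, shrinking the model the accelerator must store and evaluate. The threshold and criterion are assumptions for illustration, not the paper's algorithm.]

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def prune_svm(model, threshold=1e-2):
    """Keep only support vectors whose |dual coefficient| exceeds a fraction
    of the largest one (an assumed criterion, for illustration only)."""
    alpha = model.dual_coef_[0]
    keep = np.abs(alpha) >= threshold * np.abs(alpha).max()
    return model.support_vectors_[keep], alpha[keep], float(model.intercept_[0])

def rbf_decision(x, sv, alpha, b, gamma):
    """Pruned decision function: f(x) = sum_i alpha_i * K(sv_i, x) + b."""
    k = np.exp(-gamma * ((sv - x) ** 2).sum(axis=1))
    return float(alpha @ k + b)

X, y = make_classification(n_samples=200, random_state=0)
clf = SVC(kernel="rbf", gamma=0.1).fit(X, y)
sv, alpha, b = prune_svm(clf)
print(len(clf.support_vectors_), "->", len(sv), "support vectors")
```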


2022 ◽  
Vol 22 (1) ◽  
pp. 1-21
Author(s):  
Cosmin Avasalcai ◽  
Christos Tsigkanos ◽  
Schahram Dustdar

Edge computing offers the possibility of deploying applications at the edge of the network. To take advantage of the distributed resources of available devices, applications are often structured as microservices, which typically have stringent requirements of low latency and high availability. However, the decentralized edge system an application may be intended for is characterized by high volatility, as the devices making up the system are unreliable or may leave the network unexpectedly. This makes it challenging to deploy an application and to assure that it will continue to operate under volatility. We propose an adaptive framework capable of deploying and efficiently maintaining a microservice-based application at runtime by tackling two intertwined problems: (i) finding a microservice placement across device hosts and (ii) deriving invocation paths that serve it. Our objective is to maintain correct functionality by satisfying given requirements in terms of end-to-end latency and availability in a volatile edge environment. We evaluate our solution quantitatively by considering performance and failure recovery.
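[Editorial note: as a rough illustration of problem (i), the sketch below greedily assigns each microservice to the lowest-latency device that satisfies capacity and availability constraints. All names and thresholds are invented; the paper's adaptive framework solves a harder joint problem that also derives invocation paths.]

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity: int        # available resource units
    latency_ms: float    # estimated latency to the service consumer
    availability: float  # probability the device stays online

def place(services, devices, max_latency_ms=50.0, min_availability=0.9):
    """Greedy placement: assign each service (name -> resource demand),
    largest demand first, to the lowest-latency feasible device."""
    placement = {}
    for svc, demand in sorted(services.items(), key=lambda s: -s[1]):
        candidates = [d for d in devices
                      if d.capacity >= demand
                      and d.latency_ms <= max_latency_ms
                      and d.availability >= min_availability]
        if not candidates:
            raise RuntimeError(f"no feasible host for {svc}")
        best = min(candidates, key=lambda d: d.latency_ms)
        best.capacity -= demand   # reserve the resources on the chosen host
        placement[svc] = best.name
    return placement

devices = [Device("rpi-1", 4, 12.0, 0.95), Device("jetson-1", 8, 8.0, 0.92)]
print(place({"auth": 2, "api": 3}, devices))
```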


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6335
Author(s):  
José M. Cecilia ◽  
Juan-Carlos Cano ◽  
Juan Morales-García ◽  
Antonio Llanes ◽  
Baldomero Imbernón

The Internet of Things (IoT) is becoming a new socioeconomic revolution in which data and immediacy are the main ingredients. IoT generates large datasets on a daily basis, but these are currently considered "dark data", i.e., data generated but never analyzed. Efficient analysis of this data is mandatory to create intelligent applications for the next generation of IoT applications that benefit society. Artificial Intelligence (AI) techniques are very well suited to identifying hidden patterns and correlations in this data deluge. In particular, clustering algorithms are of the utmost importance for performing exploratory data analysis to identify sets (a.k.a. clusters) of similar objects. Clustering algorithms are computationally heavy workloads and must be executed on high-performance computing (HPC) clusters, especially when dealing with large datasets. Execution on HPC infrastructures is an energy-hungry procedure with additional issues, such as high-latency communications or privacy. Edge computing, a paradigm that enables lightweight computation at the edge of the network, has recently been proposed to solve these issues. In this paper, we provide an in-depth analysis of emerging edge computing architectures that include low-power Graphics Processing Units (GPUs) to speed up these workloads. Our analysis includes performance and power consumption figures for the latest NVIDIA AGX Xavier, comparing the energy-performance ratio of these low-cost platforms with a high-performance cloud-based counterpart. Three clustering algorithms (k-means, Fuzzy Minimals (FM), and Fuzzy C-Means (FCM)) are designed to execute optimally on edge and cloud platforms, showing a speed-up factor of up to 11× for the GPU code compared to sequential counterparts on the edge platforms, and energy savings of up to 150% between the edge computing and HPC platforms.
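[Editorial note: as a point of reference for the kind of workload being accelerated, here is a plain NumPy version of k-means (Lloyd's algorithm), the simplest of the three clustering algorithms studied. The (n, k) distance computation dominates the cost and is what a GPU implementation parallelizes. This is a generic textbook sketch, not the authors' code; empty clusters are not handled.]

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's k-means: alternate between assigning points to the nearest
    center and recomputing centers as cluster means."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # (n, k) matrix of squared distances: the GPU-friendly hot spot
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.stack([points[labels == j].mean(axis=0)
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

data = np.random.default_rng(1).normal(size=(1000, 2))
centers, labels = kmeans(data, k=3)
```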


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Xianwei Li ◽  
Baoliu Ye

With the development of the Internet of Things, massive computation-intensive tasks are generated by mobile devices whose limited computing and storage capacity leads to poor quality of service. Edge computing, an effective computing paradigm, was proposed for efficient and real-time data processing by providing computing resources at the edge of the network. The deployment of 5G promises to speed up data transmission but also further increases the volume of tasks to be offloaded. However, how to transfer data or tasks to edge servers in 5G for processing with high response efficiency remains a challenge. In this paper, a latency-aware computation offloading method in 5G networks is proposed. First, the latency and energy consumption models of edge computation offloading in 5G are defined. Then a fine-grained computation offloading method is employed to reduce the overall completion time of the tasks. The approach is further extended to solve the multiuser computation offloading problem. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted. The results show that the proposed offloading method can effectively reduce the execution latency of the tasks.
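[Editorial note: the abstract defines latency and energy models without reproducing them. The sketch below uses a textbook form of such models to make the offloading trade-off concrete: a task is offloaded when the weighted cost of uplink transmission plus remote execution beats local execution. All constants (the effective switched capacitance kappa, transmit power, weights) are illustrative assumptions, not the paper's values.]

```python
def local_cost(cycles, f_local_hz, kappa=1e-27, w_time=0.5, w_energy=0.5):
    t = cycles / f_local_hz                # local execution latency (s)
    e = kappa * cycles * f_local_hz ** 2   # dynamic CPU energy (J)
    return w_time * t + w_energy * e

def offload_cost(cycles, data_bits, rate_bps, f_edge_hz,
                 p_tx_w=0.5, w_time=0.5, w_energy=0.5):
    t_up = data_bits / rate_bps            # 5G uplink transmission time (s)
    t_exec = cycles / f_edge_hz            # remote execution time (s)
    e = p_tx_w * t_up                      # device pays only for transmission
    return w_time * (t_up + t_exec) + w_energy * e

# A 2-Gcycle task with 1 MB of input: offloading wins under these numbers.
c, d = 2e9, 8e6
print(offload_cost(c, d, rate_bps=100e6, f_edge_hz=10e9) <
      local_cost(c, f_local_hz=1e9))
```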


Information ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 308
Author(s):  
Juncal Alonso ◽  
Leire Orue-Echevarria ◽  
Eneko Osaba ◽  
Jesús López Lobo ◽  
Iñigo Martinez ◽  
...  

The current IT market is increasingly dominated by the "cloud continuum". In the "traditional" cloud, computing resources are typically homogeneous in order to facilitate economies of scale. In edge computing, by contrast, computational resources are widely diverse, commonly with scarce capacity, and must be managed very efficiently due to battery constraints or other limitations. A combination of resources and services at the edge (edge computing), in the core (cloud computing), and along the data path (fog computing) is needed, forming a trusted cloud continuum. This requires novel solutions for the creation, optimization, management, and automatic operation of such infrastructure through new approaches such as infrastructure as code (IaC). In this paper, we analyze how artificial intelligence (AI)-based techniques and tools can enhance the operation of complex applications and support the broad, multi-stage heterogeneity of the infrastructural layer in the "computing continuum" through the enhancement of IaC optimization, IaC self-learning, and IaC self-healing. To this end, the presented work proposes a set of tools, methods, and techniques that allow applications' operators to seamlessly select, combine, configure, and adapt computation resources all along the data path and support the complete service lifecycle, covering: (1) optimized distributed application deployment over heterogeneous computing resources; (2) real-time monitoring of execution platforms, including continuous control and trust of the infrastructural services; (3) application deployment and adaptation while optimizing execution; and (4) application self-recovery to avoid compromising situations that may lead to unexpected failure.
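[Editorial note: to give one concrete flavor of "IaC self-healing", the sketch below shows a naive control loop that re-applies a declarative infrastructure description when a health probe fails. The probe, the use of `terraform apply`, and the polling period are illustrative assumptions; the paper's AI-driven tooling is far richer.]

```python
import subprocess
import time

def healthy(endpoint):
    """Toy health probe: succeeds if the endpoint answers an HTTP request."""
    return subprocess.call(["curl", "-sf", endpoint],
                           stdout=subprocess.DEVNULL) == 0

def self_heal(endpoints, apply_cmd=("terraform", "apply", "-auto-approve"),
              period_s=30, rounds=10):
    """Naive self-healing loop: when any monitored service fails its probe,
    re-apply the declarative description so the infrastructure converges
    back to the desired state. Illustrative sketch only."""
    for _ in range(rounds):
        if any(not healthy(e) for e in endpoints):
            subprocess.call(list(apply_cmd))
        time.sleep(period_s)
```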


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6050
Author(s):  
Tarek Belabed ◽  
Vitor Ramos Gomes da Silva ◽  
Alexandre Quenon ◽  
Carlos Valderrama ◽  
Chokri Souani

Deploying Deep Neural Networks (DNNs) for IoT Edge applications requires strong skills in hardware and software. In this paper, a novel, fully automated design framework for Edge applications is proposed to perform such deployments on Systems-on-Chip. Based on a high-level Python interface that mimics the leading Deep Learning software frameworks, it offers an easy way to implement a hardware-accelerated DNN on an FPGA. To do this, our design methodology covers three main phases: (a) customization, where the user specifies the optimizations needed for each DNN layer; (b) generation, where the framework generates in the Cloud the necessary binaries for both the FPGA and software parts; and (c) deployment, where the SoC on the Edge receives the resulting files used to program the FPGA, along with the related Python libraries for user applications. Among the case studies, an optimized DNN for the MNIST database achieves a speed-up of more than 60× over a software version on the ZYNQ 7020 SoC while consuming less than 0.43 W. A comparison with state-of-the-art frameworks demonstrates that our methodology offers the best trade-off between throughput, power consumption, and system cost.
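[Editorial note: the abstract describes a Python interface that mimics mainstream Deep Learning frameworks without showing it. The sketch below imagines what such an interface could look like, with per-layer customization hooks for phase (a) and a build step standing in for the Cloud-side generation of phase (b). Every name here is invented for illustration; the framework's real API is not given in the abstract.]

```python
class Layer:
    def __init__(self, kind, units, quant_bits=8, unroll=1):
        self.kind, self.units = kind, units
        self.quant_bits = quant_bits  # per-layer customization (phase a)
        self.unroll = unroll          # hardware parallelism hint

class EdgeDNN:
    """Keras-like model builder targeting a hypothetical FPGA flow."""
    def __init__(self):
        self.layers = []

    def add(self, layer):
        self.layers.append(layer)
        return self

    def build(self, target="zynq7020"):
        # Phase (b) would submit the model to a Cloud generator that emits
        # the FPGA bitstream plus Python runtime bindings; here we only
        # report what would be generated.
        print(f"generating bitstream for {target} "
              f"({len(self.layers)} layers)")

model = EdgeDNN()
model.add(Layer("dense", 128, quant_bits=4, unroll=8))
model.add(Layer("dense", 10))
model.build()
```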

