Modeling of a Generic Edge Computing Application Design

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7276
Author(s):  
Pedro Juan Roig ◽  
Salvador Alcaraz ◽  
Katja Gilly ◽  
Cristina Bernad ◽  
Carlos Juiz

Edge computing applications leverage advances in edge computing along with the latest trends in convolutional neural networks to achieve the ultra-low-latency, high-speed-processing, and low-power-consumption scenarios necessary for deploying real-time Internet of Things applications efficiently. As the importance of such scenarios grows by the day, we propose two different kinds of models: an algebraic model, built with a process algebra called ACP, and a coding model, built with a modeling language called Promela. Both approaches have been used to model an edge infrastructure with a cloud backup, further extended with the addition of extra fog nodes, and after applying the proper verification techniques, all models have been duly verified. Specifically, a generic edge computing design has been specified algebraically with ACP, followed by its corresponding algebraic verification, and it has also been specified in Promela code, which has been verified with the model checker Spin.
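To illustrate the kind of property such models are checked against, the following is a minimal sketch (not the paper's actual ACP or Promela model) of an explicit-state search over a toy edge/fog/cloud offloading protocol, verifying that no non-terminal state deadlocks and that every request can reach a terminal "done" state; the state names and transition rules are assumptions made purely for illustration.

```python
from collections import deque

# Toy transition system for one request in an edge/fog/cloud setup
# (hypothetical states and rules, for illustration only).
# A state is (location, status): location in {"edge", "fog", "cloud"},
# status in {"queued", "processing", "done"}.
def successors(state):
    location, status = state
    if status == "queued":
        # The request may start processing locally or be offloaded upward.
        nxt = [(location, "processing")]
        if location == "edge":
            nxt.append(("fog", "queued"))
        elif location == "fog":
            nxt.append(("cloud", "queued"))
        return nxt
    if status == "processing":
        return [(location, "done")]
    return []  # "done" is terminal

def verify(initial=("edge", "queued")):
    """Explicit-state search: no non-terminal state is stuck, and 'done' is reachable."""
    seen, frontier = {initial}, deque([initial])
    done_reachable = False
    while frontier:
        state = frontier.popleft()
        nxt = successors(state)
        if state[1] == "done":
            done_reachable = True
        elif not nxt:
            return False  # deadlock: non-terminal state with no successor
        for s in nxt:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return done_reachable

if __name__ == "__main__":
    print("verified:", verify())  # expected: verified: True
```

A model checker such as Spin performs this kind of exhaustive state-space exploration automatically over the full Promela specification, rather than over a hand-written toy like the one above.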

Author(s):  
David R. Selviah ◽  
Janti Shawash

This chapter celebrates 50 years of first- and higher-order neural network (HONN) implementations in terms of the physical layout and structure of electronic hardware, which offers high-speed, low-latency, compact, low-cost, low-power, mass-produced systems. Low latency is essential for practical applications in real-time control, for which software implementations running on CPUs are too slow. This literature review chapter traces the chronological development of electronic neural networks (ENNs), discussing selected papers in detail, from analog electronic hardware, through probabilistic RAM, generalizing RAM, custom silicon Very Large Scale Integrated (VLSI) circuits, neuromorphic chips, and pulse-stream interconnected neurons, to Application Specific Integrated Circuits (ASICs) and Zero Instruction Set Chips (ZISCs). Reconfigurable Field Programmable Gate Arrays (FPGAs) are given particular attention, as the most recent generation incorporates Digital Signal Processing (DSP) units to provide full System on Chip (SoC) capability, offering the possibility of real-time, on-line, and on-chip learning.
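To clarify what "higher order" means in this context, the sketch below evaluates a single second-order neuron, whose weighted sum includes products of input pairs as well as the inputs themselves; the weights, inputs, and activation function are arbitrary illustrative choices, not a design taken from the chapter.

```python
import math
from itertools import combinations_with_replacement

def second_order_neuron(x, w1, w2, bias):
    """Second-order neuron:
    y = f(bias + sum_i w1[i]*x[i] + sum_{i<=j} w2[(i,j)]*x[i]*x[j])."""
    s = bias
    s += sum(w1[i] * x[i] for i in range(len(x)))
    s += sum(w2[(i, j)] * x[i] * x[j]
             for i, j in combinations_with_replacement(range(len(x)), 2))
    return math.tanh(s)  # illustrative activation

if __name__ == "__main__":
    x = [0.5, -1.0]
    w1 = [0.2, 0.4]
    w2 = {(0, 0): 0.1, (0, 1): -0.3, (1, 1): 0.05}
    print(second_order_neuron(x, w1, w2, bias=0.1))
```

The fixed structure of such a computation (multiply-accumulate over input products) is what makes HONNs attractive for the DSP-equipped FPGA and ASIC implementations the chapter reviews.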


2019 ◽  
Vol 15 (7) ◽  
pp. 155014771986155 ◽  
Author(s):  
Shaoyong Guo ◽  
Xing Hu ◽  
Gangsong Dong ◽  
Wencui Li ◽  
Xuesong Qiu

Mobile edge computing has attracted great interest with the popularity of fifth-generation (5G) networks and the Internet of Things. It aims to supply low-latency, highly interactive services for delay-sensitive applications. Utilizing mobile edge computing in the Smart Home, one of the most important fields of the Internet of Things, is a way to satisfy users' demand for higher computing power and storage capacity. However, because computing resources are limited, improving the efficiency of resource allocation is a challenge. In this article, we propose a hierarchical architecture for the Smart Home with mobile edge computing, providing low-latency services and promoting edge processing for smart devices. Based on this architecture, a Stackelberg game is designed to allocate computing resources to devices efficiently. Then, one-to-many matching is established to handle the resource allocation problem. It is proved that the allocation strategy optimizes the utility of the mobile edge computing server and improves allocation efficiency. Simulation results show the effectiveness of the proposed strategy compared with schemes based on an auction game, and present its performance under different system parameters.
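As a rough illustration of one-to-many matching for resource allocation (not the paper's actual Stackelberg-game formulation), the sketch below matches devices to capacity-limited edge servers with a deferred-acceptance style procedure; the device preference lists, server scores, and capacities are made-up assumptions.

```python
# Minimal one-to-many matching sketch: devices propose to their preferred
# edge servers; each server keeps at most `capacity` devices, preferring
# higher-scoring ones. All inputs below are illustrative only.
def match(device_prefs, server_scores, capacity):
    free = list(device_prefs)                  # devices not yet matched
    proposals = {d: 0 for d in device_prefs}   # next server index each device tries
    accepted = {s: [] for s in server_scores}  # server -> accepted devices

    while free:
        d = free.pop()
        if proposals[d] >= len(device_prefs[d]):
            continue  # device has exhausted its preference list
        s = device_prefs[d][proposals[d]]
        proposals[d] += 1
        accepted[s].append(d)
        # Keep the best devices up to capacity, reject the rest.
        accepted[s].sort(key=lambda x: server_scores[s][x], reverse=True)
        while len(accepted[s]) > capacity[s]:
            free.append(accepted[s].pop())
    return accepted

if __name__ == "__main__":
    device_prefs = {"d1": ["s1", "s2"], "d2": ["s1", "s2"], "d3": ["s1"]}
    server_scores = {"s1": {"d1": 3, "d2": 2, "d3": 1},
                     "s2": {"d1": 1, "d2": 2, "d3": 3}}
    capacity = {"s1": 2, "s2": 1}
    print(match(device_prefs, server_scores, capacity))
```

In a Stackelberg setting, the server (leader) would first set prices or resource quotas, and the devices (followers) would then form their preference lists in response; the matching step above only illustrates the follower-side assignment.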


2018 ◽  
Vol 56 (5) ◽  
pp. 39-45 ◽  
Author(s):  
Ke Zhang ◽  
Supeng Leng ◽  
Yejun He ◽  
Sabita Maharjan ◽  
Yan Zhang

Author(s):  
Karan Bajaj ◽  
Bhisham Sharma ◽  
Raman Singh

Internet of Things (IoT) applications and services are increasingly becoming a part of daily life; from smart homes to smart cities, industry, and agriculture, they are penetrating practically every domain. Data are collected by IoT applications mostly through the sensors connected to the devices, and with increasing demand it is not possible to process all the data on the devices themselves. The data collected by the device sensors are vast in volume and require high-speed computation and processing, which demand advanced resources. Crucial applications and services must meet multiple performance parameters, such as time-sensitivity and energy efficiency, and computation offloading frameworks come into play to satisfy these performance parameters and extreme computation requirements. Offloading computation or data to nearby devices or to the fog or cloud structure can help meet the resource requirements of IoT applications. In this paper, the role of context or situation in performing the offloading is studied, and it is concluded that context-based offloading can play a crucial role in meeting the performance requirements of IoT-enabled services. Some of the existing frameworks, EMCO, MobiCOP-IoT, Autonomic Management Framework, CSOS, and Fog Computing Framework, selected for their novelty and optimum performance, are taken for implementation analysis and compared with the MAUI, AnyRun Computing (ARC), AutoScaler, Edge computing, and Context-Sensitive Model for Offloading System (CoSMOS) frameworks. Based on the results obtained and the limitations of the existing frameworks, future directions for offloading scenarios are discussed.
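To make the idea of context-based offloading concrete, here is a minimal sketch (not taken from any of the surveyed frameworks) that picks a local, fog, or cloud execution target from simple context signals; the signal names, rates, latencies, thresholds, and penalty weights are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Illustrative context signals a device might observe."""
    task_cycles: float      # estimated CPU cycles for the task
    input_bytes: int        # payload to upload if the task is offloaded
    battery_level: float    # 0.0 .. 1.0
    link_rate_bps: float    # current uplink rate
    deadline_s: float       # latency budget for the task

# Hypothetical processing rates (cycles/second) per tier.
RATES = {"local": 1e9, "fog": 5e9, "cloud": 2e10}
# Hypothetical one-way network latencies per tier (seconds).
NET_LATENCY = {"local": 0.0, "fog": 0.005, "cloud": 0.05}

def estimate_latency(ctx, target):
    transfer = 0.0 if target == "local" else ctx.input_bytes * 8 / ctx.link_rate_bps
    return NET_LATENCY[target] + transfer + ctx.task_cycles / RATES[target]

def choose_target(ctx):
    """Pick the cheapest target that meets the deadline, discouraging
    local execution when the battery is low (weights are illustrative)."""
    candidates = []
    for target in ("local", "fog", "cloud"):
        latency = estimate_latency(ctx, target)
        if latency > ctx.deadline_s:
            continue
        energy_penalty = 1.0 if (target == "local" and ctx.battery_level < 0.2) else 0.0
        candidates.append((latency + energy_penalty, target))
    return min(candidates)[1] if candidates else "local"

if __name__ == "__main__":
    ctx = Context(task_cycles=2e9, input_bytes=200_000,
                  battery_level=0.15, link_rate_bps=50e6, deadline_s=0.5)
    print(choose_target(ctx))  # offloads, since local execution misses the deadline
```

The surveyed frameworks differ mainly in which context signals they monitor and how the decision logic is learned or configured, but they all reduce to a decision of this general shape.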


2021 ◽  
Vol 17 (7) ◽  
pp. 5010-5011
Author(s):  
Zhaolong Ning ◽  
Edith Ngai ◽  
Ricky Y. K. Kwok ◽  
Mohammad S. Obaidat
