Computing and storage models for edge computing

Author(s):  
Jaber Almutairi ◽  
Mohammad Aldossary

Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks, relieving devices of heavy computation and storage by moving them to resource-rich nodes such as Edge Computing and Cloud Computing. Although Edge Computing is a promising enabler for latency-sensitive applications, its deployment produces new challenges. Moreover, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents a novel approach for task offloading in an Edge-Cloud system in order to minimize the overall service time for latency-sensitive applications. The approach adopts fuzzy logic algorithms, considering application characteristics (e.g., CPU demand, network demand, and delay sensitivity) as well as resource utilization and resource heterogeneity. A number of simulation experiments are conducted to compare the proposed approach with related approaches; it was found to improve the overall service time for latency-sensitive applications and to utilize edge-cloud resources effectively. The results also show that different offloading decisions within the Edge-Cloud system can lead to different service times due to the computational resources and communication types involved.
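To illustrate the kind of fuzzy-logic offloading decision the abstract describes, the following is a minimal sketch, not the paper's actual algorithm: the membership functions, rule set, and the `offload_decision` function are all illustrative assumptions, using normalized inputs for CPU demand, delay sensitivity, and edge utilization.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def offload_decision(cpu_demand, delay_sensitivity, edge_util):
    """Toy fuzzy rule base; all inputs normalized to [0, 1]."""
    cpu_low = tri(cpu_demand, -0.5, 0.0, 0.6)
    cpu_high = tri(cpu_demand, 0.4, 1.0, 1.5)
    sens_high = tri(delay_sensitivity, 0.4, 1.0, 1.5)
    util_low = tri(edge_util, -0.5, 0.0, 0.7)
    util_high = tri(edge_util, 0.3, 1.0, 1.5)

    # Illustrative rules: the edge is favored for light or delay-sensitive
    # tasks when it is underutilized; the cloud is favored for heavy tasks
    # or when the edge is already busy.
    edge_score = max(min(cpu_low, util_low), min(sens_high, util_low))
    cloud_score = max(cpu_high, util_high)
    return "edge" if edge_score >= cloud_score else "cloud"
```

A light, delay-sensitive task on an idle edge node maps to the edge, while a compute-heavy task facing a congested edge maps to the cloud, mirroring the trade-off the paper targets.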


2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Kai Peng ◽  
Victor C. M. Leung ◽  
Xiaolong Xu ◽  
Lixin Zheng ◽  
Jiabin Wang ◽  
...  

Mobile cloud computing (MCC) integrates cloud computing (CC) into mobile networks, prolonging the battery life of mobile users' (MUs) devices. However, this mode may cause significant execution delay. To address the delay issue, a new mode known as mobile edge computing (MEC) has been proposed. MEC provides computing and storage services at the network edge, enabling MUs to execute applications efficiently and meet their delay requirements. In this paper, we present a comprehensive survey of MEC research from the perspective of service adoption and provision. We first provide an overview of MEC, including its definition, architecture, and services. After that, we review existing MU-oriented service adoption of MEC, i.e., offloading. More specifically, the study of offloading is divided into two key taxonomies: computation offloading and data offloading. Each of them is further divided into single-MU and multi-MU offloading schemes. Then we survey edge-server- (ES-) oriented service provision, including technical indicators, ES placement, and resource allocation. In addition, other issues such as applications of MEC and open issues are investigated. Finally, we conclude the paper.


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3231 ◽  
Author(s):  
Jiuyun Xu ◽  
Zhuangyuan Hao ◽  
Xiaoting Sun

Mobile edge computing (MEC) has become increasingly popular in both academia and industry. With the help of edge servers and cloud servers, it is one of the key technologies for overcoming the latency between cloud servers and wireless devices, as well as the limited computation capability and storage of wireless devices. In mobile edge computing, wireless devices are responsible for providing input data, while edge servers and cloud servers take charge of computation and storage. However, how to balance the power consumption of edge devices against time delay has not yet been well addressed in mobile edge computing. In this paper, we focus on strategies for the task offloading decision and analyze the influence of offloading decisions in different environments. Firstly, we propose a system model considering both energy consumption and time delay and formulate it as an optimization problem. Then, we employ two algorithms, Enumeration and Branch-and-Bound, to obtain the optimal or near-optimal decision minimizing the system cost, which includes time delay and energy consumption. Furthermore, we compare the performance of the two algorithms and conclude that the overall performance of the Branch-and-Bound algorithm is better. Finally, we analyse in detail the factors influencing the optimal offloading decision and the minimum cost by varying key parameters.
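The Enumeration-versus-Branch-and-Bound comparison can be sketched on a toy cost model. This is not the paper's formulation: the per-task costs, the quadratic congestion penalty `alpha * k * k` for `k` offloaded tasks, and both function names are illustrative assumptions. The key point is that both methods return the same minimum, but Branch-and-Bound prunes subtrees whose optimistic lower bound cannot beat the incumbent.

```python
import itertools

def total_cost(local, offload, choice, alpha=0.5):
    """System cost for one assignment; choice[i] == 1 means task i is offloaded.
    alpha penalizes edge congestion as the number of offloaded tasks grows."""
    k = sum(choice)
    return sum(offload[i] if choice[i] else local[i]
               for i in range(len(choice))) + alpha * k * k

def enumerate_best(local, offload, alpha=0.5):
    """Exhaustively score all 2^n binary offloading decisions."""
    n = len(local)
    return min((total_cost(local, offload, c, alpha), c)
               for c in itertools.product((0, 1), repeat=n))

def branch_and_bound(local, offload, alpha=0.5):
    n = len(local)
    best = [float("inf"), None]

    def bound(prefix):
        # Optimistic completion: each undecided task takes its cheaper option,
        # and the congestion penalty of future offloads is ignored, so this
        # never overestimates the cost of the best completion.
        k = sum(prefix)
        cost = sum(offload[i] if prefix[i] else local[i]
                   for i in range(len(prefix)))
        cost += alpha * k * k
        cost += sum(min(local[i], offload[i]) for i in range(len(prefix), n))
        return cost

    def recurse(prefix):
        if bound(prefix) >= best[0]:
            return  # prune: no completion can beat the incumbent
        if len(prefix) == n:
            best[0] = total_cost(local, offload, prefix, alpha)
            best[1] = tuple(prefix)
            return
        recurse(prefix + (0,))
        recurse(prefix + (1,))

    recurse(())
    return best[0], best[1]
```

Because the bound is a true lower bound, Branch-and-Bound is exact here; its advantage over enumeration is purely in how much of the decision tree it skips.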


2020 ◽  
Author(s):  
Junaid Nawaz Syed ◽  
Shree Krishna Sharma ◽  
Mohmammad N. Patwary ◽  
Md Asaduzzaman

The upcoming beyond-5G (B5G)/6G wireless networks target various innovative technologies, services, and interfaces, such as edge computing, ultra-reliable and low-latency communication (URLLC), backscatter communications, and TeraHertz (THz) technology-enabled inter-chip communications and high-capacity links. Although advances are ongoing at the system/network level, it is crucial to introduce innovations at the device level to efficiently support these novel technologies by addressing practical constraints in terms of power, computational capacity, and storage capacity. This device-level innovation ultimately demands significant enhancements in today's consumer electronics (CE). Considering the contemporary latency requirements of CE (e.g., entertainment, gaming, etc.), and to enhance the commercial potential of "edge processing as a service", it is envisioned that URLLC will further evolve into enhanced URLLC (e-URLLC). In this regard, this paper proposes a novel edge computing-enabled e-URLLC framework for next-generation CE, named edge computing for CE (ECCE), in order to support e-URLLC in the upcoming 6G era. Starting with a discussion of recent trends and advances in CE, the proposed framework and its importance in the 6G wireless era are described. Subsequently, several potential technologies and tools to enable the implementation of the proposed ECCE framework are identified, along with some interesting open research topics and future recommendations.


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1303 ◽  
Author(s):  
Zachary Lamb ◽  
Dharma Agrawal

Vehicular ad-hoc networks (VANETs) are an integral part of intelligent transportation systems (ITS) that facilitate communications between vehicles and the Internet. More recently, VANET communications research has strayed from the antiquated DSRC standard and favored more modern cellular technologies, such as fifth generation (5G). The ability of cellular networks to serve highly mobile devices, combined with the drastically increased capacity of 5G, would enable VANETs to accommodate large numbers of vehicles and support a range of applications. The addition of thousands of new connected devices stresses not only the cellular networks but also the computational and storage resources supporting the applications and software of these devices. Autonomous vehicles, with numerous on-board sensors, are expected to generate large amounts of data that must be transmitted and processed. Realistically, the on-board computing and storage resources of a vehicle cannot be expected to handle all the data generated over the vehicle's lifetime. Cloud computing will be an essential technology in VANETs and will support the majority of computation and long-term data storage. However, the networking overhead and latency associated with remote cloud resources could prove detrimental to overall network performance. Edge computing seeks to reduce this overhead by placing computational resources nearer to the end users of the network. The geographical diversity and varied hardware configurations of resources in an edge-enabled network require careful management to ensure efficient resource utilization. In this paper, we introduce an architecture that evaluates available resources in real time and allocates tasks to the most logical and feasible resource. We evaluate our approach mathematically using a multi-criteria decision analysis algorithm and validate our results with experiments on a test-bed of cloud resources. The results demonstrate that the algorithmic ranking of physical resources matches the experimental results very closely and provides a means of delegating tasks to the best available resource.
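A multi-criteria decision analysis ranking of the kind described can be sketched with simple additive weighting (SAW), one common MCDA method; the abstract does not say which method the authors used, so the function name, criteria, and weights below are illustrative assumptions. Each criterion is min-max normalized (inverted for cost criteria such as latency), weighted, and summed, and resources are ranked by total score.

```python
def rank_resources(resources, weights, benefit):
    """Rank resources by simple additive weighting (SAW).

    resources: {name: [criterion values]}
    weights:   per-criterion weights
    benefit:   benefit[j] is True if a larger value of criterion j is better
    """
    names = list(resources)
    m = len(weights)
    # Column-wise extremes for min-max normalization.
    cols = [[resources[n][j] for n in names] for j in range(m)]
    scores = {}
    for n in names:
        s = 0.0
        for j in range(m):
            lo, hi = min(cols[j]), max(cols[j])
            if hi == lo:
                norm = 1.0  # criterion does not discriminate
            else:
                x = resources[n][j]
                # Invert cost criteria so that 1.0 is always "best".
                norm = (x - lo) / (hi - lo) if benefit[j] else (hi - x) / (hi - lo)
            s += weights[j] * norm
        scores[n] = s
    return sorted(names, key=lambda n: scores[n], reverse=True)
```

With latency treated as a cost criterion and CPU capacity as a benefit criterion, a nearby edge node with adequate compute outranks a distant but more powerful cloud resource, which is the behavior the architecture aims for.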


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2191
Author(s):  
Dimitrios Dechouniotis ◽  
Nikolaos Athanasopoulos ◽  
Aris Leivadeas ◽  
Nathalie Mitton ◽  
Raphael Jungers ◽  
...  

The potential offered by the abundance of sensors, actuators, and communications in the Internet of Things (IoT) era is hindered by the limited computational capacity of local nodes. Several key challenges must be addressed to optimally and jointly exploit network, computing, and storage resources while guaranteeing feasibility for time-critical and mission-critical tasks. We propose the DRUID-NET framework to take on these challenges by dynamically distributing resources when demand varies rapidly. It includes analytic dynamical modeling of the resources, offered workload, and networking environment, incorporating phenomena typically met in wireless communications and mobile edge computing, together with new estimators of time-varying profiles. Building on this framework, we aim to develop novel resource allocation mechanisms that explicitly include service differentiation and context-awareness and are capable of guaranteeing well-defined Quality of Service (QoS) metrics. DRUID-NET goes beyond the state of the art in the design of control algorithms by incorporating resource allocation mechanisms into the decision strategy itself. To achieve these breakthroughs, we combine tools from automata and graph theory, machine learning, modern control theory, and network theory. DRUID-NET constitutes the first truly holistic, multidisciplinary approach that extends recent, albeit fragmented, results from all the aforementioned fields, thus bridging the gap between the efforts of different communities.


Author(s):  
Bao Yi Qin ◽  
Zheng Hao ◽  
Zhao Qiang

In cloud computing, since programs run in the cloud, they can be written in any programming language and, after compilation, maintained solely in the cloud. Due to the heterogeneous nature of edge node platforms, many tasks are migrated from the cloud to edge terminals. Programming under edge computing is not easy to realize, and the maintenance cost is also high. At the same time, because programmability is a high-risk activity, it carries high security requirements. To solve this problem, this paper designs a programmable blockchain security scheme based on the edge computing firework model, realizes the programming of Internet of Things (IoT) gateway firework nodes under edge computing, and achieves the safe transmission and storage of programmable data through a blockchain system. The experimental results show that this scheme not only facilitates user programming, enhances real-time performance, and saves data transmission cost, but also ensures the security and reliability of the system.


2020 ◽  
pp. 1-16
Author(s):  
Sarra Mehamel ◽  
Samia Bouzefrane ◽  
Soumya Banarjee ◽  
Mehammed Daoui ◽  
Valentina E. Balas

Caching contents at the edge of mobile networks is an efficient mechanism that can alleviate the load on backhaul links and reduce transmission delay. For this purpose, choosing an adequate caching strategy becomes an important issue. Recently, the tremendous growth of Mobile Edge Computing (MEC) has empowered edge network nodes with more computation and storage capabilities, allowing the execution of resource-intensive tasks within the mobile network edge, such as running artificial intelligence (AI) algorithms. Exploiting users' context information intelligently makes it possible to design an intelligent, context-aware mobile edge cache. To maximize caching performance, a suitable methodology considers both context awareness and intelligence, so that the caching strategy is aware of the environment while caching the appropriate content by making the right decisions. Inspired by the success of reinforcement learning (RL), which uses agents to deal with decision-making problems, we present a modified reinforcement learning (mRL) approach to cache contents at the network edge. Our proposed solution aims to maximize the cache hit rate and requires awareness of multiple factors influencing cache performance. The modified RL differs from other RL algorithms in its learning rate, which uses stochastic gradient descent (SGD), in addition to taking advantage of the optimal caching decisions obtained from fuzzy rules.
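The flavor of a learning-based edge cache can be conveyed with a minimal sketch. This is not the paper's mRL (it omits the fuzzy rules and the paper's specific learning-rate scheme): the `RLCache` class, its SGD-style update `q <- q + lr * (reward - q)`, and the eviction policy are all illustrative assumptions.

```python
class RLCache:
    """Toy value-based edge cache: each content's popularity estimate is
    nudged toward the observed reward (1 on request) by an SGD-style update,
    and on a miss the lowest-valued cached item is evicted."""

    def __init__(self, capacity, lr=0.2):
        self.capacity, self.lr = capacity, lr
        self.q = {}          # popularity estimate per content id
        self.cache = set()   # currently cached content ids
        self.hits = self.requests = 0

    def request(self, item):
        self.requests += 1
        hit = item in self.cache
        self.hits += hit
        # SGD-style update toward the reward signal (requested => reward 1).
        q = self.q.get(item, 0.0)
        self.q[item] = q + self.lr * (1.0 - q)
        if not hit:
            if len(self.cache) >= self.capacity:
                # Evict the item the learner currently values least.
                victim = min(self.cache, key=lambda c: self.q.get(c, 0.0))
                self.cache.discard(victim)
            self.cache.add(item)
        return hit

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0
```

On a skewed request stream, the estimates of popular contents grow fastest, so they survive eviction and the hit rate climbs, which is the objective the abstract states.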



