Dynamic Offloading Model for Distributed Collaboration in Edge Computing: A Use Case on Forest Fires Management

2020 ◽  
Vol 10 (7) ◽  
pp. 2334
Author(s):  
Jieun Kang ◽  
Svetlana Kim ◽  
Jaeho Kim ◽  
NakMyoung Sung ◽  
YongIk Yoon

With the development of the Internet of Things (IoT), the amount of data is growing and becoming more diverse. Transferring all of this data to the cloud faces several problems, such as limited network bandwidth and latency. This has generated considerable interest in edge computing, which processes and analyzes data near the network terminals where the data originates. Edge computing can extract insights from large volumes of data and provide fast, essential services through simple analysis. While edge computing offers real-time advantages, it also has drawbacks, such as the limited capacity of edge nodes, which can become overloaded and delay task completion. In this paper, we propose an efficient offloading model based on collaboration between edge nodes to prevent overload and respond quickly to potential danger in emergencies. In the proposed model, the functions of edge computing are divided into data-centric and task-centric offloading. The model reduces edge-node overload caused by centralized, inefficient distribution and the trade-offs occurring at edge nodes, which are the leading causes of overload. This paper therefore presents a collaborative offloading model for edge computing that guarantees real-time performance and overload prevention through data-centric and task-centric offloading. We also present an intelligent offloading model based on several forest fire ignition scenarios.
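The collaborative decision between data-centric and task-centric offloading might be sketched as follows. This is a minimal illustration under assumed thresholds and node attributes, not the paper's actual algorithm:

```python
# Hypothetical sketch of the collaborative offloading decision: keep the task
# locally, ship only the data (data-centric), or migrate the whole task
# (task-centric) to the least-loaded peer. Thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    load: float  # current utilisation, 0.0-1.0

def choose_offload(source: EdgeNode, peers: list, overload_threshold: float = 0.8):
    """Return (mode, target) for a new task arriving at `source`."""
    if source.load < overload_threshold:
        return ("local", source)
    target = min(peers, key=lambda n: n.load)  # least-loaded collaborator
    if source.load < 0.95:
        # Mild overload: send raw data to the peer for analysis.
        return ("data-centric", target)
    # Severe overload: hand over the entire task.
    return ("task-centric", target)

peers = [EdgeNode("b", 0.3), EdgeNode("c", 0.7)]
print(choose_offload(EdgeNode("a", 0.9), peers))  # → ('data-centric', EdgeNode(name='b', load=0.3))
```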

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Xiang Yu ◽  
Chun Shan ◽  
Jilong Bian ◽  
Xianfei Yang ◽  
Ying Chen ◽  
...  

With the rapid development of the Internet of Things (IoT), massive sensor data are being generated at an unprecedented rate by sensors deployed everywhere. With the number of IoT devices estimated to grow to 25 billion by 2021, an effective and efficient method is needed to detect the explicit or implicit anomalies in the real-time sensor data collected from these devices. Recent advances in edge computing have a significant impact on the solution of anomaly detection in IoT. In this study, an adaptive graph updating model is first presented, based on which a novel anomaly detection method for the edge computing environment is then proposed. At the cloud center, unknown patterns are classified by a deep learning model; based on the classification results, the feature graphs are updated periodically, and the classification results are continually transmitted to each edge node, where a cache temporarily holds newly emerging anomalies or normal patterns until the edge node receives a newly updated feature graph. Finally, a series of comparison experiments demonstrates the effectiveness of the proposed anomaly detection method for edge computing. The results show that the proposed method detects anomalies in real-time sensor data efficiently and accurately. Moreover, the proposed method performs well when newly emerging patterns appear, whether they are anomalous or normal.
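The edge-side caching behaviour described above can be sketched roughly as below; class and field names are assumptions for illustration, not the authors' implementation:

```python
# Hedged sketch: an edge node holds unknown patterns in a cache until the
# cloud pushes an updated feature graph covering them.
class EdgeAnomalyCache:
    def __init__(self):
        self.known = {}    # feature graph: pattern -> "anomaly" | "normal"
        self.pending = []  # unknown patterns awaiting cloud classification

    def classify(self, pattern):
        if pattern in self.known:
            return self.known[pattern]
        self.pending.append(pattern)  # cache the newly emerging pattern
        return "unknown"

    def apply_feature_graph(self, graph):
        """Install the cloud's updated feature graph and drop any pending
        patterns it now covers."""
        self.known = dict(graph)
        self.pending = [p for p in self.pending if p not in self.known]

cache = EdgeAnomalyCache()
print(cache.classify("spike"))               # → unknown (cached locally)
cache.apply_feature_graph({"spike": "anomaly"})
print(cache.classify("spike"))               # → anomaly
```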


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Xingmin Ma ◽  
Shenggang Xu ◽  
Fengping An ◽  
Fuhong Lin

Owing to its high processing complexity, image restoration has traditionally been performed offline and is hard to apply in real-time production settings. The development of edge computing provides a new solution for real-time image restoration: the original image can be uploaded to an edge node, processed in real time, and the results returned to users immediately. However, the processing capacity of an edge node is still limited, which requires a lightweight image restoration algorithm. A novel real-time image restoration algorithm for edge computing is proposed. Firstly, 10 classical benchmark functions are used to determine the population size and maximum iteration count of the traction fruit fly optimization algorithm (TFOA). Secondly, TFOA is used to find the optimal parameters of the least squares support vector regression (LSSVR) kernel function, with the image restoration error function serving as the fitness function of TFOA. Thirdly, the LSSVR algorithm is used to restore the image. During restoration, the training process establishes a mapping between the degraded image and the adjacent pixels of the original image; once this mapping is established, the degraded image can be restored by applying it. Comparison and analysis of experiments show that the proposed method meets the requirements of real-time image restoration, speeds up restoration, and improves image quality.
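The general shape of using a fruit-fly-style search to tune LSSVR kernel hyperparameters can be sketched as follows. This is a generic fruit-fly-style random search, not the paper's TFOA variant, and the error function here is a stand-in for the image restoration error:

```python
# Minimal fruit-fly-style search over two positive hyperparameters
# (e.g. LSSVR regularisation gamma and kernel width sigma). Per the abstract,
# population size and iteration count would first be tuned on benchmark
# functions; the values below are illustrative.
import random

def foa_optimize(error_fn, iters=200, pop=20, seed=0):
    rng = random.Random(seed)
    best = (1.0, 1.0)
    best_err = error_fn(*best)
    for _ in range(iters):
        for _ in range(pop):
            # Each "fly" takes a random step around the current best smell.
            cand = (abs(best[0] + rng.uniform(-0.5, 0.5)),
                    abs(best[1] + rng.uniform(-0.5, 0.5)))
            err = error_fn(*cand)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err

# Stand-in error surface with optimum at gamma=2, sigma=3.
params, err = foa_optimize(lambda g, s: (g - 2) ** 2 + (s - 3) ** 2)
```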


2021 ◽  
Vol 2078 (1) ◽  
pp. 012010
Author(s):  
JianMin Zhang ◽  
YaWen Dai

Abstract An adaptive networking method based on LoRa (Long Range) technology is proposed, and the data transmission of wireless sensor networks widely used in the Internet of Things is studied. A star network built on LoRa wireless transmission technology is designed to form an adaptive data transmission network. The network topology, hardware design, and adaptive networking method are described. To address the problems of limited node capacity, adjacent-frequency interference, and unreasonable channel resource allocation in LoRa networks, an adaptive frequency-hopping mechanism is used for networking. The system uses an i.MX6ULL as the main controller and integrates 8 LoRa modules simultaneously, which greatly improves the node capacity of the gateway. The server enables real-time viewing and monitoring of node devices and real-time evaluation of channel communication quality.
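The adaptive frequency-hopping idea can be sketched as a channel-selection rule; the interference scores and the adjacency rule below are illustrative assumptions, not the paper's protocol:

```python
# Hedged sketch: the gateway scores each LoRa channel (e.g. from recent RSSI
# or packet loss) and hops a node to the cleanest channel that is not
# adjacent in frequency to its current one.
def pick_channel(current, interference):
    """interference: channel index -> score (higher = worse).
    Skip the current channel and its immediate neighbours to avoid
    adjacent-frequency interference."""
    candidates = {ch: s for ch, s in interference.items()
                  if abs(ch - current) > 1}
    return min(candidates, key=candidates.get) if candidates else current

scores = {0: 0.9, 1: 0.2, 2: 0.1, 3: 0.5, 4: 0.05}
print(pick_channel(1, scores))  # → 4
```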


2021 ◽  
pp. 39-45
Author(s):  
Yabin Wang ◽  
Jing Yu

The emergence of edge computing makes up for the limited capacity of devices. By migrating intensive computing tasks from devices to edge nodes (ENs), we can save energy while still maintaining quality of service. Computation offloading decisions involve collaboration and complex resource management, and should be made in real time according to the dynamic workload and network environment. A simulation-based approach is used to maximize long-term utility by deploying deep reinforcement learning agents on IoT devices and edge nodes, and federated learning is introduced to distribute the deep reinforcement learning agents. First, an IoT system supporting edge computing is built; a device downloads the existing model from the edge node for training and offloads intensive computing tasks to the edge node. The device then uploads its updated parameters to the edge node, which aggregates them with its local model to obtain a new model. The cloud, in turn, collects and aggregates the new models from the edge nodes, and can also obtain updated parameters from the edge nodes to apply to the devices.
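The aggregation step in the device-to-edge-to-cloud loop described above amounts to combining parameter updates from multiple agents. A plain FedAvg-style mean is a minimal sketch of that step; uniform weighting is an assumption here:

```python
# Hedged sketch of federated aggregation: the edge node averages the
# parameter vectors uploaded by devices (and its own) element-wise,
# FedAvg-style with equal weights.
def fed_avg(models):
    """models: list of equal-length parameter vectors -> averaged vector."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

device_a = [1.0, 2.0, 3.0]
device_b = [3.0, 4.0, 5.0]
print(fed_avg([device_a, device_b]))  # → [2.0, 3.0, 4.0]
```

The same function serves at the cloud tier, where the inputs are the edge nodes' freshly aggregated models rather than raw device updates.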


2022 ◽  
Vol 355 ◽  
pp. 03036
Author(s):  
Wei Li ◽  
Zhiyuan Han ◽  
Jian Shen ◽  
Dandan Luo ◽  
Bo Gao ◽  
...  

Herein, a real-time video analysis system for edge computing is proposed on the basis of a distributed AI cluster. With an ARM cluster server as the hardware platform, a distributed software platform is constructed. The system is characterized by flexible expansion, flexible deployment, data security, and efficient use of network bandwidth, which makes it well suited to edge computing scenarios. According to the measurement data, the system increases the speed of AI computation by over 20 times compared with an embedded single board, achieving computational performance comparable to a GPU. It is therefore considered well suited to computation-intensive applications such as real-time AI computing.
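One way such a cluster parallelises video analysis is to fan frames out across worker nodes; the round-robin dispatcher below is an assumed, simplified illustration of that idea, not the system's actual scheduler:

```python
# Illustrative sketch: distribute video frame ids across ARM worker nodes
# round-robin so AI inference runs on all boards in parallel.
from itertools import cycle

def dispatch(frames, workers):
    """Return a mapping worker -> list of frame ids assigned to it."""
    plan = {w: [] for w in workers}
    rr = cycle(workers)
    for f in frames:
        plan[next(rr)].append(f)
    return plan

print(dispatch(range(5), ["node0", "node1"]))
# → {'node0': [0, 2, 4], 'node1': [1, 3]}
```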


Author(s):  
Petar Radanliev ◽  
David De Roure ◽  
Kevin Page ◽  
Max Van Kleek ◽  
Omar Santos ◽  
...  

Abstract Multiple governmental agencies and private organisations have made commitments for the colonisation of Mars. Such colonisation requires complex systems and infrastructure that could be very costly to repair or replace in cases of cyber-attacks. This paper surveys deep learning algorithms, IoT cyber security and risk models, and established mathematical formulas to identify the best approach for developing a dynamic and self-adapting system for predictive cyber risk analytics supported with Artificial Intelligence and Machine Learning and real-time intelligence in edge computing. The paper presents a new mathematical approach for integrating concepts for cognition engine design, edge computing and Artificial Intelligence and Machine Learning to automate anomaly detection. This engine instigates a step change by applying Artificial Intelligence and Machine Learning embedded at the edge of IoT networks, to deliver safe and functional real-time intelligence for predictive cyber risk analytics. This will enhance capacities for risk analytics and assist in the creation of a comprehensive and systematic understanding of the opportunities and threats that arise when edge computing nodes are deployed, and when Artificial Intelligence and Machine Learning technologies are migrated to the periphery of the internet and into local IoT networks.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 955
Author(s):  
Zhiyuan Li ◽  
Ershuai Peng

With the development of smart vehicles and various vehicular applications, the Vehicular Edge Computing (VEC) paradigm has attracted attention from academia and industry. Compared with cloud computing platforms, VEC has several new features, such as higher network bandwidth and lower transmission delay. Recently, offloading computation-intensive vehicular tasks has become a new research field for vehicular edge computing networks. However, the dynamic network topology and bursty offloading of computation tasks cause computation load imbalance in VEC networks. To solve this issue, this paper proposes an optimal control-based computing task scheduling algorithm and introduces a software-defined networking/OpenFlow framework to build a software-defined vehicular edge networking structure. The proposed algorithm obtains globally optimal results and achieves load balancing by virtue of global load status information. Besides, the proposed algorithm adapts well to dynamic network environments through automatic parameter tuning. Experimental results show that the proposed algorithm effectively improves the utilization of computation resources and meets the computation and transmission delay requirements of various vehicular tasks.
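Scheduling against global load status, as described above, can be sketched as picking the node with the smallest estimated completion time; the simple queue-over-capacity cost model below is an assumption for illustration, not the paper's optimal-control formulation:

```python
# Hedged sketch: with an SDN controller's global view of node load, assign
# a task to the VEC node minimising estimated completion time
# (queued work plus the new task, divided by node capacity).
def schedule(task_cycles, nodes):
    """nodes: name -> {"queued": cycles waiting, "capacity": cycles/sec}."""
    return min(nodes, key=lambda n: (nodes[n]["queued"] + task_cycles)
               / nodes[n]["capacity"])

nodes = {"rsu1": {"queued": 50, "capacity": 10},   # (50+10)/10 = 6.0 s
         "rsu2": {"queued": 10, "capacity": 5}}    # (10+10)/5  = 4.0 s
print(schedule(10, nodes))  # → rsu2
```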

