Hierarchical Edge Computing: A Novel Multi-Source Multi-Dimensional Data Anomaly Detection Scheme for Industrial Internet of Things

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 111257-111270
Author(s):  
Yuhuai Peng ◽  
Aiping Tan ◽  
Jingjing Wu ◽  
Yuanguo Bi
2020 ◽  
Vol 28 (1) ◽  
pp. 331-346
Author(s):  
Dequan KONG ◽  
Desheng LIU ◽  
Lei ZHANG ◽  
Lili HE ◽  
Qingwu SHI ◽  
...  

2021 ◽  
Vol 17 (7) ◽  
pp. 5010-5011
Author(s):  
Zhaolong Ning ◽  
Edith Ngai ◽  
Ricky Y. K. Kwok ◽  
Mohammad S. Obaidat

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 101539-101549 ◽  
Author(s):  
Hao Wu ◽  
Hui Tian ◽  
Gaofeng Nie ◽  
Pengtao Zhao

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 74217-74230 ◽  
Author(s):  
Bela Genge ◽  
Piroska Haller ◽  
Calin Enachescu

2021 ◽  
Author(s):  
Xiaoyu Hao ◽  
Ruohai Zhao ◽  
Tao Yang ◽  
Yulin Hu ◽  
Bo Hu ◽  
...  

Abstract Edge computing has become one of the key enablers for ultra-reliable and low-latency communications in the Industrial Internet of Things in fifth-generation communication systems, and is also a promising technology for future sixth-generation communication systems. In this work, we consider the application of edge computing to smart factories for mission-critical task offloading over wireless links. In such scenarios, although high end-to-end delays from the generation to the completion of tasks occur with low probability, they may cause severe casualties and property loss, and should be treated seriously. Inspired by risk management theory, widely used in finance, we adopt the Conditional Value at Risk to capture the tail of the delay distribution. An upper bound of the Conditional Value at Risk is derived through analysis of the queues at both the devices and the edge computing servers. We aim to find the optimal offloading policy, taking into consideration both the average and the worst-case delay performance of the system. Since the formulated optimization problem is a non-convex mixed-integer non-linear programming problem, it is decomposed into sub-problems and a two-stage heuristic algorithm is proposed. Simulation results validate our analysis and indicate that the proposed algorithm can reduce the risk in both the queueing and end-to-end delay.
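The Conditional Value at Risk (CVaR) used in the abstract above has a simple empirical form: at confidence level α, it is the mean delay among the worst (1 − α) fraction of samples. The following sketch illustrates this on synthetic task delays; the exponential delay model and the 0.95 level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def empirical_cvar(delays, alpha=0.95):
    """Empirical CVaR: mean of the delays at or above the alpha-quantile (VaR)."""
    delays = np.asarray(delays, dtype=float)
    var = np.quantile(delays, alpha)   # Value at Risk: the alpha-quantile
    tail = delays[delays >= var]       # samples in the worst (1 - alpha) tail
    return tail.mean()

# Illustrative example: exponentially distributed end-to-end delays (ms)
rng = np.random.default_rng(0)
delays = rng.exponential(scale=10.0, size=10_000)
cvar = empirical_cvar(delays, alpha=0.95)
```

Because CVaR averages over the tail beyond VaR, it is always at least as large as both the VaR and the mean delay, which is why the paper uses it to capture rare but severe delay events that the average alone would hide.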


2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Zhenzhong Zhang ◽  
Wei Sun ◽  
Yanliang Yu

Abstract With the vigorous development of the Internet of Things, the Internet, cloud computing, and mobile terminals, edge computing has emerged as a new technology and has become an important component of the Industrial Internet of Things. Faced with large-scale data processing and computation, traditional cloud computing is under tremendous pressure, and the demand for new low-latency computing technologies is pressing. As a supplementary extension of cloud computing, mobile edge computing moves computing power from the cloud down to network edge nodes. Through cooperation among computing nodes, more nodes can participate in computation, the types of computation supported are more comprehensive, and the computing coverage is greater, largely making up for the shortcomings of cloud computing. Although edge computing has many advantages and a growing body of research and application results, allocating large numbers of computing tasks and computing resources to computing nodes, and scheduling computing tasks at edge nodes, remain challenges. Addressing these problems of resource allocation and task scheduling, this paper designs a delay-aware dynamic task scheduling strategy for edge computing, which achieves reasonable utilization of the computing resources required by edge computing systems. This paper also proposes a resource allocation scheme based on the simulated annealing algorithm, which minimizes the overall performance loss of the system while keeping the system delay low. Finally, experiments verify that the task scheduling and resource allocation methods proposed in this paper can significantly reduce the response delay of the application.
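The simulated-annealing approach mentioned in the abstract above can be sketched on a toy version of the allocation problem. The sketch below assigns tasks to edge nodes so as to minimize the maximum node load, a simple proxy for response delay; the cost model, task loads, cooling schedule, and parameter values are all illustrative assumptions, not the paper's actual formulation.

```python
import math
import random

def simulated_annealing_assign(task_loads, n_nodes, iters=20_000,
                               t0=1.0, cooling=0.9995, seed=0):
    """Assign tasks to edge nodes via simulated annealing,
    minimizing the maximum node load (a simple delay proxy)."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_nodes) for _ in task_loads]

    def cost(a):
        loads = [0.0] * n_nodes
        for load, node in zip(task_loads, a):
            loads[node] += load
        return max(loads)  # makespan-style objective

    cur_cost = cost(assign)
    best, best_cost = list(assign), cur_cost
    temp = t0
    for _ in range(iters):
        i = rng.randrange(len(task_loads))   # perturb: move one task
        old = assign[i]
        assign[i] = rng.randrange(n_nodes)
        new_cost = cost(assign)
        # Accept improvements always; accept worse moves with
        # probability exp(-(increase)/temperature), so the search
        # can escape local minima early and settles as temp decays.
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / temp):
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = list(assign), new_cost
        else:
            assign[i] = old                  # revert the rejected move
        temp *= cooling
    return best, best_cost

# Illustrative example: 8 tasks balanced across 3 edge nodes
tasks = [5, 3, 8, 2, 7, 4, 6, 1]
plan, makespan = simulated_annealing_assign(tasks, n_nodes=3)
```

With total load 36 across 3 nodes, no assignment can finish faster than a per-node load of 12, so the annealer's result can be checked against that lower bound.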

