Task offloading decision in fog computing system

2017 ◽  
Vol 14 (11) ◽  
pp. 59-68 ◽  
Author(s):  
Qiliang Zhu ◽  
Baojiang Si ◽  
Feifan Yang ◽  
You Ma
2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Qiong Wu ◽  
Hongmei Ge ◽  
Qiang Fan ◽  
Wei Yin ◽  
Bo Chang ◽  
...  

Various emerging vehicular applications, such as autonomous driving and safety early warning, are used to improve traffic safety and ensure passenger comfort. Completing these applications requires significant computational resources to perform numerous latency-sensitive/non-latency-sensitive and computation-intensive tasks. It is hard for vehicles to satisfy the computation requirements of these applications due to the limited computational capability of the on-board computer. To solve this problem, many works have proposed efficient task offloading schemes in computing paradigms such as mobile fog computing (MFC) for the vehicular network. In MFC, vehicles adopt the IEEE 802.11p protocol to transmit tasks. Under IEEE 802.11p, tasks can be divided into high-priority and low-priority classes according to their delay requirements. However, no existing task offloading work takes into account the different priorities of tasks transmitted by different access categories (ACs) of IEEE 802.11p. In this paper, we propose an efficient task offloading strategy to maximize the long-term expected system reward in terms of reducing the execution time of tasks. Specifically, we jointly consider the impact of the priorities of tasks transmitted by different ACs, the mobility of vehicles, and the arrival/departure of computing tasks, and then transform the offloading problem into a semi-Markov decision process (SMDP) model. Afterwards, we adopt the relative value iteration algorithm to solve the SMDP model and find the optimal task offloading strategy. Finally, we evaluate the performance of the proposed scheme through extensive experiments. Numerical results indicate that the proposed offloading strategy performs well compared to the greedy algorithm.
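The relative value iteration step this abstract relies on can be sketched as follows. This is a minimal generic average-reward solver on a toy two-state model (assuming the SMDP has already been uniformized into an equivalent discrete-time MDP); the states, actions, transition probabilities, and rewards here are illustrative assumptions, not the paper's actual offloading model.

```python
import numpy as np

def relative_value_iteration(P, R, tol=1e-9, max_iter=100_000):
    """Average-reward relative value iteration.
    P: (A, S, S) transition probabilities, R: (S, A) expected rewards.
    Returns (gain, bias vector h, greedy policy)."""
    A, S, _ = P.shape
    h = np.zeros(S)
    g = 0.0
    for _ in range(max_iter):
        # Bellman backup: Q[s, a] = R[s, a] + sum_s' P[a, s, s'] * h[s']
        Q = R + np.einsum('aij,j->ia', P, h)
        Th = Q.max(axis=1)
        g = Th[0]            # gain estimate at reference state 0
        h_new = Th - g       # subtract reference value to keep h bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return g, h, Q.argmax(axis=1)

# Toy example (illustrative only): action 0 = execute locally, 1 = offload.
P = np.array([
    [[0.9, 0.1],    # action 0: rows are transitions from states 0 and 1
     [0.2, 0.8]],
    [[0.5, 0.5],    # action 1
     [0.4, 0.6]],
])
R = np.array([
    [1.0, 2.0],     # state 0: reward of actions 0 and 1
    [0.0, 0.5],     # state 1
])
gain, h, policy = relative_value_iteration(P, R)
```

On this toy model the solver converges to offloading in both states, with an optimal long-run average reward of 7/6.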


Author(s):  
А.Н. ВОЛКОВ

One direction in the development of 5G and 2030 communication networks is the integration of distributed computing structures into the network, such as edge computing (MEC) and fog computing systems, which are designed to decentralize the computing part of networks. In this regard, it is necessary to investigate and determine the principles of providing services based on a distributed computing infrastructure, including under conditions of limited resources of individual components (fog devices). This article proposes a new framework for a distributed dynamic fog computing system based on a microservice architectural approach to the implementation, deployment, and migration of the software of the provided services. The article examines the typical architecture of the microservice approach and its implementation in fog computing, and also investigates two algorithms: K-means for finding the center of user load, and particle swarm optimization (PSO) for determining the fog device with the characteristics required for the subsequent migration of the microservice.
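The K-means step mentioned above (finding the center of user load) can be sketched in plain NumPy. The 2-D user coordinates, the value of k, and the deterministic farthest-point initialization are illustrative assumptions, not the article's actual setup; the PSO device-selection step is not shown.

```python
import numpy as np

def kmeans(points, k, iters=100):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centers = [points[0]]
    for _ in range(1, k):
        # next center: the point farthest from all centers chosen so far
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign every point to its nearest center
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each center to the mean of its assigned points
        new = np.array([points[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Illustrative load: two groups of user positions around (0, 0) and (5, 5).
rng = np.random.default_rng(1)
users_a = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
users_b = rng.normal([5.0, 5.0], 0.1, size=(50, 2))
centers, labels = kmeans(np.vstack([users_a, users_b]), k=2)
```

Each returned center is the centroid of one user group, i.e. a candidate "center of user load" near which a microservice could be placed or migrated.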


2021 ◽  
Author(s):  
Do Bao Son ◽  
Vu Tri An ◽  
Trinh Thu Hai ◽  
Binh Minh Nguyen ◽  
Nguyen Phi Le ◽  
...  

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 149011-149024
Author(s):  
Zihan Gao ◽  
Wanming Hao ◽  
Zhuo Han ◽  
Shouyi Yang

Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1287 ◽  
Author(s):  
Kuntao Cui ◽  
Bin Lin ◽  
Wenli Sun ◽  
Wenqiang Sun

In recent years, unmanned surface vehicles (USVs) have made important advances in civil, maritime, and military applications. With the continuous improvement of autonomy, the increasing complexity of tasks, and the emergence of various types of advanced sensors, higher requirements are imposed on the computing performance of USV clusters, especially for latency-sensitive tasks. However, during the execution of marine operations, due to the relative movement of the USV cluster nodes and the network topology of the cluster, the wireless channel states change rapidly, and the computing resources of cluster nodes may become available or unavailable at any time, which is difficult to predict accurately in advance. Therefore, we propose an optimized offloading mechanism based on classic multi-armed bandit (MAB) theory. This mechanism enables USV cluster nodes to make offloading decisions dynamically by learning the potential computing performance of their neighboring team nodes, so as to minimize the average computation task offloading delay. The mechanism is implemented as an optimized algorithm named the Adaptive Upper Confidence Bound (AUCB) algorithm, and corresponding simulations are designed to evaluate its performance. The algorithm enables the USV cluster to adapt effectively to marine vehicular fog computing networks, balancing the trade-off between exploration and exploitation (EE). The simulation results show that the proposed algorithm quickly converges to the optimal computation task offloading combination strategy under both heavy and light input data loads.
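The bandit-based offloading idea above can be sketched with plain UCB1, treating each neighboring cluster node as an arm and the negative observed task delay as the reward. This is not the paper's AUCB variant (which additionally adapts the confidence term to the input data load); the node count and delay distributions below are illustrative assumptions.

```python
import math
import random

class UCB1Offloader:
    """Each neighboring node is a bandit arm; the reward is the negative
    observed task completion delay, so maximizing reward minimizes delay."""
    def __init__(self, n_nodes):
        self.counts = [0] * n_nodes
        self.means = [0.0] * n_nodes   # running mean of -delay per node
        self.t = 0

    def select(self):
        self.t += 1
        for node, c in enumerate(self.counts):
            if c == 0:                 # play every node once first
                return node
        # optimism in the face of uncertainty: mean + confidence radius
        return max(range(len(self.counts)),
                   key=lambda i: self.means[i]
                   + math.sqrt(2.0 * math.log(self.t) / self.counts[i]))

    def update(self, node, delay):
        self.counts[node] += 1
        self.means[node] += ((-delay) - self.means[node]) / self.counts[node]

# Illustrative simulation: 3 neighboring nodes with mean delays 1.0, 0.5, 2.0.
random.seed(0)
mean_delay = [1.0, 0.5, 2.0]
off = UCB1Offloader(3)
for _ in range(2000):
    node = off.select()
    delay = max(0.0, random.gauss(mean_delay[node], 0.1))
    off.update(node, delay)
```

After a short exploration phase the offloader concentrates almost all tasks on the lowest-delay node, which is the exploration/exploitation balance the abstract refers to.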


2020 ◽  
Vol 7 (1) ◽  
pp. 773-785 ◽  
Author(s):  
Qiong Wu ◽  
Hanxu Liu ◽  
Ruhai Wang ◽  
Pingyi Fan ◽  
Qiang Fan ◽  
...  
