Real-Time Task Scheduling in Smart Factories Employing Fog Computing

Author(s): Ming-Tuo Zhou, Tian-Feng Ren, Zhi-Ming Dai, Xin-Yu Feng
2022, pp. 245-261
Author(s): Geetha J. J., Jaya Lakshmi D. S., Keerthana Ningaraju L. N.

Distributed caching is a system used by dynamic, high-traffic websites to process incoming user requests and perform the required tasks efficiently. Distributed caching currently relies on a hashing algorithm to serve this purpose. A significant drawback of ordinary hashing in this setting is that adding new servers changes the previous key-to-server mapping, forcing the keys to be rehashed. An effective algorithm is therefore needed to address this problem, and consistent hashing serves as a solution to both the distribution and the rehashing issues. Most upcoming Internet of Things applications will have to be latency-aware and cannot afford the data-transmission and computation time incurred by cloud servers; real-time processing on devices in close proximity is needed instead. Hence, the authors aim to employ a real-time task scheduling algorithm in which the computations arising from user requests are distributed across the servers by a consistent hashing algorithm.
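As an illustration of the technique this abstract refers to, the sketch below shows a minimal consistent-hashing ring in Python: each request maps to the first server clockwise from its hash, so adding a server remaps only a small fraction of keys instead of triggering a full rehash. The class, server names and replica count are illustrative assumptions, not details taken from the article.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Illustrative consistent-hash ring: a key maps to the first server
    clockwise from its hash, so adding or removing a server only remaps
    the keys that fall in that server's arc."""

    def __init__(self, servers=None, replicas=100):
        self.replicas = replicas   # virtual nodes per server, smooths the load
        self._ring = []            # sorted list of virtual-node hashes
        self._nodes = {}           # virtual-node hash -> server name
        for s in servers or []:
            self.add_server(s)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_server(self, server):
        for i in range(self.replicas):
            h = self._hash(f"{server}#{i}")
            bisect.insort(self._ring, h)
            self._nodes[h] = server

    def remove_server(self, server):
        for i in range(self.replicas):
            h = self._hash(f"{server}#{i}")
            self._ring.remove(h)
            del self._nodes[h]

    def get_server(self, key):
        if not self._ring:
            raise ValueError("no servers in the ring")
        idx = bisect.bisect(self._ring, self._hash(key)) % len(self._ring)
        return self._nodes[self._ring[idx]]

# Usage: adding a fourth server leaves most keys on their original server.
ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
before = {f"req-{i}": ring.get_server(f"req-{i}") for i in range(1000)}
ring.add_server("cache-d")
moved = sum(ring.get_server(k) != v for k, v in before.items())
print(f"{moved} of 1000 keys remapped after adding a server")
```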


2021, Vol 20 (5s), pp. 1-25
Author(s): Shounak Chakraborty, Sangeet Saha, Magnus Själander, Klaus McDonald-Maier

Achieving high result-accuracy in approximate computing (AC) based real-time applications without violating the power constraints of the underlying hardware is a challenging problem. Execution of such AC real-time tasks can be divided into execution of the mandatory part, which yields a result of acceptable quality, followed by partial or complete execution of the optional part to improve the accuracy of the initial result within the given time limit. However, enhancing result-accuracy at the cost of increased execution length might lead to deadline violations and higher energy usage. We propose Prepare, a novel hybrid offline-online approximate real-time task-scheduling approach that first schedules AC-based tasks and determines operational processing speeds for each individual task, constrained by the system-wide power limit, deadlines, and task dependencies. At runtime, by employing fine-grained DVFS, the energy-adaptive processing-speed governing mechanism of Prepare reduces the processing speed during each stall induced by a last-level cache miss and, once the stall finishes, scales the processing speed up to a value higher than the predetermined one. To ensure on-chip thermal safety, this higher processing speed is maintained only for a short time span after each stall; nevertheless, it reduces the execution times of individual tasks and generates slack. Prepare exploits the slack either to enhance the result-accuracy of the tasks, to improve the thermal and energy efficiency of the underlying hardware, or both. With a 70-80% workload, Prepare offers 75% result-accuracy with its constrained scheduling, which is enhanced by 5.3% in our benchmark-based evaluation of the online energy-adaptive mechanism on a 4-core homogeneous chip multiprocessor, while meeting the deadline constraint. Overall, while maintaining runtime thermal safety, Prepare reduces peak temperature by up to 8.6 °C for our baseline system. Our empirical evaluation shows that the constrained scheduling of Prepare outperforms a state-of-the-art scheduling policy, whereas our runtime energy-adaptive mechanism surpasses two current DVFS-based thermal management techniques.
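The split into mandatory and optional parts described above follows the imprecise-computation model. The Python sketch below is a heavily simplified, hypothetical illustration of that idea only: mandatory parts run in earliest-deadline-first order and any slack before each deadline is spent on the optional part to raise accuracy. It does not model Prepare's DVFS-based speed governing, power limit, task dependencies or thermal control; the task names and numbers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ACTask:
    name: str
    mandatory: float   # work units that must complete for acceptable quality
    optional: float    # extra work that only improves result-accuracy
    deadline: float    # absolute deadline (time units)

def schedule_with_slack(tasks, speed):
    """Toy imprecise-computation schedule: run mandatory parts in EDF order
    at a fixed speed, then spend leftover time before each deadline on the
    optional part. Simplification: it does not reserve time for the mandatory
    parts of later tasks, which a real scheduler must do."""
    t, plan = 0.0, []
    for task in sorted(tasks, key=lambda x: x.deadline):
        t += task.mandatory / speed                      # mandatory part
        if t > task.deadline:
            raise RuntimeError(f"{task.name}: mandatory part misses its deadline")
        opt_time = min(task.deadline - t, task.optional / speed)
        t += opt_time                                    # slack spent on accuracy
        accuracy = (task.mandatory + opt_time * speed) / (task.mandatory + task.optional)
        plan.append((task.name, round(t, 3), round(accuracy, 3)))
    return plan

tasks = [ACTask("fft", 2.0, 2.0, 6.0), ACTask("filter", 1.0, 3.0, 7.0)]
for name, finish, accuracy in schedule_with_slack(tasks, speed=1.0):
    print(f"{name}: finishes at t={finish}, relative accuracy {accuracy}")
```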


2020, Vol 13 (3), pp. 261-282
Author(s): Mohammad Khalid Pandit, Roohie Naaz Mir, Mohammad Ahsan Chishti

Purpose: The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data it generates in an ultralow-latency environment. The computational latency incurred by a cloud-only solution can be significantly reduced by the fog computing layer, which offers a computing infrastructure that minimizes latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that achieves optimal resource utilization and minimum task execution time while significantly reducing communication costs during distributed execution.

Design/methodology/approach: To realize this, the authors propose a two-level neural network (NN)-based task scheduling system, in which the first-level NN (a feed-forward neural network or convolutional neural network, FFNN/CNN) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (the RL module) schedules all tasks sent to the fog layer by the level-1 NN among the available fog devices. This real-time task assignment policy is used to minimize the total computational latency (makespan) as well as communication costs.

Findings: Experimental results indicate that the RL technique works better than the computationally infeasible greedy approach for task scheduling, and that combining RL with a task-clustering algorithm reduces communication costs significantly.

Originality/value: The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with the best resource utilization, minimum makespan and minimum communication cost between tasks.
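A hypothetical, heavily reduced sketch of the two-level idea described above, in Python: a simple size threshold stands in for the first-level FFNN/CNN offload decision, and a one-step tabular Q-learning rule stands in for the second-level RL module that assigns fog-bound tasks to fog nodes, with negative latency as the reward. The node speeds, threshold and learning parameters are assumptions for illustration only, and the sketch ignores communication cost and node queueing, which the paper's policy also optimizes.

```python
import random

FOG_SPEEDS = [2.0, 1.0, 0.5]     # assumed processing rates of three fog nodes
EDGE_LIMIT = 8.0                 # tasks larger than this go straight to the cloud

def level1_offload(task_size):
    """Stand-in for the first-level FFNN/CNN: decide whether a task stays in
    the fog layer or is forwarded to the cloud. A real system would learn
    this decision from the data stream."""
    return "fog" if task_size <= EDGE_LIMIT else "cloud"

# Level 2 stand-in: tabular Q-learning over (size bucket) -> fog node.
Q = {}
ALPHA, EPS = 0.1, 0.2

def bucket(task_size):
    return int(task_size)        # coarse state: integer size bucket

def level2_assign(task_size, learn=True):
    s = bucket(task_size)
    Q.setdefault(s, [0.0] * len(FOG_SPEEDS))
    if learn and random.random() < EPS:
        a = random.randrange(len(FOG_SPEEDS))                    # explore
    else:
        a = max(range(len(FOG_SPEEDS)), key=lambda i: Q[s][i])   # exploit
    latency = task_size / FOG_SPEEDS[a]
    if learn:                    # one-step update, reward = -latency
        Q[s][a] += ALPHA * (-latency - Q[s][a])
    return a, latency

# Train on a synthetic stream of task sizes, then report the learned choices.
random.seed(0)
for _ in range(5000):
    size = random.uniform(0.5, 12.0)
    if level1_offload(size) == "fog":
        level2_assign(size)
for s in sorted(Q):
    best = max(range(len(FOG_SPEEDS)), key=lambda i: Q[s][i])
    print(f"tasks of size ~{s}: prefer fog node {best} (rate {FOG_SPEEDS[best]})")
```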


Sensors, 2019, Vol 19 (5), pp. 1023
Author(s): Juan Wang, Di Li

Fog computing provides computation, storage and network services for smart manufacturing. However, in a smart factory the task requests, terminal devices and fog nodes are highly heterogeneous. The tasks of terminal equipment differ in character: fault-detection tasks have strict real-time demands, production-scheduling tasks require a large amount of computation, inventory-management tasks require a vast amount of storage space, and so on. In addition, fog nodes differ in processing ability, such that strong fog nodes with considerable computing resources can help terminal equipment complete complex task processing such as manufacturing inspection, fault detection and device state analysis. In this setting a new problem appears: how to schedule tasks among the different fog nodes so as to minimize delay and energy consumption while improving smart-manufacturing performance metrics such as production efficiency, product quality and equipment utilization rate. Therefore, this paper studies the task scheduling strategy in the fog computing scenario. A task scheduling strategy based on a hybrid heuristic (HH) algorithm is proposed that mainly addresses the limited computing resources and high energy consumption of terminal devices and makes real-time, efficient processing of terminal-device tasks feasible. Finally, the experimental results show that the proposed strategy achieves superior performance compared to other strategies.
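As a rough illustration of the objective this abstract describes, a weighted combination of delay and energy across heterogeneous fog nodes, the Python sketch below greedily assigns each task to the node with the lowest weighted cost given current queues. The hybrid heuristic in the paper searches this space far more thoroughly; the node parameters, weights and task sizes here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    cpu_rate: float        # cycles per second
    power: float           # watts while computing
    tx_delay: float        # seconds to ship a task's data to this node
    busy_until: float = 0.0

@dataclass
class Task:
    name: str
    cycles: float          # required CPU cycles

W_DELAY, W_ENERGY = 0.7, 0.3   # assumed weights of the combined objective

def cost(node, task, now=0.0):
    """Weighted delay + energy cost of running `task` on `node`, plus the
    node's new finish time if the task is placed there."""
    start = max(now, node.busy_until) + node.tx_delay
    compute = task.cycles / node.cpu_rate
    delay = start + compute - now
    energy = node.power * compute
    return W_DELAY * delay + W_ENERGY * energy, start + compute

def greedy_schedule(tasks, nodes):
    """Greedy baseline for the delay+energy objective: each task goes to the
    node with the lowest weighted cost given current queues. Illustrates the
    objective being minimized, not the paper's hybrid heuristic itself."""
    plan = []
    for task in tasks:
        scored = [(cost(n, task), n) for n in nodes]
        (c, finish), node = min(scored, key=lambda x: x[0][0])
        node.busy_until = finish
        plan.append((task.name, node.name, round(c, 3)))
    return plan

nodes = [FogNode("fog-1", 3e9, 15.0, 0.02), FogNode("fog-2", 1e9, 5.0, 0.01)]
tasks = [Task("fault-detect", 6e8), Task("sched-opt", 3e9), Task("inventory", 1e9)]
for name, node, c in greedy_schedule(tasks, nodes):
    print(f"{name} -> {node} (weighted cost {c})")
```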

