Workload Scheduling: Recently Published Documents

TOTAL DOCUMENTS: 85 (five years: 33)
H-INDEX: 13 (five years: 3)

Author(s):  
Tao Zheng ◽  
Jian Wan ◽  
Jilin Zhang ◽  
Congfeng Jiang

Edge computing is a new paradigm for providing cloud computing capabilities at the edge of the network, near mobile users. It offers an effective way to assist mobile devices with computation-intensive and delay-sensitive tasks. However, the network edge is a dynamic environment with a large number of devices, high user mobility, heterogeneous applications, and intermittent traffic. In such an environment, edge computing often suffers from unbalanced resource allocation, which leads to task failures and degrades system performance. To tackle this problem, we propose a deep reinforcement learning (DRL)-based workload scheduling approach that aims to balance the workload and reduce both the service time and the failed task rate. We adopt the Deep Q-Network (DQN) algorithm to cope with the complexity and high dimensionality of the workload scheduling problem. Simulation results show that, compared with other approaches, our approach achieves the best performance in terms of service time, virtual machine (VM) utilization, and failed task rate. Our DRL-based approach can provide an efficient solution to the workload scheduling problem in edge computing.
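To make the DQN-based scheduling idea concrete, the sketch below shows, in Python, how an epsilon-greedy Q-network might pick a target VM for each incoming task and be updated from a reward that penalizes service time, task failures, and load imbalance. The state layout, reward weights, network sizes, and names such as QNet, select_vm, and td_update are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DQN-style scheduler that assigns an incoming task to one
# of several edge VMs. State features, reward weights, and network sizes are
# illustrative assumptions, not the paper's actual configuration.
import random
import torch
import torch.nn as nn

N_VMS = 4                  # assumed number of candidate edge VMs
STATE_DIM = 2 * N_VMS + 1  # e.g., per-VM utilization + queue length + task size

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNet(STATE_DIM, N_VMS)
target_net = QNet(STATE_DIM, N_VMS)
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
GAMMA, EPSILON = 0.99, 0.1

def select_vm(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of the VM that receives the next task."""
    if random.random() < EPSILON:
        return random.randrange(N_VMS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def reward(service_time: float, failed: bool, load_imbalance: float) -> float:
    # Assumed shaping: penalize slow service, task failure, and load imbalance.
    return -service_time - 10.0 * float(failed) - load_imbalance

def td_update(s, a, r, s_next):
    """One temporal-difference update on a single transition (no replay buffer)."""
    q_sa = q_net(s)[a]
    with torch.no_grad():
        target = r + GAMMA * target_net(s_next).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

if __name__ == "__main__":
    s = torch.rand(STATE_DIM)
    a = select_vm(s)
    td_update(s, a, reward(0.3, False, 0.1), torch.rand(STATE_DIM))
```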


2021 ◽  
Author(s):  
Stefan Nastic ◽  
Thomas Pusztai ◽  
Andrea Morichetta ◽  
Victor Casamayor Pujol ◽  
Schahram Dustdar ◽  
...  
Keyword(s):  

2021 ◽  
Vol 2021 ◽  
pp. 1-12 ◽  
Author(s):  
Zhongmin Chen ◽  
Zhiwei Xu ◽  
Jianxiong Wan ◽  
Jie Tian ◽  
Limin Liu ◽  
...  

Novel smart environments, such as the smart home, the smart city, and intelligent transportation, are driving increasing interest in deploying deep neural networks (DNNs) on edge devices. Unfortunately, deploying DNNs on resource-constrained edge devices poses a huge challenge, since these workloads are computationally intensive. Moreover, the edge-server-based approach may be affected by incidental factors, such as network jitter and conflicts, when multiple tasks are offloaded to the same device. A rational workload scheduling scheme for smart environments is therefore highly desirable. In this work, we propose Conflict-resilient Incremental Offloading of Deep Neural Networks at Edge (CIODE) to improve the efficiency of DNN inference in edge smart environments. CIODE divides the DNN model into several partitions by layer and incrementally uploads them to local edge nodes. We design a waiting-lock-based scheduling paradigm to choose the edge devices to which DNN layers are offloaded; in particular, an advanced lock mechanism is proposed to handle concurrency conflicts. Real-world testbed experiments demonstrate that CIODE improves DNN inference performance over other state-of-the-art baselines by 20% to 70% and significantly improves robustness by exploiting collaboration among neighboring nodes.
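The following Python sketch illustrates the general idea of layer-wise incremental offloading with a per-node waiting lock that serializes conflicting offloads instead of failing them. The node model, the load-based selection rule, and names such as EdgeNode and incremental_offload are assumptions for illustration; the paper's CIODE scheduler is more elaborate.

```python
# Illustrative sketch of layer-wise incremental offloading with a per-node
# lock to avoid concurrent-offload conflicts. Node names, partition costs,
# and the selection rule are assumptions for demonstration only.
import threading
import time
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    name: str
    capacity: float                      # relative compute capacity (assumed)
    lock: threading.Lock = field(default_factory=threading.Lock)
    loaded_layers: list = field(default_factory=list)

def choose_node(nodes, partition_cost):
    """Pick the node whose projected load-to-capacity ratio is lowest."""
    return min(nodes, key=lambda n: (len(n.loaded_layers) + partition_cost) / n.capacity)

def offload_partition(node: EdgeNode, layer_id: int, upload_s: float = 0.05):
    """Upload one DNN partition; the lock serializes conflicting offloads."""
    # 'Waiting lock': block until the node is free instead of failing the task.
    with node.lock:
        time.sleep(upload_s)             # stand-in for the actual transfer
        node.loaded_layers.append(layer_id)

def incremental_offload(nodes, n_layers: int):
    """Offload DNN layers one at a time, re-choosing the target node each step."""
    placement = {}
    for layer in range(n_layers):
        node = choose_node(nodes, partition_cost=1)
        offload_partition(node, layer)
        placement[layer] = node.name
    return placement

if __name__ == "__main__":
    edge_nodes = [EdgeNode("edge-a", 1.0), EdgeNode("edge-b", 2.0)]
    print(incremental_offload(edge_nodes, n_layers=6))
```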


2021 ◽  
Author(s):  
Kaveh Khorramnejad ◽  
Alagan Anpalagan ◽  
Ling Guan

Recently, many methods and algorithms have been proposed in the pre-fetching area. However, pre-fetching integrated with workload scheduling has not been investigated as much. This thesis first reviews the principles of existing pre-fetching strategies, taking latency and cost as the primary objectives. It then focuses on an integrated workload scheduling and pre-fetching model to improve response time and minimize cost. The response time and cost problems are formulated, and a heuristic approach is proposed to overcome them. The integrated model is tested over a wide range of variables, and the effects of parameters such as processing speed and pre-fetcher utilization are analysed and compared. The results show that the integrated pre-fetching and workload scheduling model outperforms either technique used individually. Thus, this thesis can contribute to new solutions in this area.
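As a rough illustration of such an integrated model, the Python sketch below greedily picks the server that minimizes a weighted combination of response time and cost, and pre-fetches the task's input data to that server while a budget allows. The cost model, the weight alpha, and the helper names are assumptions, not the thesis's formulation.

```python
# Minimal sketch of a greedy heuristic that jointly decides where to schedule
# a task and whether its input data should be pre-fetched. The cost model and
# weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    speed: float          # relative processing speed
    fetch_latency: float  # seconds to pull data on a cache miss
    cost_per_sec: float   # monetary cost of compute time
    cached: set           # data items already pre-fetched

def response_time(server: Server, work: float, data_item: str) -> float:
    compute = work / server.speed
    fetch = 0.0 if data_item in server.cached else server.fetch_latency
    return compute + fetch

def objective(server: Server, work: float, data_item: str, alpha: float = 0.5) -> float:
    """Weighted combination of response time and monetary cost (alpha is assumed)."""
    t = response_time(server, work, data_item)
    cost = (work / server.speed) * server.cost_per_sec
    return alpha * t + (1 - alpha) * cost

def schedule(task, servers, prefetch_budget=1):
    """Pick the best server; pre-fetch the task's data there if budget remains."""
    work, data_item = task
    best = min(servers, key=lambda s: objective(s, work, data_item))
    if data_item not in best.cached and prefetch_budget > 0:
        best.cached.add(data_item)      # pre-fetch ahead of similar future tasks
        prefetch_budget -= 1
    return best.name, prefetch_budget

if __name__ == "__main__":
    servers = [
        Server("s1", speed=2.0, fetch_latency=0.4, cost_per_sec=0.02, cached=set()),
        Server("s2", speed=1.0, fetch_latency=0.1, cost_per_sec=0.01, cached={"imgA"}),
    ]
    print(schedule((3.0, "imgA"), servers))
```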


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2879
Author(s):  
Marcel Antal ◽  
Andrei-Alexandru Cristea ◽  
Victor-Alexandru Pădurean ◽  
Tudor Cioara ◽  
Ionut Anghel ◽  
...  

Data centers consume large amounts of energy to execute their computational workloads and generate heat that is mostly wasted. In this paper, we address this problem by considering heat reuse in a distributed data center whose IT equipment (i.e., servers) is installed in residential homes and used as a primary source of heat. We propose a workload scheduling solution for distributed data centers based on a constraint satisfaction model that optimally allocates workload on servers to reach and maintain the desired home temperature setpoint by reusing residual heat. We define two models that correlate the heat demand with the amount of workload to be executed by the servers: a mathematical model derived from thermodynamic laws and calibrated with monitored data, and a machine learning model that predicts the amount of workload a server must execute to reach a desired ambient temperature setpoint. The proposed solution was validated using monitored data from an operational distributed data center. The mathematical model of server heat and power demand achieves a correlation accuracy of 11.98%, while among the machine learning models the best correlation accuracy of 4.74% is obtained with a Gradient Boosting Regressor. Our solution also distributes the workload so that the temperature setpoint is met in a reasonable time, while the server power demand accurately follows the heat demand.
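The snippet below sketches the machine-learning side of such a scheme: a Gradient Boosting Regressor (here scikit-learn's GradientBoostingRegressor, trained on synthetic data purely for illustration) predicts how much workload a home server should run to reach its temperature setpoint, and the workload budget is then split in proportion to the predicted demand. The feature set and the synthetic relationship are assumptions, not the paper's calibrated models.

```python
# Rough sketch: fit a regressor that predicts how much workload a home server
# must run to reach a temperature setpoint, then allocate a workload budget
# proportionally to the predicted demand. Synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic training set: [current_temp_C, setpoint_C, outdoor_temp_C] -> CPU load (%)
X = rng.uniform(low=[15, 18, -5], high=[25, 24, 20], size=(500, 3))
# Assumed ground truth: more load needed for a bigger gap and colder outdoor air.
y = np.clip(40 * (X[:, 1] - X[:, 0]) + 2 * (X[:, 1] - X[:, 2]) + rng.normal(0, 5, 500), 0, 100)

model = GradientBoostingRegressor().fit(X, y)

def allocate(homes, total_workload):
    """Split a workload budget across homes in proportion to predicted heat demand."""
    demand = np.clip(model.predict(np.array(homes)), 0, None)  # predicted CPU load
    if demand.sum() == 0:
        return np.zeros(len(homes))
    return total_workload * demand / demand.sum()

if __name__ == "__main__":
    homes = [[19.0, 22.0, 2.0], [21.0, 22.0, 10.0]]  # current, setpoint, outdoor temps
    print(allocate(homes, total_workload=100.0))
```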

