Optimal Cloud Orchestration Model of Containerized Task Scheduling Strategy Using Integer Linear Programming: Case Studies of IoTcloudServe@TEIN Project

Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4536
Author(s):  
Nawat Swatthong ◽  
Chaodit Aswakul

As a playground for cloud computing and IoT networking environments, IoTcloudServe@TEIN has been established in the Trans-Eurasia Information Network (TEIN). In the IoTcloudServe@TEIN platform, cloud orchestration that directs the flow of IoT task demands is imperative for effectively improving performance. In this paper, we propose a model of optimal containerized task scheduling in cloud orchestration that maximizes the average payoff from completing tasks across a cloud system with different levels of cloud hierarchy. Based on integer linear programming, the model can take into account demand requirements and resource availability in terms of storage, computation, network, and splittable task granularity. To show the insights obtainable from the proposed model, the edge-core cluster of IoTcloudServe@TEIN and its peer-to-peer federated cloud scenario with OF@TEIN+ are numerically evaluated and reported herein. To assess the model's performance, payoff level and task completion time are compared with those of the well-known round-robin scheduling algorithm. The proposed ILP model can serve as a guideline for cloud orchestration in IoTcloudServe@TEIN because of its lower task completion time and higher payoff level, especially under large demand growth, which is the operating range of greatest practical concern. Moreover, the proposed model mathematically illustrates the significance of implementing a cloud architecture with refined splittable task granularity via the lightweight container technology that underlies the IoTcloudServe@TEIN clustering design.
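The abstract does not give the full ILP formulation, but a minimal sketch of an assignment ILP of this kind, written with the open-source PuLP solver, is shown below. All task, cluster, payoff, and capacity values are illustrative assumptions, and splittable granularity is simplified to whole-task placement:

```python
# Illustrative assignment ILP in the spirit of the abstract, using PuLP.
# All data values are hypothetical; finer splittable granularity would use
# one binary variable per container-sized chunk of each task instead of
# one per whole task.
import pulp

tasks = {"t1": {"cpu": 4, "storage": 2, "payoff": 10},
         "t2": {"cpu": 2, "storage": 6, "payoff": 7},
         "t3": {"cpu": 3, "storage": 3, "payoff": 5}}
clusters = {"edge": {"cpu": 5, "storage": 5},
            "core": {"cpu": 8, "storage": 10}}

prob = pulp.LpProblem("containerized_task_scheduling", pulp.LpMaximize)

# x[t][c] = 1 if task t is placed on cluster c.
x = pulp.LpVariable.dicts("x", (tasks, clusters), cat="Binary")

# Objective: maximize the total payoff of completed (placed) tasks.
prob += pulp.lpSum(tasks[t]["payoff"] * x[t][c] for t in tasks for c in clusters)

# Each task runs on at most one cluster.
for t in tasks:
    prob += pulp.lpSum(x[t][c] for c in clusters) <= 1

# Respect each cluster's resource capacities.
for c in clusters:
    for r in ("cpu", "storage"):
        prob += pulp.lpSum(tasks[t][r] * x[t][c] for t in tasks) <= clusters[c][r]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    for c in clusters:
        if pulp.value(x[t][c]) == 1:
            print(f"{t} -> {c}")
```

Refining the granularity amounts to replacing each task with several container-sized chunks, each with its own placement variable, which is where the lightweight container technology enters the model.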

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Shudong Wang ◽  
Yanqing Li ◽  
Shanchen Pang ◽  
Qinghua Lu ◽  
Shuyu Wang ◽  
...  

Task scheduling plays a critical role in the performance of collaborative edge-cloud systems. Whether a task is executed in the cloud, and how it is scheduled there, is an important issue. Subject to satisfying delay constraints, this paper schedules tasks on either edge devices or the cloud, and presents a task scheduling algorithm, based on the catastrophic genetic algorithm (CGA), for tasks that must be transferred to the cloud, with the aim of achieving a global optimum. The algorithm quantifies the total task completion time and a penalty factor as a fitness function. By improving the roulette selection strategy, optimizing the mutation and crossover operators, and introducing a cataclysm strategy, the search scope is expanded and the premature-convergence problem of evolutionary algorithms is effectively alleviated. The experimental results show that the algorithm avoids local optima and significantly shortens task completion time while satisfying task delay constraints.
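The abstract names the CGA's ingredients (a fitness function combining completion time with a penalty factor, roulette selection, crossover and mutation operators, and a cataclysm restart) without specifying them exactly; the sketch below is one hedged reading of that recipe, with all problem data, weights, and rates as hypothetical placeholders:

```python
# Sketch of a genetic algorithm with a "cataclysm" restart, in the spirit of
# the CGA described above. Problem data, operators, and the penalty weight
# are illustrative assumptions, not the paper's exact formulation.
import random

N_TASKS, N_MACHINES = 20, 4
PROC_TIME = [[random.randint(1, 9) for _ in range(N_MACHINES)] for _ in range(N_TASKS)]
DEADLINE = 40          # hypothetical delay bound on the schedule
PENALTY = 5.0          # hypothetical penalty weight for violating the bound

def fitness(chrom):
    # Makespan plus a penalty term when the delay constraint is violated.
    load = [0] * N_MACHINES
    for task, machine in enumerate(chrom):
        load[machine] += PROC_TIME[task][machine]
    makespan = max(load)
    return makespan + PENALTY * max(0, makespan - DEADLINE)

def roulette(pop, fits):
    # Lower fitness is better, so select proportionally to inverted fitness.
    weights = [1.0 / f for f in fits]
    return random.choices(pop, weights=weights, k=2)

def evolve(generations=200, pop_size=30, stagnation_limit=25):
    pop = [[random.randrange(N_MACHINES) for _ in range(N_TASKS)] for _ in range(pop_size)]
    best, best_fit, stagnant = None, float("inf"), 0
    for _ in range(generations):
        fits = [fitness(c) for c in pop]
        gen_best = min(fits)
        if gen_best < best_fit:
            best, best_fit, stagnant = pop[fits.index(gen_best)][:], gen_best, 0
        else:
            stagnant += 1
        if stagnant >= stagnation_limit:
            # Cataclysm: keep the elite, regenerate the rest to escape a local optimum.
            pop = [best[:]] + [[random.randrange(N_MACHINES) for _ in range(N_TASKS)]
                               for _ in range(pop_size - 1)]
            stagnant = 0
            continue
        nxt = [best[:]]                                     # elitism
        while len(nxt) < pop_size:
            p1, p2 = roulette(pop, fits)
            cut = random.randrange(1, N_TASKS)
            child = p1[:cut] + p2[cut:]                     # one-point crossover
            if random.random() < 0.1:                       # mutation
                child[random.randrange(N_TASKS)] = random.randrange(N_MACHINES)
            nxt.append(child)
        pop = nxt
    return best, best_fit

print(evolve())
```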


Author(s):  
Shinan Song ◽  
Zhiyi Fang ◽  
Shuhui Chu ◽  
Mingyu Bai

Task scheduling between edge devices and remote servers, also known as computation offloading, is a common application scenario in edge and cloud computing. A reasonable scheduling strategy can effectively shorten task completion time, reduce energy consumption, and improve user experience. However, the traditional offline task scheduling problem is NP-hard, and the decision requires all information about the tasks and devices (such as task computation amounts, data amounts, and device computing resources), which is difficult to obtain in practical applications. A semi-online algorithm describes task scheduling when the system cannot obtain all of this information. In this paper, we propose an Efficient Semi-online algorithm for Multi-user task offloading (ESaM) with two specific implementations: ESaM-I, for known server-side idle time, and ESaM-O, for known task computation amounts. Because ESaM-I has access to server information, it outperforms ESaM-O in most scenarios. The experimental results show that ESaM-I and ESaM-O are superior to the well-known semi-online scheduling algorithm SPaC in task completion time. In the simulation, as remote processor computing ability increases, the average makespan converges to 0.875, 0.742, and 0.782 for SPaC-M, ESaM-O, and ESaM-I, respectively.
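The abstract names the two information models but not the decision rules themselves, so the sketch below only illustrates the contrast: a greedy earliest-finish policy when the server's idle time is observable (the ESaM-I setting) versus a size-threshold policy when only task computation amounts are known (the ESaM-O setting). Both policies and all parameter values are stand-in assumptions, not the paper's algorithms:

```python
# Stand-in semi-online offloading policies contrasting the two information
# models; neither is the paper's actual ESaM rule, and all values are
# hypothetical. Lower makespan is better.

def makespan_known_idle(tasks, server_busy_until=1.0, server_speedup=2.0):
    """ESaM-I-style setting: the time at which the server becomes idle is
    known, so each task goes wherever it would finish earliest."""
    server_free, local_free = server_busy_until, 0.0
    for work in tasks:  # work = task computation amount
        remote_finish = server_free + work / server_speedup
        local_finish = local_free + work
        if remote_finish <= local_finish:
            server_free = remote_finish
        else:
            local_free = local_finish
    return max(server_free, local_free)

def makespan_known_work(tasks, server_speedup=2.0, threshold=1.0):
    """ESaM-O-style setting: only task computation amounts are known, so a
    fixed threshold offloads heavy tasks and keeps light ones local."""
    server_free, local_free = 0.0, 0.0
    for work in tasks:
        if work >= threshold:
            server_free += work / server_speedup
        else:
            local_free += work
    return max(server_free, local_free)

demo = [0.4, 1.8, 0.9, 2.5, 0.3]
print(makespan_known_idle(demo), makespan_known_work(demo))
```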


2013 ◽  
Vol 347-350 ◽  
pp. 2426-2429 ◽  
Author(s):  
Jun Wei Ge ◽  
Yong Sheng Yuan

The use of genetic algorithms for task allocation and scheduling has attracted growing attention from scholars. How to use computing resources reasonably so that the total and average task completion times are shorter and the cost smaller is an important issue. This paper presents a genetic algorithm that considers total task completion time, average task completion time, and a cost constraint. It is compared, through simulation experiments, with an algorithm that considers only the cost constraint (CGA) and an adaptive algorithm that considers only total task completion time. The experimental results show that the proposed algorithm is a more effective task scheduling algorithm in the cloud computing environment.
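The distinguishing ingredient here is a fitness function that weighs total completion time, average task completion time, and cost together. A hedged sketch of one such weighted fitness follows; the weights, processing times, and per-machine pricing are hypothetical, since the abstract does not specify them:

```python
# Hypothetical weighted fitness combining the three criteria the abstract
# names: total completion time, average task completion time, and cost.
# Weights and the per-machine price model are illustrative assumptions.

def fitness(chrom, proc_time, price_per_sec, w_total=0.4, w_avg=0.3, w_cost=0.3):
    """chrom[i] = machine assigned to task i; lower fitness is better."""
    n_machines = len(price_per_sec)
    load = [0.0] * n_machines
    finish = []
    cost = 0.0
    for task, m in enumerate(chrom):
        load[m] += proc_time[task][m]
        finish.append(load[m])       # completion time of this task on machine m
        cost += proc_time[task][m] * price_per_sec[m]
    total_time = max(load)                   # makespan
    avg_time = sum(finish) / len(finish)     # average task completion time
    return w_total * total_time + w_avg * avg_time + w_cost * cost

# Example: 3 tasks, 2 machines with different speeds and prices.
proc = [[2.0, 1.0], [3.0, 1.5], [1.0, 0.5]]
print(fitness([0, 1, 1], proc, price_per_sec=[0.1, 0.4]))
```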


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Fanghai Gong

In recent years, cloud workflow task scheduling has been an important research topic in industry. Cloud workflow task scheduling means that the workflow tasks submitted by users are allocated to appropriate computing resources for execution, and the corresponding fees are paid in real time according to resource usage. Most ordinary users are mainly concerned with two service quality indicators: workflow task completion time and execution cost. Therefore, how cloud service providers design a scheduling algorithm that optimizes both task completion time and cost is a very important issue. This paper studies workflow scheduling based on mobile cloud computing and machine learning, using literature research, experimental analysis, and other methods. It examines mobile cloud computing, machine learning, task scheduling, and other related theory in depth; establishes a workflow task scheduling system model based on mobile cloud computing and machine learning; and analyzes, from several angles, how different algorithms handle task completion time, task service cost, task scheduling, and resource usage, as well as how different tasks influence the experimental results. The proposed algorithm speeds up scheduling time by about 7% under different numbers of tasks and reduces scheduling cost by about 2% compared with other algorithms, showing a clear improvement in both time scheduling and task scheduling.


2011 ◽  
Vol 267 ◽  
pp. 693-698
Author(s):  
Yi Jun Liu ◽  
Xiao Man He ◽  
Dan Feng ◽  
Yu Fang

Through research on existing parallel computing technologies, this paper presents an in-depth analysis of the status, outstanding issues, and functional features of parallel computing task scheduling, and proposes a solution to the current problems. The proposed scheme better reflects the heterogeneity and dynamicity of parallel resources and ensures, as far as possible, the reliability and stability of the selected resources, while reducing task completion time and meeting user requirements for quality of service.


2019 ◽  
Vol 8 (4) ◽  
pp. 5207-5213

Cloud computing is a prominent computing model wherein shared resources are provided on customer request. The available cloud resources are pooled to execute the various tasks submitted by customers. While executing these tasks, the cloud's performance must be optimized in terms of execution time, response time, and resource utilization. Optimization of these factors can be achieved through load balancing, a major research area concerned with distributing client requests across the diverse application servers operating in the cloud. An efficient load balancing algorithm makes the cloud more efficient and enhances customer satisfaction. This survey paper therefore highlights recent studies on the application of load balancing techniques to task allocation, including resource allocation (RA) strategies, cloud task scheduling centered on load balancing, dynamic resource allocation schemes, and cloud resource provisioning scheduling heuristics. Finally, the load balancing performance of these task allocation methods is compared on the basis of task completion time.


Author(s):  
Liang Dai ◽  
Yilin Chang ◽  
Zhong Shen

Scheduling tasks in wireless sensor networks is one of the most challenging problems. Sensing tasks should be allocated and processed among sensors in minimum time, so that users can draw prompt and effective conclusions by analyzing the sensed data. Furthermore, finishing sensing tasks faster benefits energy saving, which is critical in the system design of wireless sensor networks. However, sensors may refuse to exert effort on the tasks because of their limited energy. To address this potential selfishness of the sensors, a non-cooperative game algorithm (NGTSA) for task scheduling in wireless sensor networks is proposed. In the proposed algorithm, according to divisible load theory, tasks are distributed reasonably from the sink to every node based on processing capability and communication capability. By removing the performance degradation caused by communication interference and idling, the algorithm reduces task completion time and improves network resource utilization. A strategyproof mechanism provides incentives for the sensors to obey the prescribed algorithm and to truthfully report their parameters, leading to efficient task scheduling and execution. A utility function related to the total task completion time and the task allocation scheme is designed, and the Nash equilibrium of the game is proved. The simulation results show that, with this mechanism, selfish nodes can be forced to report their true processing capabilities and endeavor to participate in the measurement, so that the total time for accomplishing the task is minimized and the energy consumption of the nodes is balanced.
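As a worked illustration of the divisible-load step described above (leaving aside the game-theoretic incentive mechanism, which the abstract does not specify in detail), the sketch below splits one task among sensors so that all nodes finish simultaneously; the per-unit costs are hypothetical placeholders:

```python
# Minimal divisible-load allocation: the sink splits one measurement task
# among sensors so that all finish at the same time, which minimizes total
# completion time under concurrent distribution. Per-unit costs combine
# communication and processing; values are illustrative assumptions.

def divisible_load_split(per_unit_cost):
    """per_unit_cost[i] = time for node i to receive and process one unit of
    load. Returns each node's load fraction alpha_i, chosen so every node
    finishes simultaneously: alpha_i proportional to 1/cost_i."""
    inv = [1.0 / c for c in per_unit_cost]
    total = sum(inv)
    return [v / total for v in inv]

costs = [2.0, 4.0, 8.0]                 # hypothetical per-unit costs of 3 sensors
alphas = divisible_load_split(costs)
print(alphas)                           # faster nodes receive larger fractions
print([a * c for a, c in zip(alphas, costs)])  # equal finish times for all nodes
```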


Author(s):  
Ge Weiqing ◽  
Cui Yanru

Background: To compensate for the shortcomings of traditional algorithms, the Min-Min and Max-Min algorithms are combined with the traditional genetic algorithm. Methods: This paper proposes a new cloud computing task scheduling algorithm that introduces the Min-Min and Max-Min algorithms to generate the initial population and selects task completion time and load balancing as dual fitness functions, improving the quality of the initial population, the algorithm's search ability, and its convergence speed. Results: The simulation results show that the algorithm is superior to the traditional genetic algorithm and is an effective cloud computing task scheduling algorithm. Conclusion: Finally, this paper proposes the possibility of fusing the two improved algorithms and completes a preliminary fusion, but the simulation results of the fused algorithm are not yet ideal and require further study.
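A minimal sketch of the initialization step the abstract describes: seeding a genetic algorithm's population with one Min-Min and one Max-Min schedule plus random chromosomes. The ETC (expected time to compute) matrix is a hypothetical placeholder:

```python
# Sketch of seeding a GA population with Min-Min and Max-Min schedules, as
# the abstract describes; the ETC matrix values are hypothetical.
import random

def min_min(etc, prefer_max=False):
    """Greedy Min-Min assignment (Max-Min when prefer_max=True).
    etc[t][m] = expected time to compute task t on machine m."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines               # machine ready times
    assign = [None] * n_tasks
    unscheduled = set(range(n_tasks))
    while unscheduled:
        # For each unscheduled task, its best machine (earliest completion).
        best = {t: min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
                for t in unscheduled}
        # Min-Min picks the task with the smallest such completion time;
        # Max-Min picks the largest.
        pick = (max if prefer_max else min)(
            unscheduled, key=lambda t: ready[best[t]] + etc[t][best[t]])
        m = best[pick]
        ready[m] += etc[pick][m]
        assign[pick] = m
        unscheduled.remove(pick)
    return assign

def initial_population(etc, size=30):
    # Two heuristic seeds plus random chromosomes, as in the hybrid scheme above.
    n_tasks, n_machines = len(etc), len(etc[0])
    pop = [min_min(etc), min_min(etc, prefer_max=True)]
    while len(pop) < size:
        pop.append([random.randrange(n_machines) for _ in range(n_tasks)])
    return pop

etc = [[random.uniform(1, 10) for _ in range(3)] for _ in range(8)]
print(initial_population(etc, size=5)[:2])
```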

