Machine Learning Approach to Task Scheduling in Cloud Computing

2020 ◽  
Vol 17 (9) ◽  
pp. 4213-4218
Author(s):  
H. S. Madhusudhan ◽  
T. Satish Kumar ◽  
G. Mahesh

Cloud computing provides on-demand services over the internet using a network of remote servers. A pivotal role in any cloud environment is task scheduling, and virtual machine scheduling has a key role in maintaining Quality of Service (QoS) and Service Level Agreement (SLA) guarantees. Task scheduling is the process of mapping tasks (user requests) to certain resources, and it is an NP-complete problem. The primary objectives of scheduling algorithms are to minimize makespan and improve resource utilization. In this research work an attempt is made to implement an Artificial Neural Network (ANN), a machine learning technique, and apply it to task scheduling. It is observed that a neural network trained with a genetic algorithm outperforms the default genetic algorithm by an average efficiency of 25.56%.
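The objective named above, minimizing makespan while using resources well, can be made concrete with a small baseline. The greedy earliest-completion-time heuristic below is only an illustration of the metric, not the paper's ANN/GA method; the task lengths and VM speeds are invented for the example.

```python
# Minimal illustration of the scheduling objective: assign tasks to VMs
# so the makespan (finish time of the busiest VM) is small. This greedy
# earliest-completion-time heuristic is a simple baseline, not the
# ANN/GA method of the paper; task lengths and VM speeds are made up.

def greedy_schedule(task_lengths, vm_speeds):
    """Assign each task to the VM that would finish it earliest."""
    finish = [0.0] * len(vm_speeds)          # current busy time per VM
    assignment = []
    for length in task_lengths:
        # completion time if the task ran on each VM
        candidates = [finish[v] + length / vm_speeds[v]
                      for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=lambda v: candidates[v])
        finish[best] = candidates[best]
        assignment.append(best)
    return assignment, max(finish)           # makespan = busiest VM

tasks = [40, 10, 30, 20, 25]                 # illustrative task lengths
speeds = [1.0, 2.0]                          # illustrative VM speeds
plan, makespan = greedy_schedule(tasks, speeds)
print(plan, makespan)
```

A learned scheduler such as the paper's GA-trained ANN would replace the `min(...)` selection rule while being evaluated on the same makespan figure.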

Author(s):  
Gurpreet Singh ◽  
Manish Mahajan ◽  
Rajni Mohana

BACKGROUND: Cloud computing is an on-demand service model in which data-center resources are offered on a pay-per-use basis. Allocating resources appropriately to satisfy user needs requires an effective and reliable resource allocation method. Because of increased user demand, resource allocation is now a complex and challenging task: when a physical machine is overloaded, Virtual Machines share its load by utilizing the resources of other physical machines. Previous studies fall short on energy consumption and time management when Virtual Machines on different servers are kept powered on. AIM AND OBJECTIVE: The main aim of this research work is to propose an effective resource allocation scheme for allocating Virtual Machines from an ad hoc sub-server. EXECUTION MODEL: The research is carried out in two sections: first, Virtual Machines and Physical Machines are located with respect to the server, and subsequently the cross-validation of the allocation is addressed. The Modified Best Fit Decreasing algorithm is used to sort the Virtual Machines, and Multi-Machine Job Scheduling is used during the placement of jobs onto an appropriate host. An Artificial Neural Network, acting as a classifier, allocates jobs to hosts. Measures such as Service Level Agreement violation and energy consumption are considered, and fruitful results are obtained: a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
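The Best Fit Decreasing placement idea referenced in the abstract can be sketched as follows: VMs are sorted by decreasing demand and each is placed on the host that would be left with the least spare capacity. This is the classic BFD bin-packing step, not the paper's modified variant; the capacities and demands are illustrative numbers.

```python
# Sketch of the Best Fit Decreasing placement step (the basis of MBFD):
# sort VMs by decreasing demand, place each on the feasible host that
# is left with the tightest remaining capacity. Numbers are illustrative.

def best_fit_decreasing(vm_demands, host_capacities):
    remaining = list(host_capacities)
    placement = {}
    for vm, demand in sorted(enumerate(vm_demands),
                             key=lambda p: p[1], reverse=True):
        # hosts that can still hold this VM
        feasible = [h for h, cap in enumerate(remaining) if cap >= demand]
        if not feasible:
            placement[vm] = None          # would need a new host / rejection
            continue
        # best fit: smallest capacity left over after placement
        h = min(feasible, key=lambda h: remaining[h] - demand)
        remaining[h] -= demand
        placement[vm] = h
    return placement, remaining

placement, spare = best_fit_decreasing([3, 5, 2], [6, 6])
print(placement, spare)
```

The "modified" variants in this literature typically change the sort key or the best-fit criterion (e.g. least power increase instead of least spare capacity) while keeping this overall loop.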


2020 ◽  
Vol 178 ◽  
pp. 375-385
Author(s):  
Ismail Zahraddeen Yakubu ◽  
Zainab Aliyu Musa ◽  
Lele Muhammed ◽  
Badamasi Ja’afaru ◽  
Fatima Shittu ◽  
...  

2021 ◽  
Vol 16 ◽  
pp. 668-685
Author(s):  
Shankargoud Patil ◽  
Kappargaon S. Prabhushetty

In today's environment, video surveillance is critical. With the introduction of artificial intelligence, machine learning, and deep learning, the technology has progressed considerably. Different methods combining these techniques help distinguish various suspicious activities in live surveillance footage. Human behavior is the most unpredictable, and determining whether it is suspicious or normal is quite difficult. In a theoretical setting, a deep learning approach is utilized to detect suspicious or normal behavior and send an alarm to nearby people if suspicious activity is predicted. In this paper, a data fusion technique is used for feature extraction, which gives an accurate outcome. Moreover, the classes are classified by an effective machine learning approach, a modified deep neural network (M-DNN), which predicts the classes very well. The proposed method attains 95% accuracy, and the system is contrasted with previous methods such as the artificial neural network (ANN), random forest (RF) and support vector machine (SVM). The approach is well suited to both dynamic and static conditions.


When a Physical Machine gets a job from a user, it intends to complete it at any cost. A Virtual Machine (VM) helps to attain the maximum completion ratio. The host-to-VM ratio increases with the increase in workload over the system. The VM allocation policy has ambiguities which lead to overloaded Physical Machines (PMs). This paper aims to reduce the overhead of the PMs. For the allocation, the Modified Best Fit Decreasing (MBFD) algorithm is used to check resource availability. A Genetic Algorithm (GA) is used to optimize MBFD performance through its fitness function. For cross-validation, a Polynomial Support Vector Machine (P-SVM) is used for training and classification, and accordingly the parameters Service Level Agreement (SLA) and Job Completion Ratio (JCR) are evaluated. A comparative analysis is drawn in this article to depict the effectiveness of the research work, and an improvement of 70% is observed.
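The fitness-driven GA search described above can be sketched with a toy VM-to-PM allocation loop. The fitness used here, penalising overloaded PMs, is an assumption standing in for the paper's SLA/JCR formulation; the loads and capacities are invented.

```python
import random

# Toy genetic algorithm for VM-to-PM allocation, illustrating the
# fitness-driven search the abstract describes. The overload-penalty
# fitness is an assumed stand-in for the paper's SLA/JCR objectives.

random.seed(0)
VM_LOAD = [4, 3, 5, 2, 6]        # illustrative VM demands
PM_CAP = [8, 8, 8]               # illustrative PM capacities

def fitness(alloc):
    """Higher is better; 0 means no PM is overloaded."""
    load = [0] * len(PM_CAP)
    for vm, pm in enumerate(alloc):
        load[pm] += VM_LOAD[vm]
    return -sum(max(0, l - c) for l, c in zip(load, PM_CAP))

def evolve(generations=50, pop_size=20, mut_rate=0.2):
    pop = [[random.randrange(len(PM_CAP)) for _ in VM_LOAD]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]      # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_LOAD))
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < mut_rate:   # point mutation
                child[random.randrange(len(VM_LOAD))] = \
                    random.randrange(len(PM_CAP))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

With a total demand of 20 against a total capacity of 24, the search quickly reaches an allocation with no overloaded PM.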


2020 ◽  
Vol 17 (11) ◽  
pp. 5003-5009
Author(s):  
Puneet Banga ◽  
Sanjeev Rana

Due to constraints and profit margins in the background, service providers sometimes neglect to deliver essential services to their clients. Such compulsion raises the demand for efficient task scheduling that can meet multiple objectives, but without any prior agreement this remains a casual approach. The dispute can be addressed when competent scheduling is executed directly over the Service Level Agreement, which acts as a hotspot defining the set of rules that assure quality of service. There is thus a strong demand for SLA-aware scheduling that produces profitable results for providers and clients alike. This article presents a fundamental approach that can be applied to existing scheduling techniques on the fly. Results show marked improvement in average waiting time and average turnaround time, along with fairness, without compromising the provider's cost margin at all.
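The metrics reported above, average waiting time and average turnaround time, are standard and easy to compute; comparing FCFS order with shortest-job-first shows how the scheduling policy alone moves them. The burst times are illustrative, not from the article.

```python
# Average waiting and turnaround time for jobs run back-to-back in the
# given order. Burst times are illustrative, not from the article.

def avg_times(bursts):
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)          # time queued before starting
        clock += b
        turnaround.append(clock)       # waiting + execution time
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

jobs = [8, 4, 2, 6]                    # burst times in arrival order
print(avg_times(jobs))                 # FCFS order
print(avg_times(sorted(jobs)))         # shortest-job-first order
```

An SLA-aware scheduler would weigh these averages against per-client guarantees rather than minimizing them globally.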


Author(s):  
Paulo Oliveira Siqueira Junior ◽  
Manoel Henrique Reis Nascimento ◽  
Ítalo Rodrigo Soares Silva ◽  
Ricardo Silva Parente ◽  
Milton Fonseca Júnior ◽  
...  

With the expansion of river transportation, especially for small and medium-sized vessels covering longer routes, the cost of fuel, if not taken as an analysis criterion for a larger profit margin, is a primary factor, considering that the price of fuel, specifically the diesel that powers internal combustion engines, is high. Tools that assist in decision-making therefore become necessary, as in the present research, which aims to contribute a computational model for prediction and optimization of the best speed to decrease fuel cost, considering the characteristics of the SCANIA 315 propulsion engine of a vessel from the river port of Manaus that carries out river transportation to several municipalities in Amazonas. According to the simulation results, the best training algorithm for the Artificial Neural Network (ANN) was BFGS Quasi-Newton, considering the characteristics of the engine, for optimization with a Genetic Algorithm (GA).


2017 ◽  
Vol 10 (1) ◽  
pp. 60-65
Author(s):  
Ronak Vihol ◽  
Hiren Patel ◽  
Nimisha Patel

Offering “computing as a utility” on a pay-per-use plan, cloud computing has emerged as a technology of ease and flexibility for thousands of users over the last few years. Distributing dynamic workload among available servers and efficiently utilizing existing datacenter resources is one of the major concerns in cloud computing. Load balancing must take server utilization into consideration: the resultant utilization should not exceed preset upper limits, to avoid service level agreement (SLA) violations, and should not fall beneath stipulated lower limits, to avoid keeping underutilized servers in active use. Scheduling of workload is regarded as an optimization problem that considers many varying criteria, such as the dynamic environment, the priority of incoming applications and their deadlines, to improve resource utilization and the overall performance of cloud computing. In this work, a novel Genetic Algorithm (GA) based load balancing mechanism is proposed. Though not done in this work, we aim in future to compare the performance of the proposed algorithm with existing mechanisms such as first come first serve (FCFS), Round Robin (RR) and other search algorithms through simulations.
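The utilization-window constraint described above can be expressed as a penalty term that a GA fitness function would minimize. This is only a sketch of that idea; the bounds 0.2 and 0.8 are assumed for illustration, and servers with zero utilization are treated as switched off.

```python
# Penalty for servers whose utilisation strays outside a preset window:
# above the upper bound risks SLA violation, below the lower bound keeps
# an underused server active. Bounds 0.2/0.8 are assumed for the sketch.

LOWER, UPPER = 0.2, 0.8

def balance_penalty(utilisations):
    penalty = 0.0
    for u in utilisations:
        if u > UPPER:                  # overloaded: SLA risk
            penalty += u - UPPER
        elif 0 < u < LOWER:            # active but underused
            penalty += LOWER - u
        # u == 0 means the server is off, so no penalty
    return penalty

print(balance_penalty([0.5, 0.9, 0.1, 0.0]))
```

A GA over workload assignments would compute per-server utilizations from each candidate schedule and minimize this penalty, possibly alongside makespan.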


The cloud is an online platform that offers services to end-users while ensuring the Quality of Service (QoS) of the data. Since users access data through the internet, problems of security and confidentiality of cloud data arise. To resolve this, an encryption mechanism hybridizing Rivest–Shamir–Adleman (RSA) with the Triple Data Encryption Standard (3DES) is used. This paper focuses mainly on two issues: security and storage of data. The security of cloud data is addressed using the encryption approach, whereas data storage is handled using Modified Best Fit Decreasing (MBFD) with the Whale Optimization Algorithm (WOA) and an Artificial Neural Network (ANN). The neural network with whale optimization ensures high confidentiality of cloud data storage in a managed way. From the experiments, it is analyzed that the proposed cloud system performs better in terms of energy consumption, delay, and Service Level Agreement (SLA) violation.
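The hybrid pattern behind an RSA-plus-symmetric-cipher scheme is: encrypt the bulk data with a fast symmetric cipher and protect only the symmetric key with RSA. To keep the demo self-contained, textbook RSA with tiny primes and a keyed XOR stream stand in for production RSA and 3DES below; neither is secure, and both are assumptions for illustration only.

```python
import hashlib

# Hybrid encryption sketch: a symmetric cipher for the payload, RSA for
# the key. Textbook RSA with toy primes and a SHA-256-keyed XOR stream
# stand in for real RSA and 3DES; insecure, for illustration only.

p, q = 61, 53                            # toy primes (never this small!)
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)                      # private exponent (Python 3.8+)

def keystream(key: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def sym_encrypt(key: bytes, data: bytes) -> bytes:   # stand-in for 3DES
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Sender: RSA-wrap the symmetric key, symmetrically encrypt the payload.
sym_key = b"k1"
wrapped_key = [pow(b, e, n) for b in sym_key]        # RSA per byte (toy)
ciphertext = sym_encrypt(sym_key, b"cloud data")

# Receiver: RSA-unwrap the key, then decrypt the payload.
recovered = bytes(pow(c, d, n) for c in wrapped_key)
plaintext = sym_encrypt(recovered, ciphertext)       # XOR is its own inverse
print(plaintext)
```

A real deployment would use a vetted library's RSA-OAEP and 3DES (or, preferably, AES) primitives rather than these stand-ins.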


Author(s):  
Ge Weiqing ◽  
Cui Yanru

Background: In order to make up for the shortcomings of the traditional algorithm, the Min-Min and Max-Min algorithms are combined on the basis of the traditional genetic algorithm. Methods: In this paper, a new cloud computing task scheduling algorithm is proposed which introduces the Min-Min and Max-Min algorithms to generate the initial population, and selects task completion time and load balancing as double fitness functions; this improves the quality of the initial population, the algorithm's search ability and its convergence speed. Results: The simulation results show that the algorithm is superior to the traditional genetic algorithm and is an effective cloud computing task scheduling algorithm. Conclusion: Finally, this paper proposes the possibility of fusing the two improved algorithms and completes a preliminary fusion, but the simulation results of the new algorithm are not ideal and need further study.
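The Min-Min seed used for the initial population can be sketched directly: repeatedly pick the task with the smallest minimum completion time and assign it to the machine achieving that time (Max-Min picks the largest such task instead). The execution-time matrix is illustrative.

```python
# Min-Min heuristic as a GA population seed: at each step, schedule the
# task whose best completion time is smallest, on its best machine.
# exec_time[t][m] = run time of task t on machine m (illustrative data).

def min_min(exec_time):
    ready = [0.0] * len(exec_time[0])        # machine ready times
    unscheduled = set(range(len(exec_time)))
    schedule = {}
    while unscheduled:
        # smallest completion time over all (task, machine) pairs
        finish, t, m = min((ready[m] + exec_time[t][m], t, m)
                           for t in unscheduled
                           for m in range(len(ready)))
        schedule[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return schedule, max(ready)              # assignment and makespan

schedule, makespan = min_min([[3, 4], [1, 2], [5, 2]])
print(schedule, makespan)
```

Seeding a GA with this schedule (and its Max-Min counterpart) gives the population at least one reasonably short makespan to recombine from, which is the convergence-speed effect the abstract reports.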

