A survey on application of machine learning to manage the virtual machines in cloud computing

2020 ◽  
Vol 11 (3) ◽  
pp. 197-208
Author(s):  
Varun Barthwal ◽  
M.M.S. Rauthan ◽  
Rohan Varma

Abstract: Virtual machine (VM) management is a fundamental challenge in the cloud datacenter, as it requires not only scheduling and placement but also optimization methods that keep energy costs and service quality in check. This paper reviews the literature dealing with resource utilization prediction, VM migration, VM placement, and the selection of physical machines (PMs) for hosting VMs. The main features of VM management policies are also examined through a comparative analysis of current policies. Many research works apply Machine Learning (ML) to the core activities of detecting PM overloading, selecting VMs from over-utilized PMs, and placing VMs. This article aims to identify and classify research on the scheduling and placement of VMs using ML with resource utilization history. Energy efficiency, VM migration counts, and service quality were the key performance parameters used to assess the performance of the cloud datacenter.
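Many of the surveyed policies predict PM overload from a host's CPU-utilization history. As an illustrative sketch only (not a method from any specific surveyed paper; the window size and threshold are assumptions), the snippet below fits a simple linear trend to recent utilization samples and flags the host when the predicted next value crosses a threshold.

```python
import numpy as np

def predict_overload(cpu_history, threshold=0.8, window=10):
    """Flag a physical machine as overloaded when a linear trend fitted to
    its recent CPU-utilization samples predicts the next value above threshold."""
    recent = np.asarray(cpu_history[-window:], dtype=float)
    t = np.arange(len(recent))
    slope, intercept = np.polyfit(t, recent, 1)   # least-squares linear trend
    predicted_next = slope * len(recent) + intercept
    return predicted_next > threshold

# Example: steadily rising utilization on a host
print(predict_overload([0.55, 0.60, 0.66, 0.71, 0.75, 0.79]))  # True
```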

Author(s):  
Vijayakumar Polepally ◽  
K. Shahu Chatrapati

With advances in science and technology, cloud computing has become a recent trend, bringing an immense demand for infrastructure and resources. Load balancing of cloud computing environments is an important matter of concern. Migrating overloaded virtual machines (VMs) to underloaded ones with optimized resource utilization is an effective way of load balancing. In this paper, a new VM migration algorithm for load balancing in the cloud is proposed. The proposed migration algorithm (EGSA-VMM) is based on an exponential gravitational search algorithm, which integrates the gravitational search algorithm with exponential weighted moving average theory. In our approach, migration decisions are based on migration cost and QoS. The proposed EGSA-based VM migration algorithm is compared experimentally with ACO and GSA. The simulation experiments show that the proposed EGSA-VMM algorithm achieves load balancing and reasonable resource utilization, and outperforms existing migration strategies in terms of the number of VM migrations and the number of SLA violations.
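EGSA-VMM combines gravitational search with exponential weighted moving average (EWMA) theory. The sketch below illustrates only the EWMA part, as one plausible way to smooth a VM's load history before a migration decision; the smoothing factor and the use of the smoothed value as a migration signal are assumptions, not details taken from the paper.

```python
def ewma(load_samples, alpha=0.3):
    """Exponential weighted moving average of a VM's load history.
    Higher alpha weights recent samples more heavily."""
    smoothed = load_samples[0]
    for sample in load_samples[1:]:
        smoothed = alpha * sample + (1 - alpha) * smoothed
    return smoothed

# Example: the smoothed load could feed a migration-cost/QoS decision rule.
history = [0.42, 0.55, 0.61, 0.58, 0.72]
print(round(ewma(history), 3))
```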


2017 ◽  
Vol 26 (03) ◽  
pp. 1750001 ◽  
Author(s):  
Hana Teyeb ◽  
Nejib Ben Hadj-Alouane ◽  
Samir Tata ◽  
Ali Balma

In geo-distributed cloud systems, a key challenge faced by cloud providers is to optimally tune and configure the underlying cloud infrastructure. An important problem in this context is finding an optimal virtual machine (VM) placement that minimizes costs while ensuring good system performance. Moreover, due to fluctuations in demand and traffic patterns, it is crucial to dynamically adjust the VM placement scheme over time. Most existing studies, however, deal with this problem either by ignoring its dynamic aspect or by proposing solutions that are not suited to a geographically distributed cloud infrastructure. In this paper, exact as well as heuristic solutions based on Integer Linear Programming (ILP) formulations are proposed. Our work also addresses the problem of scheduling VM migration by finding the best migration sequence of intercommunicating VMs that minimizes the resulting traffic on the backbone network. The proposed algorithms execute within a reasonable time frame to readjust the VM placement scheme according to the perceived demand. Our aim is to use VM migration as a tool for dynamically adjusting the VM placement scheme while minimizing the network traffic generated by VM communication and migration. Finally, we demonstrate the effectiveness of our proposed algorithms through extensive experiments and simulations.
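As a rough illustration of an ILP formulation for static VM placement (a much-simplified stand-in for the formulations in the paper, with invented costs, demands, and capacities), the sketch below assigns each VM to exactly one site while respecting site capacity and minimizing a placement cost, using the PuLP modelling library.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

vms = ["vm1", "vm2", "vm3"]
sites = ["dc_eu", "dc_us"]
demand = {"vm1": 2, "vm2": 4, "vm3": 3}            # resource units per VM (assumed)
capacity = {"dc_eu": 6, "dc_us": 8}                # resource units per site (assumed)
cost = {("vm1", "dc_eu"): 1, ("vm1", "dc_us"): 3,  # placement cost per (VM, site)
        ("vm2", "dc_eu"): 2, ("vm2", "dc_us"): 1,
        ("vm3", "dc_eu"): 4, ("vm3", "dc_us"): 2}

prob = LpProblem("vm_placement", LpMinimize)
x = LpVariable.dicts("x", [(v, s) for v in vms for s in sites], cat=LpBinary)

prob += lpSum(cost[v, s] * x[v, s] for v in vms for s in sites)    # minimize placement cost
for v in vms:                                                      # each VM placed exactly once
    prob += lpSum(x[v, s] for s in sites) == 1
for s in sites:                                                    # respect site capacity
    prob += lpSum(demand[v] * x[v, s] for v in vms) <= capacity[s]

prob.solve()
print({(v, s): int(x[v, s].value()) for v in vms for s in sites if x[v, s].value()})
```

A dynamic variant would re-solve this model as demand estimates change and derive a migration sequence from the difference between consecutive placements.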


2021 ◽  
Vol 33 (2) ◽  
pp. 17-35
Author(s):  
Sridharan R. ◽  
Domnic S.

Due to the pay-as-you-go model adopted by cloud datacenters (DC), modern applications with intercommunicating tasks depend on the DC for their computing power. Because the rate at which data arrives for immediate processing is unpredictable, application performance depends on the DC's autoscaling service. Normal VM placement schemes place these tasks arbitrarily onto different physical machines (PMs), causing unwanted network traffic that degrades application performance and increases the DC operating cost. This paper formulates autoscaling and intercommunication-aware task placement (AIATP) as an optimization problem with additional constraints and proposes a solution that uses the placement knowledge of prior tasks of individual applications. When compared with well-known algorithms, CloudsimPlus-based simulation demonstrates that AIATP reduces resource fragmentation (by 30%) and increases resource utilization (by 18%), leading to a minimal number of active PMs. AIATP places 90% of an application's tasks together and thus reduces the number of VM migrations (by 39%) while balancing the PMs.
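The abstract's core idea is placing intercommunicating tasks of one application close together. A minimal greedy sketch of that idea (not the AIATP formulation itself; the scoring rule and data shapes are assumptions) prefers the PM that already hosts the most tasks of the same application, breaking ties by free capacity.

```python
def place_task(task_app, task_demand, pms):
    """Greedy, communication-aware placement sketch.
    pms: list of dicts like {"free": 8, "apps": {"appA": 2}}, where "apps"
    counts the tasks of each application already hosted on that PM."""
    candidates = [pm for pm in pms if pm["free"] >= task_demand]
    if not candidates:
        return None  # would trigger autoscaling in a real system
    # Prefer co-locating with tasks of the same application; break ties by free capacity.
    best = max(candidates, key=lambda pm: (pm["apps"].get(task_app, 0), pm["free"]))
    best["free"] -= task_demand
    best["apps"][task_app] = best["apps"].get(task_app, 0) + 1
    return best

pms = [{"free": 4, "apps": {"appA": 3}}, {"free": 10, "apps": {}}]
place_task("appA", 2, pms)   # lands on the first PM, next to appA's other tasks
```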


Nowadays, energy consumption is one of the most pressing concerns among the many computing services of cloud computing. Data centres absorb a large share of power resources because of the abnormally growing volume of data processing, so it is time to think seriously about energy consumption in the cloud environment. Existing energy consumption systems are limited in terms of virtualization, because improper virtualization leads to load imbalance, excessive power consumption, and inefficient use of computational power. Billing [1, 2] is another feature closely related to energy consumption, because higher or lower bills depend in part on energy consumption: cloud providers allow users to access resources on a pay-per-use basis, so these resources must be selected optimally to process user requests and maximize user satisfaction in the distributed virtualized environment. There may be a discrepancy between the power actually consumed by users and the billing records provided by the providers, and either party may raise false accusations against the other to obtain illegitimate compensation. To avoid such accusations, we propose consolidating VMs under a Power Management as a Service (PMaaS) model so as to reduce power consumption through maximum resource utilization, without live migration of virtual machines, using the concept of virtual servers. The proposed PMaaS model uses a new "Auto-fit VM placement algorithm", which computes a task's resource demands, models a virtual machine that fits those demands, and places the virtual machine on a virtual server built from the collective resources (CPU, memory, storage and bandwidth) of the schedulers directly connected to the actual physical servers, choosing the virtual server with the minimum remaining resources that are still large enough to accommodate such a virtual machine.
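The closing sentence describes a best-fit style rule: pick the virtual server with the least remaining capacity that can still hold the new VM. The sketch below illustrates that rule only, under assumed data structures; it is not the authors' Auto-fit implementation.

```python
def auto_fit_place(vm_demand, virtual_servers):
    """Best-fit style placement: among virtual servers that can hold the VM,
    choose the one with the minimum remaining resources.
    vm_demand and server capacities are dicts of resource -> amount (assumed shape)."""
    def fits(server):
        return all(server["remaining"][r] >= vm_demand[r] for r in vm_demand)

    candidates = [s for s in virtual_servers if fits(s)]
    if not candidates:
        return None
    # "Minimum remaining resources" is taken here as the smallest total leftover capacity.
    best = min(candidates, key=lambda s: sum(s["remaining"].values()))
    for r in vm_demand:
        best["remaining"][r] -= vm_demand[r]
    return best

servers = [{"name": "vs1", "remaining": {"cpu": 4, "mem": 8}},
           {"name": "vs2", "remaining": {"cpu": 2, "mem": 4}}]
print(auto_fit_place({"cpu": 2, "mem": 3}, servers)["name"])   # "vs2": the tightest fit
```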


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4071-4075

Cloud computing is defined as resources that can be delivered to or accessed by a local host from a remote server via the internet. Cloud providers typically use a "pay-as-you-go" model. The evolution of cloud computing has shaped the modern environment thanks to the abundance and advancement of computing and communication infrastructure. As user requests arrive and system responses are generated, load is assigned across the cloud, and individual servers may become over- or under-loaded. Heavy load creates power consumption and energy management problems, which can cause system failures and lead to data loss. Thus, an efficient load balancing method is required to overcome all of these problems. The objective of this work is to develop a metaheuristic load balancing algorithm that migrates load across multiple servers, while machine learning techniques are used to increase cloud resource utilization and minimize the makespan of tasks. Using an unsupervised machine learning technique, it is possible to predict the response time and waiting time of the servers from prior knowledge about the virtual machines and their clusters. This work also calculates the accuracy of the Round-Robin load balancing algorithm and compares it with the proposed algorithm. With this approach, response time and waiting time are minimized, resource utilization increases, and the makespan of tasks is reduced.
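One plausible reading of the unsupervised step is clustering VMs by their recent load so that each cluster's average observed response time serves as a prediction for its members. The sketch below shows such a scheme with scikit-learn's KMeans; the features, cluster count, and use of cluster means as predictions are assumptions rather than details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: [cpu_utilization, queue_length] for one VM (synthetic example data).
features = np.array([[0.2, 1], [0.3, 2], [0.8, 9], [0.9, 11], [0.5, 4], [0.6, 5]])
observed_response_ms = np.array([40, 55, 210, 260, 95, 110])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

def predict_response(new_vm_features):
    """Predicted response time = mean observed time of the new VM's cluster."""
    label = kmeans.predict(np.array([new_vm_features]))[0]
    return observed_response_ms[kmeans.labels_ == label].mean()

print(predict_response([0.85, 10]))   # close to the heavily loaded cluster's average
```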


2020 ◽  
Vol 15 ◽  
Author(s):  
Deeksha Saxena ◽  
Mohammed Haris Siddiqui ◽  
Rajnish Kumar

Background: Deep learning (DL) is an artificial neural network-driven framework with multiple levels of representation, in which non-linear modules are combined in such a way that the representations rise from lower to much more abstract levels. Though DL is used widely in almost every field, it has brought a particular breakthrough in the biological sciences, where it is used in disease diagnosis and clinical trials. DL can be combined with machine learning, but at times the two are used individually as well. DL is arguably a better platform than classical machine learning, as it does not require an intermediate feature extraction step and works well with larger datasets. DL is one of the most discussed fields among scientists and researchers these days for diagnosing and solving various biological problems. However, deep learning models need some refinement and experimental validation to become more productive. Objective: To review the available DL models and datasets that are used in disease diagnosis. Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed, and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted. Results: We analyzed the frequently used DL methods and data types and discussed some of the recent deep learning models used for solving different biological problems. Conclusion: The review presents useful insights about DL methods, data types, and the selection of DL models for disease diagnosis.


2021 ◽  
pp. 0887302X2199594
Author(s):  
Ahyoung Han ◽  
Jihoon Kim ◽  
Jaehong Ahn

Fashion color trends are an essential marketing element that directly affects brand sales. Organizations such as Pantone hold global authority over professional color standards by annually forecasting color palettes. However, the question remains whether fashion designers apply these colors in the fashion shows that guide seasonal fashion trends. This study analyzed image data from fashion collections through machine learning to obtain measurable results: web-scraping catwalk images, separating body and clothing elements via machine learning, defining a selection of color chips using k-means algorithms, and analyzing the similarity between the Pantone color palette (16 colors) and the analysis color chips. The gap between the Pantone trends and the colors used in fashion collections was quantitatively analyzed and found to be significant. This study indicates the potential of machine learning within the fashion industry to guide production and suggests that further research expand on other design variables.
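As a rough sketch of the color-chip step only (not the study's full pipeline; the image source, number of clusters, and RGB distance metric are assumptions), the snippet below extracts dominant colors from a segmented garment image with k-means and measures their distance from reference palette colors.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image_rgb, n_colors=5):
    """image_rgb: HxWx3 array of a segmented garment region. Returns n_colors RGB chips."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_            # one RGB "color chip" per cluster

def nearest_reference_distance(chip, reference_palette):
    """Euclidean RGB distance from a chip to the closest reference color."""
    return np.linalg.norm(reference_palette - chip, axis=1).min()

# Example with a synthetic two-tone "image" and a tiny reference palette.
image = np.vstack([np.tile([200, 30, 40], (500, 1)),
                   np.tile([20, 20, 90], (500, 1))]).reshape(50, 20, 3)
palette = np.array([[205, 25, 45], [15, 25, 95], [250, 250, 240]])
for chip in dominant_colors(image, n_colors=2):
    print(chip.round(), nearest_reference_distance(chip, palette))
```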


2021 ◽  
Vol 48 (4) ◽  
pp. 41-44
Author(s):  
Dena Markudova ◽  
Martino Trevisan ◽  
Paolo Garza ◽  
Michela Meo ◽  
Maurizio M. Munafo ◽  
...  

With the spread of broadband Internet, Real-Time Communication (RTC) platforms have become increasingly popular and have transformed the way people communicate. Thus, it is fundamental that the network adopts traffic management policies that ensure appropriate Quality of Experience to users of RTC applications. A key step for this is the identification of the applications behind RTC traffic, which in turn allows the network to allocate adequate resources and make decisions based on the specific application's requirements. In this paper, we introduce a machine learning-based system for identifying the traffic of RTC applications. It builds on the domains contacted before starting a call and leverages techniques from Natural Language Processing (NLP) to build meaningful features. Our system works in real time and is robust to the peculiarities of the RTP implementations of different applications, since it uses only control traffic. Experimental results show that our approach classifies 5 well-known meeting applications with an F1 score of 0.89.
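As a loose illustration of building NLP-style features from pre-call domain names (the character n-gram features and the classifier are assumptions, not the system described in the paper), the sketch below turns each call's contacted domains into a TF-IDF bag of character n-grams and trains a simple classifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each sample: domains contacted before a call, joined into one "document" (illustrative data).
calls = ["teams.microsoft.com config.teams.microsoft.com",
         "zoom.us web.zoom.us",
         "meet.google.com clients4.google.com",
         "teams.microsoft.com login.microsoftonline.com",
         "us04web.zoom.us zoom.us"]
labels = ["teams", "zoom", "meet", "teams", "zoom"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # character n-grams over domains
    LogisticRegression(max_iter=1000),
)
clf.fit(calls, labels)
print(clf.predict(["web.zoom.us zoom.us"]))   # expected: ['zoom']
```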


2021 ◽  
Vol 23 (4) ◽  
pp. 2742-2752
Author(s):  
Tamar L. Greaves ◽  
Karin S. Schaffarczyk McHale ◽  
Raphael F. Burkart-Radke ◽  
Jason B. Harper ◽  
Tu C. Le

Machine learning models were developed for an organic reaction in ionic liquids and validated on a selection of ionic liquids.

