Threshold based VM Placement Technique for Load Balanced Resource Provisioning using Priority Scheme in Cloud Computing

2021 ◽  
Vol 13 (5) ◽  
pp. 01-18
Author(s):  
Mayank Sohani ◽  
Dr. S. C. Jain

Load imbalance is a multi-variable, multi-constraint problem that degrades the performance and efficiency of computing resources. Load-balancing techniques address its two undesirable conditions, overloading and underloading. Cloud computing relies on scheduling and load balancing in a virtualized environment to share resources across the cloud infrastructure, and both must be handled well to achieve optimal resource sharing. Hence, efficient resource reservation is required to keep the load optimized in the cloud. This work presents an integrated resource reservation and load-balancing algorithm for effective cloud provisioning. The approach builds a Priority-based Resource Scheduling Model that combines resource reservation with threshold-based load balancing to improve the efficiency of the cloud framework. Higher utilization of virtual machines through appropriate workload adjustment is achieved by dynamically selecting a job from the submitted jobs using the Priority-based Resource Scheduling Model. Experimental evaluations show that the proposed scheme reduces execution time, lowers resource cost, and improves resource utilization under dynamic resource provisioning conditions.
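
The threshold-and-priority idea above can be illustrated with a minimal sketch, assuming a single upper utilization threshold, abstract capacity units, and a simple least-utilized-first placement rule; the class and function names are hypothetical and this is not the authors' exact model.

```python
from dataclasses import dataclass, field
import heapq

@dataclass
class VM:
    vm_id: str
    capacity: float          # total capacity in abstract units
    load: float = 0.0        # currently assigned load

    @property
    def utilization(self) -> float:
        return self.load / self.capacity

@dataclass(order=True)
class Job:
    priority: int                       # lower value = higher priority
    demand: float = field(compare=False)
    job_id: str = field(compare=False)

def place_jobs(jobs, vms, upper_threshold=0.8):
    """Pick jobs in priority order and place each on the least-loaded VM
    whose utilization would stay below the upper threshold."""
    heap = list(jobs)
    heapq.heapify(heap)                  # priority queue of submitted jobs
    placement = {}
    while heap:
        job = heapq.heappop(heap)
        # Trying the least-utilized VM first keeps the load balanced.
        for vm in sorted(vms, key=lambda v: v.utilization):
            if (vm.load + job.demand) / vm.capacity <= upper_threshold:
                vm.load += job.demand
                placement[job.job_id] = vm.vm_id
                break
        else:
            placement[job.job_id] = None   # deferred: all VMs above threshold
    return placement

if __name__ == "__main__":
    vms = [VM("vm1", 100), VM("vm2", 100)]
    jobs = [Job(1, 30, "j1"), Job(3, 50, "j2"), Job(2, 40, "j3")]
    print(place_jobs(jobs, vms))
```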

2016 ◽  
Vol 6 (4) ◽  
pp. 97-110
Author(s):  
Rekha Kashyap ◽  
Deo Prakash Vidyarthi

Virtualization is critical to cloud computing and is realized through hypervisors, which map virtual machines (VMs) to physical resources but raise security concerns because users relinquish physical possession of their computation and data. Considerable research has addressed resource provisioning on hypervisors, yet many issues remain for security-demanding and real-time VMs. The first proposed scheduler, SRT-CreditScheduler (Secured and Real-time), maximizes the success rate by dynamically prioritizing the urgency and workload of VMs while enforcing the highest security level for all of them. The second, SA-RT-CreditScheduler (Security-aware and Real-time), is a dual-objective scheduler that maximizes the success rate of VMs within the best possible security range specified by the VM owner. Although the algorithms can be used with any hypervisor, in this work they are implemented on the Xen hypervisor. Their effectiveness is validated by comparison with Xen's Credit and SEDF schedulers for security-demanding tasks with stringent deadline constraints.
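
As a rough illustration of the SRT-CreditScheduler idea, the sketch below ranks runnable VMs by slack (time to deadline minus remaining work) while pinning every VM to the highest security level; the slack metric, level encoding, and names are assumptions, not the paper's actual Xen implementation.

```python
import time
from dataclasses import dataclass

HIGHEST_SECURITY = 3   # assumed ordinal security levels 0..3

@dataclass
class VCpuRequest:
    vm_id: str
    deadline: float        # absolute deadline (seconds, epoch)
    remaining_work: float  # estimated CPU seconds still needed

def slack(req: VCpuRequest, now: float) -> float:
    """Smaller slack (time left minus work left) means more urgent."""
    return (req.deadline - now) - req.remaining_work

def pick_order(requests, now=None):
    """Order runnable VMs by urgency; every VM runs at the highest
    security level in this SRT-style variant."""
    now = time.time() if now is None else now
    ranked = sorted(requests, key=lambda r: slack(r, now))
    return [(r.vm_id, HIGHEST_SECURITY) for r in ranked]

if __name__ == "__main__":
    now = 1_000.0
    reqs = [VCpuRequest("vmA", now + 5.0, 2.0),
            VCpuRequest("vmB", now + 3.0, 2.5),
            VCpuRequest("vmC", now + 10.0, 1.0)]
    print(pick_order(reqs, now))  # vmB first: least slack
```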


2019 ◽  
pp. 507-522
Author(s):  
Rekha Kashyap ◽  
Deo Prakash Vidyarthi

Virtualization is critical to cloud computing and is realized through hypervisors, which map virtual machines (VMs) to physical resources but raise security concerns because users relinquish physical possession of their computation and data. Considerable research has addressed resource provisioning on hypervisors, yet many issues remain for security-demanding and real-time VMs. The first proposed scheduler, SRT-CreditScheduler (Secured and Real-time), maximizes the success rate by dynamically prioritizing the urgency and workload of VMs while enforcing the highest security level for all of them. The second, SA-RT-CreditScheduler (Security-aware and Real-time), is a dual-objective scheduler that maximizes the success rate of VMs within the best possible security range specified by the VM owner. Although the algorithms can be used with any hypervisor, in this work they are implemented on the Xen hypervisor. Their effectiveness is validated by comparison with Xen's Credit and SEDF schedulers for security-demanding tasks with stringent deadline constraints.
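
The dual-objective SA-RT variant can be sketched as choosing, per VM, the highest security level within the owner's specified range that still allows the deadline to be met; the overhead multipliers below are illustrative assumptions rather than values from the paper.

```python
# Assumed per-level security overhead multipliers on execution time
# (illustrative values, not taken from the paper).
SECURITY_OVERHEAD = {0: 1.00, 1: 1.10, 2: 1.25, 3: 1.45}

def choose_security_level(base_exec_time, time_to_deadline,
                          min_level, max_level):
    """Return the highest level in [min_level, max_level] whose
    overhead still lets the VM finish before its deadline,
    falling back to the owner's minimum level otherwise."""
    for level in range(max_level, min_level - 1, -1):
        if base_exec_time * SECURITY_OVERHEAD[level] <= time_to_deadline:
            return level
    return min_level   # deadline may be missed, but the security floor holds

if __name__ == "__main__":
    # 4 s of work, 5 s until deadline, owner accepts levels 1..3
    print(choose_security_level(4.0, 5.0, 1, 3))   # -> 2 (1.25 * 4 = 5.0)
```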


2013 ◽  
Vol 4 (1) ◽  
pp. 88-93
Author(s):  
Aarthee S ◽  
Venkatesan R

Cloud computing provides pay-as-you-go computing resources, with services offered from data centers around the world. Consumers may find that cloud computing reduces the cost of information management, since they are not required to own servers and can lease capacity from third parties or cloud service providers. Cloud consumers can substantially reduce the total cost of resource provisioning using the Optimal Cloud Resource Provisioning (OCRP) algorithm. Cloud providers offer computing resources under two provisioning plans, reservation and on-demand. Resources provisioned under the reservation plan are cheaper than those provisioned on demand, since the cloud consumer pays the provider in advance. This project proposes combining the OCRP algorithm with a rule-based resource manager to increase the scalability of cloud on-demand services through dynamic placement of virtual machines, thereby reducing cost, while also providing secure access to resources from data centers and monitoring parameters such as virtualized platforms and data or service management in the cloud environment.
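
A toy cost model conveys why mixing the two plans reduces total provisioning cost; the prices and the brute-force search for the reservation size are illustrative assumptions, not the stochastic OCRP formulation.

```python
def provisioning_cost(demand_per_hour, reserved_units,
                      reservation_fee=50.0, reserved_rate=0.05,
                      on_demand_rate=0.12):
    """Total cost when `reserved_units` are paid for up front and any
    demand above that is served by the on-demand plan."""
    upfront = reserved_units * reservation_fee
    usage = 0.0
    for demand in demand_per_hour:
        reserved_used = min(demand, reserved_units)
        on_demand_used = max(0, demand - reserved_units)
        usage += reserved_used * reserved_rate + on_demand_used * on_demand_rate
    return upfront + usage

if __name__ == "__main__":
    demand = [8, 10, 12, 9, 11, 15] * 120   # 720 hours of hourly demand
    best = min(range(0, 16), key=lambda r: provisioning_cost(demand, r))
    print("best reservation size:", best,
          "cost:", round(provisioning_cost(demand, best), 2))
```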


2014 ◽  
Vol 1008-1009 ◽  
pp. 1513-1516
Author(s):  
Hai Na Song ◽  
Xiao Qing Zhang ◽  
Zhong Tang He

The cloud computing environment is regarded as a multi-tenant computing model. With virtualization as the supporting technology, cloud computing consolidates multiple workloads on one server through the packaging and isolation of virtual machines. Addressing the tension between heterogeneous applications and a uniform shared resource pool, this paper analyzes the multidimensional resource scheduling problem using the idea of bin packing. Example analyses are carried out for one-dimensional, two-dimensional, and three-dimensional resource scheduling. The results show that the resource utilization of cloud data centers improves greatly when resource scheduling is performed after the heterogeneous demands are rationally reorganized.
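
The bin-packing view can be made concrete with a first-fit-decreasing heuristic over multidimensional demands; the sorting key and the two-dimensional example below are assumptions, not necessarily the exact analysis carried out in the paper.

```python
def first_fit_decreasing(demands, capacity):
    """Pack multidimensional VM demands into as few servers as possible.
    `demands` is a list of tuples (e.g. (cpu, mem)); `capacity` is the
    per-server capacity in the same dimensions."""
    # Sort by the largest normalized dimension, biggest items first.
    order = sorted(demands,
                   key=lambda d: max(x / c for x, c in zip(d, capacity)),
                   reverse=True)
    servers = []   # each server tracks remaining capacity per dimension
    for item in order:
        for free in servers:
            if all(x <= f for x, f in zip(item, free)):
                for i, x in enumerate(item):
                    free[i] -= x
                break
        else:
            servers.append([c - x for c, x in zip(capacity, item)])
    return len(servers)

if __name__ == "__main__":
    vms = [(4, 8), (2, 16), (8, 4), (4, 4), (2, 2)]   # (CPU cores, GB RAM)
    print("servers needed:", first_fit_decreasing(vms, capacity=(16, 32)))
```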


2021 ◽  
Vol 11 (4) ◽  
pp. 80-99
Author(s):  
Syed Imran Jami ◽  
Siraj Munir

Recent trends in data-intensive experiments require extensive computing and storage resources that are now provided by the cloud. Industry experts and researchers use cloud-based services and resources for analytics of their data, avoiding issues such as the power overhead on local machines and the cost of maintaining and running infrastructure. This article provides a detailed review of selected cloud computing metrics relevant to data science and big data: (1) load balancing, (2) resource scheduling, (3) resource allocation, (4) resource sharing, and (5) job scheduling. The major contribution of this review is the collective treatment of these metrics, which is a first attempt at evaluating the latest systems in the context of data science. The detailed analysis shows that cloud computing research needs to engage more closely with data-intensive experiments, with particular emphasis on resource scheduling.


Author(s):  
Marcus Tanque

Cloud computing consists of three fundamental service models: infrastructure-as-a-service, platform-as-a-service, and software-as-a-service. The technology comprises four deployment models: public cloud, private cloud, hybrid cloud, and community cloud. This chapter describes these cloud service and deployment models and the association each of them has with physical and virtual networks. Cloud service models are designed to power storage platforms, infrastructure solutions, provisioning, and virtualization. Cloud computing services are developed to support shared network resources provisioned between physical and virtual networks. These solutions are offered to organizations and consumers as utilities, to support dynamic, static, network, and database provisioning processes. Vendors offer these resources to support day-to-day resource provisioning across physical and virtual machines.


2013 ◽  
Vol 3 (2) ◽  
pp. 35-46 ◽  
Author(s):  
Sandeep K. Sood

Cloud computing has become an innovative computing paradigm that aims to provide reliable, customized, Quality of Service (QoS) guaranteed computing infrastructures for users. Efficient resource provisioning is required in the cloud for effective resource utilization. For resource provisioning, the cloud offers virtualized computing resources that are dynamically scalable, a property that distinguishes it from the traditional computing paradigm. However, initializing a new virtual instance delays hardware resource allocation by several minutes. The cloud also provides fault-tolerant service to its clients through virtualization. To attain higher resource utilization, a strategy is needed by which virtual machines can be deployed on physical machines with their need predicted in advance, so that this delay is avoided. To address these issues, this paper proposes a value-based prediction model for resource provisioning in which a resource manager dynamically allocates or releases virtual machines depending on the resource usage rate. To track the recent resource usage rate, the resource manager uses a sliding window to analyze usage and predict the system's behavior in advance. Predicting resource requirements in advance saves a great deal of processing time: previously, a server had to perform all the resource-usage calculations itself, wasting processing power and reducing its capacity to handle incoming requests. The main feature of the proposed model is that much of this load is shifted from the individual server to the resource manager, which performs all the calculations, leaving the server free to handle incoming requests at full capacity.
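
A minimal sketch of the sliding-window idea follows, assuming a simple moving-average forecast and fixed scale-up/scale-down thresholds; the paper's value-based model may differ in both the predictor and the thresholds.

```python
from collections import deque

class ResourceManager:
    """Keep a sliding window of utilization samples and scale VMs
    up or down when the predicted utilization crosses thresholds."""

    def __init__(self, window=6, scale_up=0.8, scale_down=0.3, vms=2):
        self.samples = deque(maxlen=window)
        self.scale_up = scale_up
        self.scale_down = scale_down
        self.vms = vms

    def predict(self):
        # Simple moving average over the window as the next-step forecast.
        return sum(self.samples) / len(self.samples)

    def observe(self, utilization):
        self.samples.append(utilization)
        forecast = self.predict()
        if forecast > self.scale_up:
            self.vms += 1                      # provision a VM in advance
        elif forecast < self.scale_down and self.vms > 1:
            self.vms -= 1                      # release an idle VM
        return forecast, self.vms

if __name__ == "__main__":
    rm = ResourceManager()
    for u in [0.4, 0.55, 0.7, 0.85, 0.9, 0.95, 0.5, 0.2, 0.15]:
        print(rm.observe(u))
```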


2019 ◽  
Vol 214 ◽  
pp. 07017
Author(s):  
Jean-Marc Andre ◽  
Ulf Behrens ◽  
James Branson ◽  
Philipp Brummer ◽  
Olivier Chaze ◽  
...  

The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent computing capacity similar to that of all the CMS Tier-1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT data center via a dedicated 160 Gbps network connection and can therefore access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to make these resources available to the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system; it also abstracts away the different types of machines and their underlying segmented networks. During LHC technical stop periods, the HLT cloud is set to a static mode of operation in which it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during the periods between LHC fills, which are a priori unscheduled and of undetermined length, typically several hours, once or more a day. For that, it dynamically follows the LHC beam states and hibernates virtual machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
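
Schematically, beam-state-driven hibernation could look like the reconciliation loop below; the state names and the suspend_vm/resume_vm hooks are placeholders, not actual OpenStack or CMS DAQ APIs.

```python
import time

# Placeholder hooks: in a real deployment these would call the cloud
# API to suspend or resume instances; here they just print.
def suspend_vm(vm):
    print(f"hibernating {vm}")

def resume_vm(vm):
    print(f"resuming {vm}")

# Assumed beam states during which the HLT is needed for data taking.
DATA_TAKING_STATES = {"RAMP", "FLAT_TOP", "SQUEEZE", "ADJUST", "STABLE_BEAMS"}

def reconcile(beam_state, vms, hibernated):
    """Hibernate cloud VMs when data taking starts and resume them
    in the gaps between fills."""
    if beam_state in DATA_TAKING_STATES:
        for vm in list(vms - hibernated):
            suspend_vm(vm)
            hibernated.add(vm)
    else:
        for vm in list(hibernated):
            resume_vm(vm)
            hibernated.remove(vm)

if __name__ == "__main__":
    vms = {"vm-001", "vm-002"}
    hibernated = set()
    for state in ["NO_BEAM", "INJECTION", "STABLE_BEAMS", "BEAM_DUMP"]:
        print("beam state:", state)
        reconcile(state, vms, hibernated)
        time.sleep(0.1)
```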

