User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

2015 ◽  
Vol 2015 ◽  
pp. 1-8
Author(s):  
Zhe Zhang ◽  
Ying Li

Resource allocation is one of the most important research topics in server systems. In a cloud environment there are massive hardware resources of different kinds, and many kinds of services usually run on the cloud server's virtual machines. In addition, the cloud environment is commercialized, so economic factors must also be considered. To deal with the commercialization and virtualization of the cloud environment, we propose a user-utility-oriented queuing model for task scheduling. First, we model task scheduling in the cloud environment as an M/M/1 queuing system. Second, we classify utility into time utility and cost utility and build a linear programming model that maximizes the total utility of both. Finally, we propose a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
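The M/M/1 quantities the model builds on have standard closed forms; a minimal sketch follows (the utility weights, the price parameter, and the linear combination are illustrative stand-ins, not the paper's exact formulation):

```python
# M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu.
# The closed-form metrics below are textbook results; the utility function
# is an illustrative linear combination, not the paper's own model.

def mm1_metrics(lam, mu):
    """Return (utilization, mean tasks in system, mean response time)."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu            # server utilization
    L = rho / (1 - rho)       # mean number of tasks in the system
    W = 1 / (mu - lam)        # mean response (sojourn) time
    return rho, L, W

def total_utility(lam, mu, price, w_time=1.0, w_cost=1.0):
    """Toy linear combination of time utility and cost utility (higher is better)."""
    _, _, W = mm1_metrics(lam, mu)
    return -w_time * W - w_cost * price

rho, L, W = mm1_metrics(lam=2.0, mu=5.0)
# rho = 0.4, L = 2/3, W = 1/3
```

With the metrics in closed form, the scheduler can trade service rate (cost) against response time (utility) analytically rather than by simulation.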

2020 ◽  
Vol 17 (4) ◽  
pp. 1990-1998
Author(s):  
R. Valarmathi ◽  
T. Sheela

Cloud computing is a powerful computing technology that renders flexible services to users anywhere. Resource management and task scheduling are essential aspects of cloud computing, and task scheduling is one of its main problems: when quality-of-service requirements are considered, task scheduling and resource management in the cloud become a hard optimization problem. Much of the prior work on task scheduling focuses only on deadline and cost optimization and overlooks the significance of availability, robustness, and reliability. The main purpose of this study is to develop an optimized algorithm for efficient resource allocation and scheduling in a cloud environment, using particle swarm optimization (PSO) and an R-factor algorithm. The PSO algorithm schedules tasks onto virtual machines (VMs) to reduce waiting time and improve system throughput; PSO is a technique inspired by the social and collective behavior of animal swarms in nature, in which particles search the problem space to find optimal or near-optimal solutions. A hybrid algorithm combining PSO and the R-factor has been developed to simultaneously reduce processing time, makespan, and the cost of task execution. The test results and simulations reveal that the proposed method offers better efficiency than previously prevalent approaches.
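The PSO core loop the abstract refers to can be sketched compactly. This is a generic PSO minimizer; the scheduling-specific part (decoding a particle's position vector into a task-to-VM assignment, and the R-factor term) is omitted, and all parameter values are conventional defaults rather than the paper's:

```python
import random

# Generic PSO minimizer. Mapping a particle position to a task-to-VM
# assignment is application-specific and omitted; here we minimize a toy
# objective to show the velocity/position update rules.

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

For scheduling, the objective would score an assignment by makespan and cost instead of the toy sphere function used here.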


2020 ◽  
Vol 13 (5) ◽  
pp. 957-964
Author(s):  
Siva Rama Krishna ◽  
Mohammed Ali Hussain

Background: In recent years, computational memory and energy conservation have become major problems in the cloud computing environment due to growth in data size and computing resources. Moreover, different cloud providers offer different cloud services, and their resources support only a limited number of user applications. Objective: The main objective of this work is to design and implement a cloud resource allocation and resource scheduling model in the cloud environment. Methods: In the proposed model, a novel cloud-server resource management technique is applied in a real-time cloud environment to minimize cost and time. Different types of cloud resources and services are scheduled using multi-level objective constraint programming, and the proposed server-based resource allocation model uses optimization functions to minimize resource allocation time and cost. Results: Experimental results show that the proposed model achieves lower resource allocation time and cost than existing resource allocation models. Conclusion: The cloud service and resource optimization model is efficiently implemented and tested on real-time cloud instances with different types of services and resource sets.
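The "minimize time and cost" objective can be illustrated with a weighted-sum server choice; the server data, scoring formula, and weights below are invented for illustration and are not the paper's constraint program:

```python
# Illustrative weighted time/cost objective: for a request, pick the server
# minimizing w_time * time + w_cost * cost. Values are invented examples.

def choose_server(request_mips, servers, w_time=0.5, w_cost=0.5):
    """servers: list of dicts with 'mips' and 'price_per_hr'."""
    def score(s):
        t = request_mips / s["mips"]    # rough execution time (hours)
        c = t * s["price_per_hr"]       # cost accrued over that time
        return w_time * t + w_cost * c
    return min(servers, key=score)

servers = [{"id": "a", "mips": 1000, "price_per_hr": 0.8},
           {"id": "b", "mips": 500, "price_per_hr": 0.2}]
choose_server(2000, servers)  # fast-but-pricey vs. slow-but-cheap trade-off
```

Shifting the weights toward cost flips the decision toward the cheaper, slower server, which is the behavior a multi-objective formulation has to arbitrate.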


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing, which aims to achieve a well-balanced load across virtual machines to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective compared with existing algorithms. Our approach shows a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue, using the WorkflowSim simulator in Java.


2014 ◽  
Vol 4 (4) ◽  
pp. 1-6 ◽  
Author(s):  
Manisha Malhotra ◽  
Rahul Malhotra

As cloud-based services become more assorted, resource provisioning becomes more challenging: how resources should be allocated is an important issue. The cloud environment offers distinct types of virtual machines, and the cloud provider distributes those services, so the allocation of services must be adjusted to user demand. This paper presents an adaptive resource allocation mechanism for efficient parallel processing in the cloud. With this mechanism the provider's job becomes easier, with minimal wastage of resources and time.


Author(s):  
Dinkan Patel ◽  
Anjuman Ranavadiya

Cloud computing is an Internet-based model that enables convenient, on-demand access to resources that can be provisioned rapidly and with minimal effort. Cloud services can be offered as IaaS, PaaS, or SaaS. Scheduling of tasks is important so that resources are utilized efficiently in minimum time, which in turn yields better performance. Real-time tasks require dynamic scheduling, since tasks are not known in advance as they are in the static scheduling approach. Different task scheduling algorithms can be used to increase performance in real time, and running them on virtual machines can prove useful. Here, a review of various task scheduling algorithms is presented, which can be used to perform tasks and allocate resources so that performance is increased.


2021 ◽  
Author(s):  
Jianying Miao

This thesis describes an innovative task scheduling and resource allocation strategy that uses thresholds with attributes and amount (TAA) to improve the quality of service of cloud computing. In this strategy, attribute-oriented thresholds decide the acceptance of cloudlets (tasks) and the provisioning of accepted cloudlets on suitable resources represented by virtual machines (VMs). Experiments are performed in a simulation environment created with CloudSim, modified for the experiments. Experimental results indicate that TAA can significantly improve attribute matching between cloudlets and VMs, reducing average execution time by 30 to 50% compared to a typical non-filtering policy. Moreover, the tradeoff between acceptance rate and task delay, as well as between prioritized and non-prioritized cloudlets, may be adjusted as desired. The filtering type and range and the positioning of thresholds may also be adjusted to adapt to the dynamically changing cloud environment.
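The attribute-threshold idea can be sketched as an admission filter followed by a matching step; the attribute names, threshold ranges, and the capacity-based matching rule below are illustrative assumptions, not the thesis's actual TAA rules:

```python
# Sketch of attribute-threshold admission (the TAA idea): a cloudlet is
# accepted only if every thresholded attribute lies in its [lo, hi] range,
# then dispatched to a VM whose capacity covers it. Names and ranges are
# invented for illustration.

def admit(cloudlet, thresholds):
    """thresholds: {attr: (lo, hi)}. Accept iff each thresholded attribute
    of the cloudlet falls inside its range."""
    return all(lo <= cloudlet.get(attr, lo) <= hi
               for attr, (lo, hi) in thresholds.items())

def dispatch(cloudlet, vms):
    """Pick the first VM whose MIPS capacity covers the cloudlet's demand
    (a stand-in for the thesis's richer attribute matching)."""
    for vm in vms:
        if vm["mips"] >= cloudlet["mips_needed"]:
            return vm["id"]
    return None  # no match: the cloudlet waits or is rejected

th = {"length": (0, 10000), "priority": (1, 5)}
task = {"length": 4000, "priority": 2, "mips_needed": 500}
admit(task, th)  # accepted under these illustrative thresholds
```

Tightening or loosening the threshold ranges is exactly the knob the abstract describes for trading acceptance rate against task delay.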


2020 ◽  
Vol 9 (1) ◽  
pp. 2064-2071

The important goal of cloud computing is to offer large data centers that satisfy the storage requirements of customers. The entire data cannot be saved on a single server: a cloud provider (CP) has a cluster of servers to fulfill cloud requests from various real-time applications, and the data is fragmented across multiple servers to maintain availability. Since a customer's data request needs data from various servers, there is a possibility of reaching deadlock. In this paper, an enhanced queuing model is proposed in which cloud requests (CRs) are received in a queue for resource allocation. A session is created for each CR with the CP's resource allocation from the cloud servers, which makes it possible to constrain the number of CRs holding a session with the CP and thus avoid resource suppression. A Wait-for-Resource algorithm is used to allocate server resources to a CR without deadlock within a session, enabling resource requirements to be forecast before the resource allocation phase. This makes dynamic resource allocation efficient and deadlock-free. The results obtained evaluate the proposed model and help the CP dynamically choose the number of server nodes necessary to achieve better performance for a real-time application.
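Granting a request only when every session's forecast maximum can still be met resembles the classic Banker's algorithm safety check; the sketch below uses that well-known scheme as a hedged stand-in for the paper's Wait-for-Resource algorithm, with invented numbers:

```python
# Banker's-algorithm-style safety check as a stand-in for Wait-for-Resource:
# a CR's request is granted only if the system remains in a safe
# (deadlock-free) state given each session's declared maximum demand.

def is_safe(available, allocated, maximum):
    """available: free units per resource type; allocated/maximum: one
    vector per session. True iff some completion order lets every session
    finish, so deadlock is impossible."""
    work = list(available)
    finished = [False] * len(allocated)
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, mx) in enumerate(zip(allocated, maximum)):
            need = [m - a for m, a in zip(mx, alloc)]
            if not finished[i] and all(n <= w for n, w in zip(need, work)):
                work = [w + a for w, a in zip(work, alloc)]  # session i finishes, releases
                finished[i] = True
                progressed = True
    return all(finished)

def request(available, allocated, maximum, i, req):
    """Tentatively grant request `req` for session i; None means the CR
    must wait (granting would risk deadlock or exceed free resources)."""
    trial_avail = [a - r for a, r in zip(available, req)]
    trial_alloc = [row[:] for row in allocated]
    trial_alloc[i] = [a + r for a, r in zip(trial_alloc[i], req)]
    if min(trial_avail) >= 0 and is_safe(trial_avail, trial_alloc, maximum):
        return trial_avail, trial_alloc
    return None
```

The forecasting step the abstract mentions corresponds to each session declaring its maximum demand up front, which is what lets the check run before the allocation phase.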


2019 ◽  
Vol 9 (1) ◽  
pp. 279-291 ◽  
Author(s):  
Proshikshya Mukherjee ◽  
Prasant Kumar Pattnaik ◽  
Tanmaya Swain ◽  
Amlan Datta

This paper focuses on multi-criteria decision-making (MCDM) techniques, especially the analytic network process (ANP) algorithm, to design a model that minimizes task scheduling cost using a queuing model in a cloud environment; it also deals with minimizing the waiting time of tasks. The simulated results of the algorithm give outcomes better than those of other existing algorithms by 15 percent.
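A core building block of ANP (shared with AHP) is deriving a priority vector from a pairwise comparison matrix via its principal eigenvector; a small power-iteration sketch follows, with criteria and judgment values invented for illustration:

```python
# Derive a priority (weight) vector from a pairwise comparison matrix by
# power iteration on its principal eigenvector - the basic AHP/ANP step.
# The criteria and Saaty-scale judgments below are illustrative only.

def priority_vector(M, iters=50):
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]  # normalize so weights sum to 1
    return v

# Example judgments: cost is 3x as important as waiting time and 5x as
# important as reliability; waiting time is 2x reliability.
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = priority_vector(M)  # cost receives the largest weight
```

In the full ANP, such local priority vectors are assembled into a supermatrix to capture dependencies between criteria, which a plain weighted sum cannot express.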


Author(s):  
Hyunwoo Lee ◽  
Seokhyun Chung ◽  
Taesu Cheong ◽  
Sang Song

Kidney exchange programs, which allow a potential living donor whose kidney is incompatible with his or her intended recipient to donate a kidney to another patient in return for a kidney that is compatible with their intended recipient, usually aim to maximize the number of possible kidney exchanges or the total utility of the program. However, the fairness of these exchanges is an issue that has often been ignored. In this paper, as a way to overcome the problems arising in previous studies, we take fairness to be the degree to which individual patient-donor pairs feel satisfied, rather than the extent to which the exchange increases social benefits. A kidney exchange has to occur on the basis of the value of the kidneys themselves, because the process is similar to bartering: if the matched kidneys are not of the level expected by the patient-donor pairs involved, the match may break and the kidney exchange transplantation may fail. This study attempts to classify possible scenarios for such failures and incorporate them into a stochastic programming framework. We apply a two-stage stochastic programming method that uses total utility in the first stage and the sum of the penalties for failure in the second stage, when an exceptional event occurs. Computational results demonstrate the improvement of the proposed model compared to previous deterministic models.
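The two-stage shape can be illustrated on a toy instance: choose the matching (first stage) that maximizes total utility minus the probability-weighted penalty of match failures (second stage). The utilities, break probabilities, and penalty constant below are invented, and exhaustive enumeration stands in for the paper's stochastic program:

```python
from itertools import permutations

# Toy two-stage evaluation: first-stage utility of a matching minus the
# expected second-stage penalty of matches that break. All numbers and the
# enumeration approach are illustrative, not the paper's formulation.

utility = [[5, 2], [3, 4]]            # utility[pair][donor]
break_prob = [[0.1, 0.5], [0.3, 0.1]]  # probability a match breaks
PENALTY = 6.0                          # second-stage cost per broken match

def ev(matching):
    """matching: tuple of (pair, donor) assignments."""
    first = sum(utility[p][d] for p, d in matching)
    second = sum(break_prob[p][d] * PENALTY for p, d in matching)
    return first - second

# Enumerate both perfect matchings of 2 pairs to 2 donors.
best = max((((0, a), (1, b)) for a, b in permutations(range(2))), key=ev)
```

Here the matching with the highest raw utility also happens to be the most robust one; with different break probabilities the expected penalty can overturn a utility-maximal matching, which is the point of modeling failures explicitly.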

