Comparison of Flow Scheduling Policies for Mix of Regular and Deadline Traffic in Datacenter Environments

2017 ◽  
Author(s):  
Mohammad Noormohammadpour ◽  
Cauligi S. Raghavendra

Datacenters are the main infrastructure on top of which cloud computing services are offered. Such infrastructure may be shared by a large number of tenants and applications generating a spectrum of datacenter traffic. Delay-sensitive applications and applications with specific Service Level Agreements (SLAs) generate deadline-constrained flows, while other applications initiate flows that should be delivered as early as possible. As a result, datacenter traffic is a mix of two types of flows: deadline and regular. Several scheduling policies exist for either traffic type, focusing on minimizing completion times or deadline miss rate. In this report, we apply several scheduling policies to mixed-traffic scenarios while varying the ratio of regular to deadline traffic. We consider First Come First Serve (FCFS), Shortest Remaining Processing Time (SRPT), and Fair Sharing as deadline-agnostic approaches, and a combination of Earliest Deadline First (EDF) with either FCFS or SRPT as deadline-aware schemes. In addition, for the latter, we consider both prioritizing deadline traffic (Deadline First) and prioritizing regular traffic (Deadline Last). We study both light-tailed and heavy-tailed flow size distributions and measure mean, median, and tail flow completion times (FCT) for regular flows, along with Deadline Miss Rate (DMR) and average lateness for deadline flows. We also consider two operating regimes: lightly loaded (low utilization) and heavily loaded (high utilization). We find that the performance of deadline-aware schemes is highly dependent on the fraction of deadline traffic. With light-tailed flow sizes, FCFS performs better in terms of tail times and average lateness, while SRPT performs better in average times and deadline miss rate. For heavy-tailed flow sizes, SRPT performs better in all metrics except tail times.
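The contrast between the two deadline-agnostic policies above can be illustrated with a minimal discrete-time simulation of a single unit-rate link (not from the paper; the flow tuples below are hypothetical):

```python
def simulate(flows, policy):
    """Discrete-time simulation of flow scheduling on one unit-rate link.

    flows: list of (arrival_time, size) tuples with integer values.
    policy: 'fcfs' (non-preemptive, earliest arrival first) or
            'srpt' (preemptive, smallest remaining size first).
    Returns flow completion times (FCTs) indexed like `flows`.
    """
    remaining = [size for _, size in flows]
    fct = [None] * len(flows)
    t = 0
    while any(r > 0 for r in remaining):
        # flows that have arrived and still have data to send
        active = [i for i, (a, _) in enumerate(flows)
                  if a <= t and remaining[i] > 0]
        if not active:
            t += 1
            continue
        if policy == 'srpt':
            i = min(active, key=lambda i: remaining[i])
        else:  # fcfs: the earliest-arrived flow runs to completion
            i = min(active, key=lambda i: flows[i][0])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            fct[i] = t - flows[i][0]
    return fct
```

For a long flow followed by a short one, e.g. `[(0, 10), (1, 2)]`, FCFS yields FCTs `[10, 11]` while SRPT preempts the long flow and yields `[12, 2]`, lowering the mean FCT at the cost of the long flow's tail time, in line with the trade-off reported above.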

2013 ◽  
Vol 660 ◽  
pp. 196-201 ◽  
Author(s):  
Muhammad Irfan ◽  
Zhu Hong ◽  
Nueraimaiti Aimaier ◽  
Zhu Guo Li

Cloud computing is not a revolution; it is an evolution of computer science and technology, advancing by leaps and bounds to merge existing tools and technologies. It is among the most active areas for research into the next generation of computer science. There are a number of cloud service providers (Amazon EC2, Rackspace Cloud, Terremark, and Google Compute Engine), but enterprises and ordinary users still have many concerns about them. Numerous weaknesses, challenges, and open issues remain barriers for cloud service providers seeking to deliver services according to a Service Level Agreement (SLA). In particular, provisioning services according to SLAs, with maximum performance as specified by the SLA, is a core objective of every cloud service provider. We have identified these challenges and issues and propose a new methodology: "SLA (Service Level Agreement) Driven Orchestration Based New Methodology for Cloud Computing Services." Currently, cloud service providers use "orchestrations" fully or partially to automate service provisioning; we instead integrate and drive orchestration flows from SLAs. This is a new approach to provisioning and delivering cloud services as per the SLA while satisfying QoS standards.
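The core idea of driving orchestration from the SLA rather than from a fixed workflow can be sketched as follows; the SLA keys and step names are hypothetical, chosen only for illustration:

```python
def orchestrate_from_sla(sla):
    """Derive provisioning steps from SLA targets: each SLA clause maps
    to an orchestration action, so the flow is generated per tenant
    rather than hard-coded. Keys in `sla` are hypothetical."""
    steps = ["provision_vm"]
    if sla.get("availability", 0.0) >= 0.999:
        steps.append("add_standby_replica")      # high-availability clause
    if sla.get("max_latency_ms", float("inf")) <= 50:
        steps.append("place_near_user_region")   # latency clause
    if sla.get("backup", False):
        steps.append("schedule_daily_snapshot")  # data-protection clause
    return steps
```

Two tenants with different SLAs then receive different orchestration flows from the same entry point, which is the essence of SLA-driven (rather than template-driven) provisioning.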


Author(s):  
Mohit Mathur ◽  
Mamta Madan ◽  
Mohit Chandra Saxena ◽  
...

Emerging technologies like IoT (Internet of Things) and wearable devices such as smart glasses, smart watches, smart bracelets, and smart plasters produce delay-sensitive traffic. Cloud computing services support these technologies by providing resources. Most such services require minimal delay, which remains an open area of research. This paper is an effort toward minimizing the delay in delivering cloud traffic by geographically localizing it through the establishment of cloud mini data centers. The proposed architecture consists of software-defined-network-supported mini data centers connected together. The paper also suggests the use of segment routing for stitching the transport paths between data centers through Software Defined Network (SDN) controllers.
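In segment routing, the controller stitches a path by pushing a stack of segment identifiers (SIDs) at the ingress; each node forwards on the top segment and pops it. A minimal sketch, with a hypothetical topology and SID numbering:

```python
def segment_stack(path, sid):
    """Build the segment-routing label stack for a transport path stitched
    by an SDN controller between mini data centers. `path` is an ordered
    list of node names; `sid` maps each node to its segment ID. The
    ingress (first node) pushes the stack, so its own SID is omitted."""
    return [sid[node] for node in path[1:]]

# Hypothetical mini-data-center topology: ingress 'A' reaches 'D' via 'B', 'C'.
sids = {"A": 16001, "B": 16002, "C": 16003, "D": 16004}
```

Because the full path is encoded at the ingress, intermediate nodes keep no per-flow state, which suits the centralized, controller-driven architecture described above.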


2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Chi-Hua Chen ◽  
Hui-Fei Lin ◽  
Hsu-Chia Chang ◽  
Ping-Hsien Ho ◽  
Chi-Chun Lo

Cloud computing has become a popular topic in both academic and industrial research in recent years. In this paper, network behavior is analyzed to assess and compare the costs and risks of traditional local servers with those of cloud computing in order to determine an appropriate deployment strategy. An analytic framework involving two mathematical models and the analytic hierarchy process is proposed to analyze the costs and service level agreements of services running on traditional local servers and on platform-as-a-service platforms in the cloud. Two websites are used as test sites to analyze the costs and risks of deploying services in Google App Engine (GAE): (1) the website of Information and Finance of Management (IFM) at the National Chiao Tung University (NCTU) and (2) the NCTU website. If the examined websites were deployed in GAE, NCTU would save over 83.34% of the costs associated with using a traditional local server, at low risk. Therefore, both the IFM and NCTU websites can appropriately be served in the cloud. Based on this strategy, a recommendation is offered for managers and professionals.
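The headline saving is straightforward arithmetic over the two cost models; the cost figures below are hypothetical, chosen only so the result matches the 83.34% reported above:

```python
def cloud_savings_percent(local_cost, cloud_cost):
    """Percentage of the traditional local-server cost saved by a PaaS
    deployment over the same billing period."""
    return 100.0 * (local_cost - cloud_cost) / local_cost

# Hypothetical annual costs: local server vs. the same service on PaaS.
saving = cloud_savings_percent(local_cost=60000, cloud_cost=9996)
```

A deployment decision would then weigh this saving against the risk scores produced by the analytic hierarchy process, which is not reproduced here.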


2017 ◽  
Vol 2 (6) ◽  
pp. 1-6
Author(s):  
Arash Mazidi ◽  
Elham Damghanijazi ◽  
Sajad Tofighy

Cloud computing has provided services to users throughout the world in recent years. Cloud computing services are offered according to a "Pay-As-You-Go" model by several leading enterprises. These services grow every day, and the demand requires more infrastructure and Internet providers (IPs). Data center nodes consume a great deal of energy in the cloud infrastructure and release a noticeable amount of carbon dioxide into the environment. In this paper, we define a framework and structure for an energy-efficient cloud environment. Based on this structure, we examine the current problems and challenges, and then present and model resource allocation and management algorithms for the cloud computing environment that manage energy while respecting Service Level Agreements. The proposed algorithm has been implemented in the CloudSim simulator, where results from simulations of real-world data indicate that the proposed method is superior to previous techniques in terms of energy consumption and Service Level Agreement compliance. Likewise, the number of live migrations of virtual machines and the quantity of transferred data are improved.
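A common building block in energy-aware cloud resource allocation is VM consolidation: packing VMs onto as few hosts as possible so that idle hosts can be powered down. A minimal first-fit-decreasing sketch (not the paper's algorithm; integer CPU demands and capacity are hypothetical):

```python
def consolidate(vm_demands, host_capacity):
    """First-fit-decreasing VM placement. Packs VMs onto the fewest hosts
    that respect capacity (a stand-in for the SLA constraint), so the
    remaining hosts can be switched off to save energy.
    Returns a list of hosts, each a list of placed VM demands."""
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)   # fits on an already-active host
                break
        else:
            hosts.append([demand])    # open (power on) a new host
    return hosts
```

With demands `[5, 4, 3, 6, 2]` and capacity `10`, the VMs fit on two hosts instead of five, and the energy model would charge only for the two active hosts.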


2021 ◽  
Vol 9 (3) ◽  
pp. 42-51
Author(s):  
Mohammed Tuays Almuqati

Cloud computing has recently emerged as a new model for hosting and delivering services over the internet. Cloud computing has many advantages, such as the ability to increase capacity or add capabilities without the need to invest in new infrastructure. It can also fulfil technological requirements in a fast and automated manner. In recent years, cloud computing has changed the IT industry; in fact, it is one of the industry's fastest growing phenomena. However, as more information about people and businesses becomes available in the cloud, concerns about the safety of this environment will increase. In addition, some challenges to the use of this service exist. This paper presents the results of a survey about cloud computing and outlines the main concepts of the technology along with examples of appropriate usage. It also discusses resource management issues such as service level agreements and highlights the challenges faced by users when choosing cloud computing services.


2018 ◽  
Author(s):  
Mohammad Noormohammadpour ◽  
Cauligi S. Raghavendra ◽  
Sriram Rao ◽  
Asad M. Madni

Datacenter-based Cloud Computing services provide a flexible, scalable and yet economical infrastructure to host online services such as multimedia streaming, email and bulk storage. Many such services perform geo-replication to provide necessary quality of service and reliability to users, resulting in frequent large inter-datacenter transfers. In order to meet tenant service level agreements (SLAs), these transfers have to be completed prior to a deadline. In addition, WAN resources are quite scarce and costly, meaning they should be fully utilized. Several recently proposed schemes, such as B4 [1], TEMPUS [2], and SWAN [3], have focused on improving the utilization of inter-datacenter transfers through centralized scheduling; however, they fail to provide a mechanism to guarantee that admitted requests meet their deadlines. The authors of a recent study propose Amoeba [4], a system that allows tenants to define deadlines and guarantees that the specified deadlines are met; however, to admit new traffic, that system has to modify the allocation of already admitted transfers. In this paper, we propose Rapid Close to Deadline Scheduling (RCD), a close-to-deadline traffic allocation technique that is fast and efficient. Through simulations, we show that RCD is up to 15 times faster than Amoeba, provides high link utilization along with deadline guarantees, and is able to make quick decisions on whether a new request can be fully satisfied before its deadline.
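The close-to-deadline idea can be sketched on a single link divided into timeslots: a new transfer is packed backwards from its deadline, leaving earlier slots free for future, more urgent requests, and is rejected outright if it cannot fit. This is an illustrative simplification, not the RCD implementation; slot sizes and capacities are hypothetical:

```python
def close_to_deadline_admit(request_size, deadline, capacity, used):
    """Admission control for one link. `used[t]` is bandwidth already
    reserved in timeslot t; reservations are placed from `deadline - 1`
    backwards. Mutates `used` only on admission.
    Returns True iff the request fits entirely before its deadline."""
    alloc = {}
    need = request_size
    for t in range(deadline - 1, -1, -1):
        free = capacity - used[t]
        if free > 0:
            take = min(free, need)
            alloc[t] = take
            need -= take
            if need == 0:
                break
    if need > 0:
        return False  # cannot meet the deadline; reject, state untouched
    for t, amount in alloc.items():
        used[t] += amount
    return True
```

Because an admitted request's allocation is never revisited, admission decisions stay fast, in contrast to schemes that reshuffle existing allocations to fit new traffic.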


2012 ◽  
Vol 546-547 ◽  
pp. 1433-1438 ◽  
Author(s):  
Wei Wei Wang ◽  
Jing Li ◽  
Yuan Yuan Guo ◽  
Hu Song ◽  
Xin Chun Liu

On-demand service is one of the most important characteristics of cloud computing. Cloud computing services should dynamically and promptly deliver computing, storage, and network resources to consumers according to user needs, and the service level should meet quality-of-service requirements. Users then only need to pay for what they use; otherwise, they would have to maintain enough resources to meet peak requirements, which can be costly. In this paper, we present the design and implementation of an auto-scaling system and illustrate its architecture, components, and scaling algorithm. Finally, we test the system, and the results show that it is capable of handling sudden load surges, delivering resources to users on demand, saving cost for users, and improving resource utilization.
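A typical starting point for such a scaling algorithm is a threshold rule on a load metric. The sketch below is a generic illustration, not the paper's algorithm; the thresholds and limits are hypothetical defaults:

```python
def scale_decision(cpu_utilization, instances,
                   upper=0.8, lower=0.3,
                   min_instances=1, max_instances=10):
    """Threshold-based auto-scaling rule: add an instance when average
    CPU utilization exceeds `upper`, remove one when it falls below
    `lower`, otherwise hold steady. Returns the new instance count."""
    if cpu_utilization > upper and instances < max_instances:
        return instances + 1   # scale out to absorb the load surge
    if cpu_utilization < lower and instances > min_instances:
        return instances - 1   # scale in to save cost
    return instances
```

Real systems usually add a cooldown period between decisions to avoid oscillating around the thresholds; that refinement is omitted here for brevity.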


Author(s):  
Sugandh Bhatia ◽  
Jyoteesh Malhotra

The privacy, handling, management, and security of information in a cloud environment are complex and tedious tasks to achieve. With minimal investment and reduced operating costs, an organization can apply the benefits of cloud computing to its business. This computing paradigm is based upon a pay-per-use model. Moreover, security, privacy, compliance, risk management, and service level agreements are critical issues in the cloud computing environment. In fact, there is a dire need for a model that can tackle all of these security and privacy issues. Therefore, we suggest the CSPCR model for evaluating an organization's preparedness to handle or counter the threats and hazards of the cloud computing environment. CSPCR discusses rules and regulations that are considered prerequisites for migrating or shifting to cloud computing services.

