RCD: Rapid Close to Deadline Scheduling for Datacenter Networks

2018 ◽  
Author(s):  
Mohammad Noormohammadpour ◽  
Cauligi S. Raghavendra ◽  
Sriram Rao ◽  
Asad M. Madni

Datacenter-based cloud computing services provide a flexible, scalable and yet economical infrastructure for hosting online services such as multimedia streaming, email and bulk storage. Many such services perform geo-replication to provide the necessary quality of service and reliability to users, resulting in frequent large inter-datacenter transfers. To meet tenant service level agreements (SLAs), these transfers have to be completed prior to a deadline. In addition, WAN resources are scarce and costly, so they should be fully utilized. Several recently proposed schemes, such as B4 [1], TEMPUS [2], and SWAN [3], have focused on improving the utilization of inter-datacenter links through centralized scheduling; however, they provide no mechanism to guarantee that admitted requests meet their deadlines. A more recent proposal, Amoeba [4], allows tenants to define deadlines and guarantees that they are met; however, to admit new traffic, it has to modify the allocation of already admitted transfers. In this paper, we propose Rapid Close to Deadline Scheduling (RCD), a close-to-deadline traffic allocation technique that is fast and efficient. Through simulations, we show that RCD is up to 15 times faster than Amoeba, provides high link utilization along with deadline guarantees, and is able to make quick decisions on whether a new request can be fully satisfied before its deadline.
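
As a rough illustration of the close-to-deadline idea on a single link (not the authors' implementation; the discrete timeslots, single-link model, and function names are assumptions), a minimal Python sketch that packs a new transfer backwards from its deadline and rejects it if the residual capacity is insufficient:

```python
def rcd_admit(volume, deadline, capacity, allocation):
    """Try to admit a transfer by packing it into timeslots as close to its
    deadline as possible, walking backwards toward the current slot (slot 1).
    allocation[t] is the bandwidth already reserved in slot t.
    Returns the per-slot schedule, or None if the request cannot finish
    before its deadline."""
    remaining = volume
    schedule = {}
    for t in range(deadline, 0, -1):
        free = capacity - allocation.get(t, 0)
        if free <= 0:
            continue
        use = min(free, remaining)
        schedule[t] = use
        remaining -= use
        if remaining == 0:
            # enough capacity found: commit the reservation
            for slot, bw in schedule.items():
                allocation[slot] = allocation.get(slot, 0) + bw
            return schedule
    return None  # not enough residual capacity before the deadline

# toy usage: one link with capacity 10 units per slot
link = {}
print(rcd_admit(25, deadline=4, capacity=10, allocation=link))  # admitted
print(rcd_admit(30, deadline=3, capacity=10, allocation=link))  # rejected -> None
```

Packing traffic toward the deadline leaves the earlier slots free for future, possibly tighter, requests, which is what makes the admission decision quick: only the residual capacity before the deadline has to be inspected.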

2012 ◽  
Vol 546-547 ◽  
pp. 1433-1438 ◽  
Author(s):  
Wei Wei Wang ◽  
Jing Li ◽  
Yuan Yuan Guo ◽  
Hu Song ◽  
Xin Chun Liu

On-demand service is one of the most important characteristics of cloud computing. Cloud computing services should dynamically and promptly deliver computing, storage, and network resources to consumers according to their needs, at a service level that meets the required quality of service. Users then pay only for what they use; otherwise they would have to maintain enough resources to cover peak demand, which can be costly. In this paper, we present the design and implementation of an Auto-Scaling system and describe its architecture, components, and scaling algorithm. Finally, we test the system, and the results show that it can handle sudden load surges, deliver resources to users on demand, reduce costs for users, and improve resource utilization.
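
A minimal sketch of the kind of threshold-based scaling decision such a system might make each monitoring interval (the thresholds, metric, and function names are illustrative assumptions, not the paper's algorithm):

```python
def scaling_decision(avg_cpu, current_instances,
                     scale_out_threshold=0.75, scale_in_threshold=0.30,
                     min_instances=1, max_instances=20):
    """Return the new instance count for one monitoring interval:
    scale out when average utilization is high, scale in when it is low,
    otherwise keep the current capacity."""
    if avg_cpu > scale_out_threshold and current_instances < max_instances:
        return current_instances + 1
    if avg_cpu < scale_in_threshold and current_instances > min_instances:
        return current_instances - 1
    return current_instances

# example: a sudden load surge drives utilization up, then demand drops
print(scaling_decision(0.82, current_instances=3))  # -> 4 (scale out)
print(scaling_decision(0.15, current_instances=4))  # -> 3 (scale in)
```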


2021 ◽  
Vol 14 (1) ◽  
pp. 205979912098776
Author(s):  
Joseph Da Silva

Interviews are an established research method across multiple disciplines. Such interviews are typically transcribed orthographically in order to facilitate analysis. Many novice qualitative researchers find manual transcription tedious and time-consuming, although much of the literature holds that the quality of analysis improves when researchers perform this task themselves. This is despite the potential for the exhausting nature of bulk transcription to have, conversely, a negative impact on quality. Other researchers have explored automated methods to ease the task of transcription, more recently using cloud computing services, but such services present challenges to ensuring the confidentiality and privacy of data. In the field of cyber-security these concerns are particularly acute; however, any researcher dealing with confidential participant speech should be uneasy about third-party access to such data. As a result, researchers, particularly early-career researchers and students, may find themselves with no option other than manual transcription. This article presents a secure and effective alternative: building on prior work published in this journal, it describes a method that more than halved interview transcription time for the researcher while maintaining the security of audio data. It presents a comparison between this method and a fully manual method, drawing on data from 10 interviews conducted as part of my doctoral research. The method presented requires an investment in specific equipment which currently supports only the English language.


Author(s):  
Eges Egedigwe

Cloud computing-based technology is becoming increasingly popular as a way to deliver quality education to community colleges, universities, and other organizations. At the same time, compared with other industries, colleges have been slow to implement and sustain cloud computing services at an institutional level because of the budget constraints facing many large community colleges, in addition to other obstacles. Faced with this challenge, key stakeholders increasingly recognize the need to focus on service quality as a way to improve their competitive position in today's highly competitive environment. Despite the amount of research on cloud computing in education, very little has examined the needs and satisfaction of instructors as customers. The purpose of this chapter is to examine instructors' expectations and perceptions of cloud computing-based technology with respect to overall quality of service (QoS) in their respective institutions of higher education.


2012 ◽  
Vol 2 (3) ◽  
pp. 86-97
Author(s):  
Veena Goswami ◽  
Sudhansu Shekhar Patra ◽  
G. B. Mund

Cloud computing is a new computing paradigm in which information and computing services can be accessed from a Web browser by clients. Understanding the characteristics of service performance has become critical for service applications in cloud computing. For the commercial success of this new computing paradigm, the ability to deliver guaranteed Quality of Service (QoS) is crucial. Based on the service level agreement, requests are processed in the cloud centers in different modes. This paper analyzes a finite-buffer multi-server queuing system in which client requests arrive in two modes. It is assumed that each arrival mode is serviced by one or more virtual machines and that both modes have equal probabilities of receiving service. Various performance measures are obtained, and an optimal cost policy is presented with numerical results. A genetic algorithm is employed to search for the optimal values of the system parameters.
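
As a rough companion illustration (the paper's model has two arrival modes; this simpler single-stream M/M/c/K sketch only shows the sort of steady-state measures involved, and omits the genetic-algorithm parameter search):

```python
import math

def mmck_metrics(lam, mu, c, K):
    """Steady-state measures of an M/M/c/K queue: c servers (virtual
    machines) and at most K requests in the system (finite buffer).
    Returns (blocking probability, mean number of requests in system)."""
    rho = lam / mu
    # unnormalized state probabilities p_n
    p = []
    for n in range(K + 1):
        if n <= c:
            p.append(rho ** n / math.factorial(n))
        else:
            p.append(rho ** n / (math.factorial(c) * c ** (n - c)))
    norm = sum(p)
    p = [x / norm for x in p]
    blocking = p[K]                                  # arrival finds system full
    mean_in_system = sum(n * pn for n, pn in enumerate(p))
    return blocking, mean_in_system

# toy example: 4 VMs, room for 10 requests, arrival rate 3, service rate 1
print(mmck_metrics(lam=3.0, mu=1.0, c=4, K=10))
```

Measures such as the blocking probability and the mean number in the system are the typical inputs to the kind of cost function whose parameters a genetic algorithm would then search over.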


1992 ◽  
Vol 18 (2) ◽  
pp. 139-145 ◽  
Author(s):  
Armin Roeseler ◽  
Anneliese von Mayrhauser

2013 ◽  
Vol 660 ◽  
pp. 196-201 ◽  
Author(s):  
Muhammad Irfan ◽  
Zhu Hong ◽  
Nueraimaiti Aimaier ◽  
Zhu Guo Li

Cloud computing is not a revolution; it is an evolution of computer science and technology, advancing by leaps and bounds to bring together a wide range of tools and technologies. It remains one of the most active areas in which to do research and explore new horizons for the next generations of computer science. There are a number of cloud service providers (Amazon EC2, Rackspace Cloud, Terremark and Google Compute Engine), yet enterprises and ordinary users still have a number of concerns about them. Many weaknesses, challenges, and open issues remain barriers for cloud service providers seeking to deliver services according to an SLA (Service Level Agreement). In particular, provisioning services according to SLAs, with maximum performance as specified in the SLA, is a core objective of every cloud service provider. We identify these challenges and issues and propose a new methodology, "SLA (Service Level Agreement) Driven Orchestration Based New Methodology for Cloud Computing Services". Currently, cloud service providers use orchestration, fully or partially, to automate service provisioning; we instead integrate the orchestration flows with, and drive them from, the SLAs. This is a new approach to provisioning and delivering cloud services as per the SLA while satisfying QoS standards.
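
A minimal sketch of what "driving orchestration from the SLA" could look like; the SLA fields and provisioning steps below are purely illustrative assumptions, not the paper's methodology:

```python
# a hypothetical SLA description; field names are assumptions
sla = {
    "tenant": "example-tenant",
    "availability": 0.999,
    "max_response_ms": 200,
    "min_instances": 2,
}

def orchestration_plan(sla):
    """Derive an ordered list of provisioning steps from the SLA terms,
    illustrating the idea of generating the orchestration flow from the
    SLA instead of hand-writing it per tenant."""
    steps = [f"provision {sla['min_instances']} instances"]
    if sla["availability"] >= 0.999:
        steps.append("spread instances across two availability zones")
    if sla["max_response_ms"] <= 250:
        steps.append("attach a load balancer with latency-based routing")
    steps.append("register SLA metrics with the monitoring service")
    return steps

for step in orchestration_plan(sla):
    print(step)
```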


2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Chi-Hua Chen ◽  
Hui-Fei Lin ◽  
Hsu-Chia Chang ◽  
Ping-Hsien Ho ◽  
Chi-Chun Lo

Cloud computing has become a popular topic for exploration in both academic and industrial research in recent years. In this paper, network behavior is analyzed to assess and compare the costs and risks associated with traditional local servers versus those associated with cloud computing, in order to determine the appropriate deployment strategy. An analytic framework for the deployment strategy, involving two mathematical models and the analytic hierarchy process, is proposed to analyze the costs and service level agreements of services running on traditional local servers and on platform-as-a-service platforms in the cloud. Two websites are used as test sites to analyze the costs and risks of deploying services in Google App Engine (GAE): (1) the website of Information and Finance of Management (IFM) at the National Chiao Tung University (NCTU) and (2) the NCTU website. If the examined websites were deployed in GAE, NCTU would save over 83.34% of the costs associated with using a traditional local server, with low risk. Therefore, both the IFM and NCTU websites can be served appropriately in the cloud. Based on this strategy, a suggestion is proposed for managers and professionals.
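
A small sketch of only the final weighted-synthesis step of an AHP-style comparison between the two deployment alternatives; the weights and normalized scores are illustrative assumptions rather than the paper's data, and the pairwise-comparison and consistency-check stages of AHP are omitted:

```python
# criterion weights (would normally come from pairwise comparisons)
criteria_weights = {"cost": 0.6, "risk": 0.4}

# normalized scores per criterion (higher is better); illustrative only
alternatives = {
    "local server": {"cost": 0.25, "risk": 0.45},
    "cloud PaaS":   {"cost": 0.75, "risk": 0.55},
}

def ahp_rank(weights, scores):
    """Return the alternatives sorted by their weighted overall score."""
    totals = {
        name: sum(weights[c] * s[c] for c in weights)
        for name, s in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(ahp_rank(criteria_weights, alternatives))
# -> cloud PaaS ranks first under these illustrative weights
```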


2017 ◽  
Vol 2 (6) ◽  
pp. 1-6
Author(s):  
Arash Mazidi ◽  
Elham Damghanijazi ◽  
Sajad Tofighy

Cloud computing has delivered services to users throughout the world in recent years. These services are built on the 'pay-as-you-go' model and are offered by several leading enterprises. The delivery of cloud computing services grows every day, and this growth requires more infrastructure and Internet providers (IPs). The nodes of data centers consume a great deal of energy in the cloud infrastructure and release a noticeable amount of carbon dioxide into the environment. In this paper, we define a framework and architecture for an energy-efficient cloud environment. Based on this structure, we examine the current problems and challenges, and then present and model management and resource-allocation algorithms for the cloud computing environment that manage energy while also respecting the Service Level Agreement. The proposed algorithm was implemented in the CloudSim simulator, and the results obtained from simulations on real data indicate that the proposed method is superior to previous techniques in terms of energy consumption and observance of the Service Level Agreement. The number of live migrations of virtual machines and the amount of transferred data are also improved.
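
As a hedged illustration of energy-aware placement (not the paper's algorithm; the linear power model, host parameters, and function names are assumptions), a short sketch that places a virtual machine on the host whose power draw grows the least:

```python
def power_increase(host, vm_cpu):
    """Linear power model: idle power plus a utilization-proportional term.
    Coefficients are illustrative, not taken from the paper."""
    idle, peak, capacity = host["idle_w"], host["peak_w"], host["cpu"]
    before = idle + (peak - idle) * host["used"] / capacity
    after = idle + (peak - idle) * (host["used"] + vm_cpu) / capacity
    return after - before

def place_vm(hosts, vm_cpu):
    """Power-aware best fit: choose the host whose power draw grows the
    least, skipping hosts without enough spare CPU capacity."""
    candidates = [h for h in hosts if h["cpu"] - h["used"] >= vm_cpu]
    if not candidates:
        return None  # no feasible host; defer or activate another machine
    best = min(candidates, key=lambda h: power_increase(h, vm_cpu))
    best["used"] += vm_cpu
    return best["name"]

hosts = [
    {"name": "h1", "cpu": 100, "used": 70, "idle_w": 90, "peak_w": 250},
    {"name": "h2", "cpu": 100, "used": 20, "idle_w": 80, "peak_w": 200},
]
print(place_vm(hosts, vm_cpu=25))  # -> "h2": its power draw grows less
```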

