Scheduling System for Cloud Federation across Multi-Data Center

2013 ◽  
Vol 457-458 ◽  
pp. 839-843
Author(s):  
Meng Di Yao ◽  
Dong Lin Chen ◽  
Xin Chen

In a cloud computing federation, federation-level resource scheduling is introduced to allocate users' requested tasks reasonably among providers. At present, the resource scheduling methods and systems of single cloud providers do not apply to the cloud federation environment, so resource scheduling across data centers in a cloud federation has become a key technology. The system presented here centers on cross-data-center resource scheduling algorithms and describes the framework and major functions of a cloud federation environment spanning multiple data centers. Furthermore, a Web-server-based system platform was built on top of CloudSim. Finally, experiments show that the system can meet complicated large-scale user demand and increase the efficiency and profit of resource scheduling among providers in the cloud federation.
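
The abstract does not detail the scheduling policy; the following is a minimal sketch of one possible federation-level scheduler that greedily sends each task to the cheapest provider with spare capacity. Provider names, the cost model, and the greedy rule are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative federation-level scheduler (assumed policy, not the paper's method).
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    capacity_mi: float        # remaining compute budget in million instructions
    price_per_mi: float       # price charged per million instructions

@dataclass
class Task:
    task_id: int
    length_mi: float          # task length in million instructions

def schedule(tasks, providers):
    """Greedily send each task to the cheapest provider that still has capacity."""
    plan = {}
    for task in sorted(tasks, key=lambda t: t.length_mi, reverse=True):
        feasible = [p for p in providers if p.capacity_mi >= task.length_mi]
        if not feasible:
            plan[task.task_id] = None          # no provider can host the task
            continue
        best = min(feasible, key=lambda p: p.price_per_mi)
        best.capacity_mi -= task.length_mi
        plan[task.task_id] = best.name
    return plan

if __name__ == "__main__":
    providers = [Provider("dc-east", 8000, 0.03), Provider("dc-west", 5000, 0.02)]
    tasks = [Task(i, length) for i, length in enumerate([1200, 3000, 800, 4500])]
    print(schedule(tasks, providers))
```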

2021 ◽  
Vol 12 (1) ◽  
pp. 74-83
Author(s):  
Manjunatha S. ◽  
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world, each with a huge number of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs; optimizing the energy consumption of servers and networks in data centers can therefore reduce those costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is to consolidate computation and communication onto a smaller number of servers and network devices and then power off as many unneeded servers and network devices as possible.
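
As a rough illustration of the consolidation idea described above, the sketch below packs VM loads onto as few servers as possible with a first-fit-decreasing heuristic and reports the servers left empty as power-off candidates. The normalized capacities and loads are assumptions, not taken from the paper.

```python
# Illustrative consolidation sketch: first-fit-decreasing packing of VM loads,
# with empty servers treated as candidates for power-off.
def consolidate(vm_loads, n_servers, capacity=1.0):
    servers = [0.0] * n_servers                      # current load per server
    placement = {}
    for vm_id, load in sorted(enumerate(vm_loads), key=lambda x: -x[1]):
        for s in range(n_servers):
            if servers[s] + load <= capacity:        # first server that fits
                servers[s] += load
                placement[vm_id] = s
                break
        else:
            raise RuntimeError(f"VM {vm_id} does not fit on any server")
    powered_off = [s for s, load in enumerate(servers) if load == 0.0]
    return placement, powered_off

if __name__ == "__main__":
    placement, off = consolidate([0.4, 0.3, 0.2, 0.6, 0.1], n_servers=5)
    print("placement:", placement)
    print("servers that can be powered off:", off)
```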


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. Cloud computing provides access to distant computing resources via Web services while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone adds to the energy consumption of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to cool data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


2020 ◽  
Vol 2020 ◽  
pp. 1-17 ◽  
Author(s):  
Ibrahim Attiya ◽  
Mohamed Abd Elaziz ◽  
Shengwu Xiong

In recent years, cloud computing technology has attracted extensive attention from both academia and industry. The popularity of cloud computing originated from its ability to deliver global IT services such as core infrastructure, platforms, and applications to cloud customers over the web. Furthermore, it promises on-demand services with new pricing models. However, cloud job scheduling is still NP-complete and has become more complicated due to factors such as resource dynamicity and on-demand consumer application requirements. To address this problem, this paper presents a modified Harris hawks optimization (HHO) algorithm based on simulated annealing (SA) for scheduling jobs in the cloud environment. In the proposed HHOSA approach, SA is employed as a local search algorithm to improve the convergence rate and the quality of the solutions generated by the standard HHO algorithm. The performance of the HHOSA method is compared with that of state-of-the-art job scheduling algorithms, with all of them implemented on the CloudSim toolkit. Both standard and synthetic workloads are employed to analyze the performance of the proposed HHOSA algorithm. The obtained results demonstrate that HHOSA achieves significant reductions in makespan compared with the standard HHO and other existing scheduling algorithms. Moreover, it converges faster as the search space becomes larger, which makes it appropriate for large-scale scheduling problems.
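
To make the hybrid idea concrete, the following is a heavily simplified sketch of a population-based scheduler with a simulated-annealing refinement step, in the spirit of HHOSA but not the authors' HHO formulation. Job lengths, VM speeds, and all parameters are illustrative assumptions.

```python
# Simplified population search + SA local refinement for makespan minimization
# (illustrative stand-in for the HHOSA idea, not the published algorithm).
import math, random

def makespan(assign, job_len, vm_speed):
    load = [0.0] * len(vm_speed)
    for j, vm in enumerate(assign):
        load[vm] += job_len[j] / vm_speed[vm]
    return max(load)

def perturb(assign, n_vms):
    s = list(assign)
    s[random.randrange(len(s))] = random.randrange(n_vms)   # reassign one job
    return s

def sa_refine(assign, job_len, vm_speed, temp=1.0, cooling=0.95, steps=200):
    best, best_cost = list(assign), makespan(assign, job_len, vm_speed)
    cur, cur_cost = list(best), best_cost
    for _ in range(steps):
        cand = perturb(cur, len(vm_speed))
        cost = makespan(cand, job_len, vm_speed)
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= cooling
    return best, best_cost

def hybrid_schedule(job_len, vm_speed, pop=20, iters=100):
    n_jobs, n_vms = len(job_len), len(vm_speed)
    population = [[random.randrange(n_vms) for _ in range(n_jobs)] for _ in range(pop)]
    best = min(population, key=lambda a: makespan(a, job_len, vm_speed))
    for _ in range(iters):
        for i, hawk in enumerate(population):
            # crude exploration/exploitation: copy some genes from the best solution
            child = [b if random.random() < 0.5 else h for b, h in zip(best, hawk)]
            if makespan(child, job_len, vm_speed) < makespan(hawk, job_len, vm_speed):
                population[i] = child
        best = min(population, key=lambda a: makespan(a, job_len, vm_speed))
        best, _ = sa_refine(best, job_len, vm_speed)
    return best, makespan(best, job_len, vm_speed)

if __name__ == "__main__":
    jobs = [random.uniform(100, 1000) for _ in range(30)]
    vms = [500, 1000, 1500, 2000]
    assign, cost = hybrid_schedule(jobs, vms)
    print("makespan:", round(cost, 2))
```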


2020 ◽  
Vol 17 (9) ◽  
pp. 4156-4161
Author(s):  
Jeny Varghese ◽  
S. Jagannatha

Cloud federation is the interconnection of two or more cloud computing environments in order to share configurable computing resources, such as networks, servers, and applications, that can be dynamically delivered to customers. Virtualization is an integral part of cloud computing that provides manageability and utilization of resources. This paper analyses how the jobs of business applications demand, and how efficiently they use, the capacity of the resources provisioned to VMs, and how this affects application performance. The in-depth assessment is based on two large-scale, continuously collected performance traces gathered in a cloud data center that hosts business tools for running distinct applications, analysed with regard to requested and used resources.
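
A minimal sketch of the kind of requested-versus-used analysis described above is given below; the trace record format and field names are assumptions, not the paper's actual trace schema.

```python
# Illustrative sketch: compare requested versus actually used CPU from trace
# records and report per-VM utilization.
from collections import defaultdict

def utilization_by_vm(records):
    """records: iterable of (vm_id, requested_cpu, used_cpu) samples."""
    requested = defaultdict(float)
    used = defaultdict(float)
    for vm_id, req, use in records:
        requested[vm_id] += req
        used[vm_id] += use
    return {vm: used[vm] / requested[vm] for vm in requested if requested[vm] > 0}

if __name__ == "__main__":
    trace = [("vm-1", 4.0, 1.2), ("vm-1", 4.0, 0.9), ("vm-2", 8.0, 6.5)]
    for vm, ratio in utilization_by_vm(trace).items():
        print(f"{vm}: {ratio:.0%} of requested CPU actually used")
```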


Author(s):  
Shanshan Yang ◽  
Jinjin Chao

Nowadays, the sheer scale of speech recognition resources makes it difficult to ensure scheduling speed and accuracy. In order to improve the scheduling of large-scale speech recognition resources, this paper designs a large-scale speech recognition resource scheduling system based on grid computing. The hardware part comprises a microprocessor, an Ethernet control chip, a controller, and an acquisition card. The software part mainly carries out the retrieval and exchange of information resources, so as to realize information scheduling for large-scale speech recognition resources of the same type. Experimental results show that the designed system achieves a short information scheduling time of at most 2.4 min and a high scheduling accuracy of up to 90%, helping to improve both the speed and the accuracy of information scheduling.


2018 ◽  
Vol 15 (2) ◽  
pp. 437-445 ◽  
Author(s):  
S. Radha ◽  
C. Nelson Kennedy Babu

At present, cloud computing is an emerging technology for processing large data sets efficiently, and with rapid data growth the processing of large-scale data has become a central concern of information systems; customers can now assess the quality of brands and products using the information provided by new digital marketing channels in social media. Thus, every enterprise needs to find and analyze a large amount of digital data in order to develop its reputation among customers. Therefore, this paper proposes SLA (Service Level Agreement) based BDAAs (Big Data Analytic Applications) using adaptive resource scheduling, combined with cloud-based big data sentiment analysis, to provide deep web mining, QoS, and analysis of customer behavior toward products. In this process, a spatio-temporal compression technique is applied to reduce the volume of the big data. The data are classified as positive, negative, or neutral by an SVM with a lexicon dictionary, based on customers' behavior toward a brand or product. In a cloud computing environment, BDAAs are complicated by the need to reduce resource costs and by fluctuating resource requirements. As a result, a common Analytics as a Service (AaaS) platform is needed that delivers BDAAs to customers in different fields as services that are simple to use and low in cost. Therefore, the SLA-based BDAAs are developed to utilize adaptive resource scheduling depending on customer behavior, and they provide visualization and data integrity. The method can preserve the privacy of the cloud owner's information with the help of data integrity and authentication processes. Experimental results show that the proposed cloud-based big data sentiment analysis method for online products classifies customer opinions accurately and that the algorithm is effective in guaranteeing the SLA.
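
As an illustration of lexicon-assisted SVM sentiment classification, the sketch below combines TF-IDF features with a simple lexicon score and trains a linear SVM; the tiny lexicon, the toy training data, and the use of scikit-learn are assumptions, not the paper's pipeline.

```python
# Minimal lexicon-assisted SVM sentiment classifier (illustrative only).
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def lexicon_score(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def featurize(texts, vectorizer, fit=False):
    tfidf = vectorizer.fit_transform(texts) if fit else vectorizer.transform(texts)
    lex = csr_matrix(np.array([[lexicon_score(t)] for t in texts], dtype=float))
    return hstack([tfidf, lex])            # TF-IDF features plus one lexicon column

if __name__ == "__main__":
    train_texts = ["great product, love it", "terrible quality, very bad",
                   "excellent value", "poor support, hate the app"]
    train_labels = ["positive", "negative", "positive", "negative"]
    vec = TfidfVectorizer()
    clf = LinearSVC()
    clf.fit(featurize(train_texts, vec, fit=True), train_labels)
    print(clf.predict(featurize(["the camera is excellent"], vec)))
```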


Author(s):  
Li Mao ◽  
De Yu Qi ◽  
Wei Wei Lin ◽  
Bo Liu ◽  
Ye Da Li

With the rapid growth of energy consumption in global data centers and IT systems, energy optimization has become an important issue to be solved in cloud data centers. By introducing the heterogeneous energy constraints of heterogeneous physical servers in cloud computing, an energy-efficient resource scheduling model for heterogeneous physical servers based on constraint satisfaction problems is presented. A model-solving method based on resource equivalence optimization is proposed, in which resources of the same equivalence class are pruned during allocation so as to reduce the solution space of the resource allocation model and speed up its solution. Experimental results show that, compared with DynamicPower and MinPM, the proposed algorithm (EqPower) not only improves the performance of resource allocation but also reduces the energy consumption of the cloud data center.
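
The resource-equivalence idea can be illustrated as follows: when searching for a host, servers with identical remaining resources are treated as one equivalence class and only one representative is considered, pruning symmetric branches. The two-dimensional (CPU, memory) resource model below is an assumption, not the paper's constraint model.

```python
# Illustrative equivalence-class pruning for server selection.
def representatives(servers):
    """servers: dict name -> (free_cpu, free_mem); keep one name per distinct state."""
    seen = {}
    for name, state in servers.items():
        seen.setdefault(state, name)          # first server seen for each state
    return list(seen.values())

def place(vm_demand, servers):
    cpu_d, mem_d = vm_demand
    for name in representatives(servers):     # branch only on class representatives
        cpu, mem = servers[name]
        if cpu >= cpu_d and mem >= mem_d:
            servers[name] = (cpu - cpu_d, mem - mem_d)
            return name
    return None

if __name__ == "__main__":
    servers = {"s1": (16, 64), "s2": (16, 64), "s3": (8, 32)}   # s1 and s2 equivalent
    print(place((4, 16), servers))   # only one of s1/s2 is ever considered
```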


Author(s):  
Qiao SUN ◽  
Chun-guang ZHANG ◽  
Qiong WANG ◽  
Lei SUN ◽  
Lan-mei FU ◽  
...  

2019 ◽  
Vol 5 ◽  
pp. e211
Author(s):  
Hadi Khani ◽  
Hamed Khanmirza

Cloud computing technology has been a game changer in recent years. Cloud computing providers promise cost-effective and on-demand computing resources for their users and run users' workloads as virtual machines (VMs) in large-scale data centers consisting of a few thousand physical servers. Cloud data centers face highly dynamic workloads that vary over time and include many short tasks demanding quick resource management decisions; the data centers are large in scale and their workload behavior is unpredictable. Each incoming VM must be assigned to a proper physical machine (PM) in order to keep a balance between power consumption and quality of service. Because the scale and agility of cloud data centers are unprecedented, previous approaches fall short. We suggest an analytical model for cloud data centers in which the number of PMs is large. In particular, we focus on the assignment of VMs to PMs regardless of their current load. For exponential VM arrivals with generally distributed sojourn times, the mean power consumption is calculated. We then show that the minimum power consumption under a quality-of-service constraint is achieved by randomized assignment of incoming VMs to PMs. Extensive simulations support the validity of our analytical model.
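
A rough Monte Carlo counterpart to the randomized-assignment setting is sketched below: VMs arrive as a Poisson process, each is placed on a uniformly random PM, stays for a generally distributed sojourn time, and an occupied PM draws idle power plus a per-VM increment. All rates and power figures are assumptions used only to exercise the code, not values from the paper.

```python
# Illustrative Monte Carlo simulation of random VM-to-PM assignment (not the
# paper's analytical model). Reports mean power over the simulated horizon.
import heapq, random

def simulate(n_pms=100, arrival_rate=20.0, sojourn_scale=2.0,
             idle_w=100.0, per_vm_w=30.0, horizon=500.0):
    occupancy = [0] * n_pms
    departures = []                          # heap of (departure_time, pm_index)
    t, energy = 0.0, 0.0
    next_arrival = random.expovariate(arrival_rate)
    while t < horizon:
        t_next = min(next_arrival, departures[0][0] if departures else float("inf"))
        t_next = min(t_next, horizon)
        power = sum(idle_w + per_vm_w * n for n in occupancy if n > 0)
        energy += power * (t_next - t)       # integrate power over this interval
        t = t_next
        if t >= horizon:
            break
        if departures and departures[0][0] <= next_arrival:
            _, pm = heapq.heappop(departures)
            occupancy[pm] -= 1
        else:
            pm = random.randrange(n_pms)     # randomized assignment
            occupancy[pm] += 1
            # lognormal sojourn as one example of a "general" distribution
            heapq.heappush(departures, (t + random.lognormvariate(0, 1) * sojourn_scale, pm))
            next_arrival = t + random.expovariate(arrival_rate)
    return energy / horizon                  # mean power over the horizon

if __name__ == "__main__":
    print("estimated mean power (W):", round(simulate(), 1))
```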

