Performance Evaluation of Cloud Systems by Switching the Virtual Machines Power Mode Between the Sleep Mode and Active Mode

Author(s):  
Sudhansu Shekhar Patra ◽  
Veena Goswami ◽  
G. B. Mund

Data centers are cost-effective infrastructures for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world with huge numbers of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs. In this chapter, we study energy savings in data centers achieved by consolidation and by switching off virtual machines that are not in use. Under this policy, c virtual machines continue serving customers until the number of idle servers reaches the threshold level d, at which point d idle servers take a synchronous vacation simultaneously; otherwise, the servers keep serving customers. Numerical results demonstrate the applicability of the proposed model to data center management and, in particular, quantify theoretically the tradeoff between the conflicting aims of energy efficiency and the Quality of Service (QoS) requirements specified by cloud tenants.
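A minimal sketch of the threshold policy described above, with hypothetical parameter names: c virtual machines serve customers, and once the number of idle VMs reaches the threshold d, those d VMs are switched to sleep mode together (the "synchronous vacation"). The queueing analysis in the chapter is far richer; this only illustrates the switching rule.

```python
def apply_threshold_policy(c, d, busy):
    """Return (active, sleeping) VM counts for `busy` busy servers
    out of `c` total, with sleep threshold `d`."""
    idle = c - busy
    if idle >= d:
        # d idle servers take a synchronous vacation simultaneously
        return c - d, d
    # otherwise all servers remain in active mode
    return c, 0

active, sleeping = apply_threshold_policy(c=10, d=4, busy=5)
# 5 idle servers >= threshold 4, so 4 VMs sleep and 6 stay active
```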

2018 ◽  
Vol 5 (2) ◽  
pp. 1-20
Author(s):  
Sudhansu Shekhar Patra ◽  
Veena Goswami

Thanks to advancements in virtualization technology, cloud computing is an up-and-coming field and has become an increasingly appealing area of internet technology. Since the demand for computational power by scientific, business, and web applications is growing rapidly, large-scale data centers have been created. These data centers consume enormous amounts of electrical power. In this article, the authors study energy-saving methods based on consolidation and on switching off virtual machines that are not in use. Under this policy, c virtual machines continue serving customers until the number of idle servers reaches the threshold level d, at which point d idle servers take a synchronous vacation simultaneously; otherwise, the servers keep serving customers. Numerical results demonstrate the applicability of the proposed model to data center management and, in particular, quantify theoretically the tradeoff between the conflicting aims of energy efficiency and QoS.


2021 ◽  
Vol 12 (1) ◽  
pp. 74-83
Author(s):  
Manjunatha S. ◽  
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world with huge numbers of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs; optimizing the energy consumption of servers and networks in data centers can therefore reduce operational costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is to consolidate computation and communication onto a smaller number of servers and network devices, and then power off as many unneeded servers and network devices as possible.


2018 ◽  
Vol 11 (2) ◽  
pp. 88-109
Author(s):  
Devki Nandan Jha ◽  
Deo Prakash Vidyarthi

Cloud computing is a technological advancement that provides services as a utility on a pay-per-use basis. As the cloud market expands, numerous service providers are joining the cloud platform with their services. This creates indecision among users when choosing an appropriate service provider, especially when a cloud provider provisions diverse types of virtual machines. The problem becomes more challenging when the user has different jobs requiring specific quality of service. To address this problem, this article applies a hybrid heuristic combining the College Admission Problem and the Analytic Hierarchy Process for stable matching of users' jobs with the cloud's virtual machines. A case study depicts the effectiveness of the proposed model.
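A hedged sketch of the deferred-acceptance (Gale-Shapley) step behind the College Admission Problem: jobs "propose" to VMs in order of preference (which the article derives via AHP scoring), and each VM keeps its best proposer so far. A one-to-one matching is shown for brevity; the input preference lists here are illustrative assumptions.

```python
def stable_match(job_prefs, vm_prefs):
    """job_prefs/vm_prefs: dict -> ordered list of preferred partners.
    Returns a stable job -> VM assignment."""
    # precompute each VM's ranking of jobs (lower index = preferred)
    rank = {v: {j: i for i, j in enumerate(prefs)}
            for v, prefs in vm_prefs.items()}
    free = list(job_prefs)               # jobs not yet matched
    next_choice = {j: 0 for j in job_prefs}
    engaged = {}                         # vm -> job
    while free:
        job = free.pop(0)
        vm = job_prefs[job][next_choice[job]]
        next_choice[job] += 1
        if vm not in engaged:
            engaged[vm] = job
        elif rank[vm][job] < rank[vm][engaged[vm]]:
            free.append(engaged[vm])     # VM prefers the new proposer
            engaged[vm] = job
        else:
            free.append(job)             # proposal rejected
    return {j: v for v, j in engaged.items()}
```

Running it on two jobs and two VMs with conflicting first choices yields the unique stable assignment, with no job-VM pair preferring each other over their assigned partners.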


2018 ◽  
Vol 8 (4) ◽  
pp. 118-133 ◽  
Author(s):  
Fahim Youssef ◽  
Ben Lahmar El Habib ◽  
Rahhali Hamza ◽  
Labriji El Houssine ◽  
Eddaoui Ahmed ◽  
...  

Cloud users access services on a "pay as you go" basis. The daily increase in cloud users may decrease the performance, availability, and profitability of the hardware and software resources used in a cloud service. These challenges have been addressed by several load-balancing algorithms operating between the virtual machines of the data centers. To develop a new load-balancing improvement, this article is organized around two research axes. The first is the pre-classification of tasks depending on whether their characteristics are satisfied or not (a notion of levels); this technique relies on modeling task classification in ascending order using techniques that calculate the worst-case execution time (WCET). The second concerns distributed data centers with quasi-similar virtual machines, where the relationship between virtual machines using the pre-scheduling levels is modeled within the data center in terms of standard mathematical functions that control this relationship. The key point of the improvement is considering the current load of a data center's virtual machine and pre-estimating the execution time of a task before any allocation. This contribution allows cloud service providers to improve performance and availability and to maximize the utilization of virtual machines in their data centers.
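A sketch of the first axis described above, under assumed structures: tasks are pre-classified into levels in ascending order of an estimated worst-case execution time (WCET) before any allocation. The WCET values here are hypothetical inputs; real WCET estimation is a separate static-analysis or measurement step.

```python
def classify_by_wcet(tasks, levels=3):
    """tasks: list of (name, wcet_estimate) pairs.
    Returns level -> task names, with level 0 holding the shortest tasks."""
    ordered = sorted(tasks, key=lambda t: t[1])   # ascending WCET order
    size = -(-len(ordered) // levels)             # ceiling division
    return {lvl: [name for name, _ in ordered[lvl * size:(lvl + 1) * size]]
            for lvl in range(levels)}

tiers = classify_by_wcet([("t1", 40), ("t2", 5), ("t3", 120), ("t4", 12)],
                         levels=2)
# short-WCET tasks t2 and t4 land in level 0; t1 and t3 in level 1
```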


Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 852 ◽  
Author(s):  
Sajid Latif ◽  
Syed Mushhad Gilani ◽  
Rana Liaqat Ali ◽  
Misbah Liaqat ◽  
Kwang-Man Ko

The interconnected cloud (Intercloud) federation is an emerging paradigm that revolutionizes the scalable service provision of geographically distributed resources. Large-scale distributed resources require well-coordinated and automated frameworks to facilitate service provision in a seamless and systematic manner. Unquestionably, standalone service providers must communicate and federate their cloud sites with other vendors to enable the infinite pooling of resources. The pooling of these resources provides uninterrupted services to a growing population of cloud users more efficiently, and ensures an improved Service Level Agreement (SLA). However, research on Intercloud resource management is in its infancy. Therefore, standard interfaces, protocols, and uniform architectural components need to be developed for seamless interaction among federated clouds. In this study, we propose a distributed meta-brokering-enabled scheduling framework for the provision of user application services in the federated cloud environment. The modularized architecture of the proposed system, with uniform configuration in participating resource sites, orchestrates the critical operations of resource management effectively and forms the federation schema. Overlaid meta-brokering instances are implemented on top of local resource brokers to keep the global functionality isolated. These instances in the overlay topology communicate in a P2P manner to maintain decentralization, high scalability, and load manageability. The proposed framework has been implemented and evaluated by extending the Java-based CloudSim 3.0.3 simulation application programming interfaces (APIs). The presented results validate the proposed model and its efficiency in facilitating user application execution with the desired QoS parameters.
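An illustrative sketch (the names and data shapes are assumptions, not the paper's API) of the kind of decentralized decision each overlay meta-broker makes: forward a user request to the peer site whose local broker reports the lowest load.

```python
def select_site(peer_loads):
    """peer_loads: dict of site name -> load reported by the local
    broker, normalized to [0, 1]. Returns the least-loaded site."""
    return min(peer_loads, key=peer_loads.get)

site = select_site({"site-A": 0.82, "site-B": 0.31, "site-C": 0.55})
# the request is dispatched to site-B, the least-loaded peer
```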


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Sara Handouf ◽  
Essaid Sabir

Nowadays, ubiquitous network access has become a reality, thanks to Unmanned Aerial Vehicles (UAVs), which have gained extreme popularity due to their flexible deployment and higher chance of line-of-sight links to ground users. Telecommunication service providers deploy UAVs to provide aerial network access in remote rural areas, disaster-affected areas, or massively attended events (sports venues, festivals, etc.), where a full setup to provide temporary wireless coverage would be very expensive. Of course, a UAV is battery-powered, with a limited energy budget for both mobility and communication. An efficient solution is to allow UAVs to switch their radio modules to sleep mode in order to extend battery lifetime. This results in temporary unavailability of the communication feature. In such a situation, the ultimate goal for a UAV operator is to provide a cost-effective service with acceptable availability. This would allow meeting a target quality of service while holding a good market share that grants satisfactory benefits. In this article, we exhibit a new framework with many interesting insights into how to jointly define the availability and the access cost in UAV-empowered flying access networks to opportunistically cover a target geographical area. We construct a duopoly model to capture the adversarial behavior of service providers in terms of their pricing policies and their respective availability probabilities. Optimal periodic beaconing (advertising the presence of the UAV) is also addressed, given the UAVs' limited battery capacity and their recharging constraints. A full analysis of the game, both in terms of equilibrium pricing and equilibrium availability, is derived. We show that the availability-pricing game exhibits some nice features: it is submodular with respect to the availability policy, whereas it is supermodular with respect to the service fee.
Furthermore, we implement a learning scheme using best response dynamics that allows operators to learn their joint pricing-availability strategies in a fast, accurate, and distributed fashion. Extensive simulations show convergence of the proposed scheme to the joint pricing-availability equilibrium and offer promising insights into how the game parameters should be chosen to efficiently control the duopoly game.
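A hedged sketch of best-response dynamics for a two-operator pricing game. The profit function below is a made-up placeholder (demand decreases in one's own price and increases in the rival's); the paper's actual pricing-availability utilities are more involved, but the alternating-best-response loop is the same idea.

```python
def best_response(rival_price, prices):
    """Best reply over a finite price grid against the rival's price.
    Placeholder profit: price * demand, demand = 10 - 2p + rival_price."""
    return max(prices, key=lambda p: p * (10 - 2 * p + rival_price))

def run_dynamics(prices, rounds=50):
    """Players alternate best responses; a fixed point is a Nash
    equilibrium of the placeholder game."""
    p1 = p2 = prices[0]
    for _ in range(rounds):
        p1 = best_response(p2, prices)
        p2 = best_response(p1, prices)
    return p1, p2

equilibrium = run_dynamics([1, 2, 3, 4, 5])
# the dynamics settle at a symmetric price pair
```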


2020 ◽  
Vol 17 (9) ◽  
pp. 3904-3906
Author(s):  
Susmita J. A. Nair ◽  
T. R. Gopalakrishnan Nair

Increasing demand for computing resources and the popularity of cloud computing have led organizations to establish large-scale data centers. To handle varying workloads, allocating resources to virtual machines and placing the VMs in the most suitable physical machines at data centers without violating the Service Level Agreement remain big challenges for cloud providers. Energy consumption and performance degradation are the prime focus for data centers providing services while strictly following the SLA. In this paper, we suggest a model for minimizing energy consumption and performance degradation without violating the SLA. The experiments conducted have shown a reduction in SLA violations by nearly 10%.


Author(s):  
Deepika T. ◽  
Prakash P.

The flourishing development of the cloud computing paradigm provides several services to the industrial business world. Power consumption by cloud data centers is one of the crucial issues for service providers in the domain of cloud computing. Owing to rapid technology enhancements in cloud environments and data center augmentation, power utilization in data centers is expected to grow unabated. A diverse set of numerous connected devices, engaged with the ubiquitous cloud, results in unprecedented power utilization by the data centers, accompanied by increased carbon footprints. Nearly a million physical machines (PMs) are running across the data centers, along with 5-6 million virtual machines (VMs). In the next five years, the power needs of this domain are expected to spiral up to 5% of global power production. Reducing VM power consumption in turn diminishes PM power consumption; moreover, predicting the year-by-year change in data center power consumption can aid cloud vendors, since sudden fluctuations in power utilization can cause power outages in cloud data centers. This paper aims to forecast VM power consumption with the help of regressive predictive analysis, one of the Machine Learning (ML) techniques. The approach makes better predictions of future values using a Multi-Layer Perceptron (MLP) regressor, which provides 91% accuracy during the prediction process.
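A minimal, self-contained sketch of regressive predictive analysis for VM power draw: a tiny one-hidden-layer perceptron regressor trained by stochastic gradient descent. The training data (normalized CPU load to normalized power) is synthetic and the network is deliberately small; the article uses a full MLP regressor on real traces.

```python
import math
import random

def train_mlp(data, hidden=4, lr=0.05, epochs=2000, seed=0):
    """Fit a 1-input, 1-output MLP with `hidden` tanh units by SGD.
    Returns a predictor function."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
            pred = sum(w2[i] * h[i] for i in range(hidden)) + b2
            err = pred - y
            for i in range(hidden):                        # backpropagation
                grad_h = err * w2[i] * (1 - h[i] ** 2)
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * grad_h * x
                b1[i] -= lr * grad_h
            b2 -= lr * err
    return lambda x: sum(w2[i] * math.tanh(w1[i] * x + b1[i])
                         for i in range(hidden)) + b2

# synthetic trace: normalized CPU load -> normalized power draw
data = [(x / 10, 0.3 + 0.5 * (x / 10)) for x in range(11)]
predict = train_mlp(data)
```

After training, `predict` approximates the load-to-power relationship and can be queried for loads not in the training set.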


Author(s):  
Bhupesh Kumar Dewangan ◽  
Amit Agarwal ◽  
Venkatadri M. ◽  
Ashutosh Pasricha

Cloud computing is a platform where services are provided through the internet either free of cost or on a rental basis. Many cloud service providers (CSPs) offer cloud services on a rental basis. Due to the increasing demand for cloud services, the existing infrastructure needs to be scaled. However, scaling comes at the cost of heavy energy consumption due to the inclusion of additional data centers and servers. The extraneous power consumption affects operating costs, which in turn affects users. In addition, CO2 emissions affect the environment as well. Moreover, inadequate allocation of resources like servers, data centers, and virtual machines increases operational costs. This may ultimately distract customers from the cloud service. In all, optimal usage of the resources is required. This paper proposes calculating different multi-objective functions to find the optimal solution for resource utilization and allocation through an improved Antlion optimization (ALO) algorithm. The proposed method is simulated in CloudSim environments; it computes energy consumption for different workload quantities and improves the performance of the multi-objective functions to maximize resource utilization. Compared with existing frameworks, experimental results show that the proposed framework performs best.
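A simplified sketch of the kind of multi-objective fitness an optimizer such as the improved Antlion algorithm would evaluate for each candidate VM placement: weighted energy consumption traded off against resource utilization. The weights, normalization, and candidate values are illustrative assumptions, not the paper's formulation.

```python
def fitness(energy_kwh, utilization, w_energy=0.5, w_util=0.5):
    """Scalarized multi-objective score; lower is better.
    Penalizes energy, rewards utilization in [0, 1]."""
    return w_energy * energy_kwh - w_util * utilization

# two hypothetical placements: (energy in kWh, resource utilization)
candidates = {"placement-A": (12.0, 0.60), "placement-B": (10.5, 0.85)}
best = min(candidates, key=lambda c: fitness(*candidates[c]))
# placement-B consumes less energy and utilizes resources better
```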


Author(s):  
Hao Lv ◽  
Fu-Ying Dao ◽  
Zheng-Xing Guan ◽  
Hui Yang ◽  
Yan-Wen Li ◽  
...  

As a newly discovered protein posttranslational modification, histone lysine crotonylation (Kcr) is involved in cellular regulation and human diseases. Various proteomics technologies have been developed to detect Kcr sites. However, experimental approaches for identifying Kcr sites are often time-consuming and labor-intensive, which makes them difficult to apply widely across species. Computational approaches are cost-effective and can be used in a high-throughput manner to generate relatively precise identifications. In this study, we develop a deep-learning-based method termed Deep-Kcr for Kcr site prediction by combining sequence-based features, physicochemical-property-based features, and numerical-space-derived information with information-gain feature selection. We investigate the performance of a convolutional neural network (CNN) and five commonly used classifiers (long short-term memory network, random forest, LogitBoost, naive Bayes, and logistic regression) using 10-fold cross-validation and an independent test set. Results show that the CNN consistently displays the best performance with high computational efficiency on large datasets. We also compare Deep-Kcr with other existing tools to demonstrate the excellent predictive power and robustness of our method. Based on the proposed model, a webserver called Deep-Kcr was established and is freely accessible at http://lin-group.cn/server/Deep-Kcr.
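A sketch of the simplest kind of sequence-based feature a CNN like Deep-Kcr's consumes: one-hot encoding of a peptide window around the candidate lysine. The 20-letter amino-acid alphabet is standard; the toy window below is an arbitrary example, not from the paper's dataset.

```python
# standard 20 amino acids, one column per letter
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide):
    """Return a len(peptide) x 20 matrix of 0/1 indicator rows,
    one row per residue."""
    return [[1 if aa == ref else 0 for ref in AMINO_ACIDS]
            for aa in peptide]

matrix = one_hot("AKCR")   # toy 4-residue window
# each row contains exactly one 1, marking that residue's alphabet position
```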

