Model for Estimation of the Data Center Response Time

2014 ◽  
Vol 15 (09) ◽  
Author(s):  
Yuri Nesterov

Load balancing algorithms and service broker policies play a crucial role in determining the performance of cloud systems. User response time and data center request servicing time are largely affected by the load balancing algorithm and service broker policy in use. Several load balancing algorithms and service broker policies exist in the literature to perform data center allocation and virtual machine allocation for a given set of user requests. In this paper, we investigate the performance of the equally spread current execution (ESCE) load balancing algorithm with the closest data center (CDC) service broker policy in a cloud environment that consists of homogeneous and heterogeneous device characteristics in data centers and heterogeneous communication bandwidth between the different regions where cloud data centers are deployed. We performed a simulation using CloudAnalyst, an open source tool, with different settings of device characteristics and bandwidth. The user response time and data center request servicing time are found to be considerably lower in the heterogeneous environment.
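The two policies named above can be sketched in a few lines; the function and variable names here are illustrative, not CloudAnalyst's API. The CDC broker picks the data center with the lowest latency from the user's region, and ESCE assigns each request to the VM with the fewest currently executing tasks.

```python
# Hypothetical sketch of ESCE allocation with a closest-data-center broker.
# Names (closest_data_center, esce_allocate) are illustrative.

def closest_data_center(user_region, latency_map):
    """Pick the data center with the lowest latency from the user's region."""
    return min(latency_map[user_region], key=latency_map[user_region].get)

def esce_allocate(active_tasks):
    """Equally Spread Current Execution: choose the VM with the fewest
    currently executing tasks."""
    return min(active_tasks, key=active_tasks.get)

latency = {"region-0": {"dc-a": 12.5, "dc-b": 48.0}}
dc = closest_data_center("region-0", latency)   # -> "dc-a"

vms = {"vm-0": 3, "vm-1": 1, "vm-2": 2}
vm = esce_allocate(vms)                          # -> "vm-1"
vms[vm] += 1                                     # record the new assignment
```

The ESCE counter table is what keeps the spread "equal": every allocation increments the chosen VM's count, so the next request naturally flows to a lighter VM.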


2019 ◽  
Vol 16 (4) ◽  
pp. 627-637
Author(s):  
Sanaz Hosseinzadeh Sabeti ◽  
Maryam Mollabgher

Goal: Load balancing policies often map workloads onto virtual machines, aiming to create an almost equal level of workload on every virtual machine. In this research, a hybrid load balancing algorithm is proposed with the aim of reducing response time and processing time. Design / Methodology / Approach: The proposed algorithm performs load balancing using a table containing the status indicators of the virtual machines and the task list allocated to each virtual machine. Response time and processing time in the data centers are evaluated for four algorithms: ESCE, Throttled, Round Robin, and the proposed algorithm. Results: The overall response time and data processing time in the proposed algorithm's data center are shorter than those of the other algorithms, improving response time and data processing time in the data center. The overall response time results show that the response time of the proposed algorithm is 12.28% shorter than that of the Round Robin algorithm, 9.1% shorter than that of the Throttled algorithm, and 4.86% shorter than that of the ESCE algorithm. Limitations of the investigation: Due to time and technical limitations, load balancing has not been pursued with further goals, such as lowering costs and increasing productivity. Practical implications: The implementation of a hybrid load balancing policy can improve response time and processing time. Load balancing causes the traffic load between virtual machines to be properly distributed and prevents bottlenecks, which is effective in increasing customer responsiveness. Finally, improving response time increases the satisfaction of cloud users and the productivity of computing resources. Originality/Value: This research can be effective in optimizing existing algorithms and takes a step towards further research in this area.
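The status-table mechanism described in the methodology can be sketched as follows. This is a minimal reading of the abstract, not the paper's actual implementation: one table tracks each VM's availability flag and its allocated task list, and new work goes to the available VM with the shortest list.

```python
# Illustrative sketch of a status-table load balancer: availability flags
# plus per-VM task lists, as described in the abstract.

class StatusTable:
    def __init__(self, vm_ids):
        self.available = {vm: True for vm in vm_ids}
        self.tasks = {vm: [] for vm in vm_ids}

    def allocate(self, task):
        # Among available VMs, pick the one with the shortest task list.
        candidates = [vm for vm, ok in self.available.items() if ok]
        vm = min(candidates, key=lambda v: len(self.tasks[v]))
        self.tasks[vm].append(task)
        return vm

table = StatusTable(["vm-0", "vm-1"])
table.allocate("t1")    # both lists empty -> "vm-0"
table.allocate("t2")    # "vm-0" now has one task -> "vm-1"
```

Keeping both the flag and the task list in one table is what lets a hybrid policy combine Throttled-style availability checks with ESCE-style load counting.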


2013 ◽  
Vol 336-338 ◽  
pp. 2549-2554
Author(s):  
Jian Xiang Li ◽  
Xiang Zhen Kong ◽  
Yi Nan Lv

Power provisioning is becoming the most important constraint on data center development, so efficiently managing power consumption according to data center load is urgent. In this paper, we present the Request-Response Hierarchical Power Management (RRHPM) model for data centers and, based on queuing theory, analyse the performance and constraints of two hierarchical-structure implementation strategies for RRHPM. Numerical results show that the Equal Utilization Strategy yields a lower average response time, can manage more service nodes under the same response-time threshold, and requires fewer power management nodes than the popular Equal Degree Strategy.
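The paper's two strategies are not detailed in this abstract, but the queuing-theory ingredient behind such response-time comparisons is standard: the average response time of an M/M/m node via the Erlang-C formula. The sketch below computes it; treat it as a worked example of the model class, not as the RRHPM derivation itself.

```python
# Average response time W of an M/M/m queue (Erlang-C), a common model for
# a management node serving requests from m downstream nodes.
from math import factorial

def mmm_response_time(lam, mu, m):
    """lam: arrival rate, mu: per-server service rate, m: servers."""
    rho = lam / (m * mu)
    assert rho < 1, "queue must be stable (rho < 1)"
    a = lam / mu  # offered load in Erlangs
    tail = a**m / (factorial(m) * (1 - rho))
    erlang_c = tail / (sum(a**k / factorial(k) for k in range(m)) + tail)
    # waiting time of delayed customers plus one service time
    return erlang_c / (m * mu - lam) + 1 / mu

mmm_response_time(0.5, 1.0, 1)   # M/M/1 case: 1/(mu - lam) = 2.0
mmm_response_time(1.0, 1.0, 2)   # two servers at half utilization
```

For m = 1 the formula collapses to the familiar M/M/1 result 1/(mu - lam), which is a quick sanity check on the implementation.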


Author(s):  
Shu Zhang ◽  
Yu Han ◽  
Nishi Ahuja ◽  
Xiaohong Liu ◽  
Huahua Ren ◽  
...  

In recent years, the internet services industry has been developing rapidly. Accordingly, the demands for compute and storage capacity continue to increase, and internet data centers are consuming more power than ever before to provide this capacity. Based on a Frost & Sullivan market survey, data centers across the globe now consume around 100GWh of energy, and this consumption is expected to increase 30% by 2016. With development expanding, IDC (Internet Data Center) owners realize that small improvements in efficiency, from architecture design to daily operations, will yield large cost reduction benefits over time. Cooling energy is a significant part of the daily operational expense of an IDC. One trend in this industry is to raise the operational temperature of an IDC, which means running IT equipment in an HTA (Higher Ambient Temperature) environment. This might also include cooling improvements such as water-side or air-side economizers, which can be used in place of traditional closed-loop CRAC (Computer Room Air Conditioner) systems. But raising the ambient inlet air temperature cannot be done by itself, without looking at more effective ways of managing cooling control and considering thermal safety. An important trend seen in industry today is customized design of IT (Information Technology) equipment and IDC infrastructure by the cloud service provider. This trend brings an opportunity to consider IT and IDC together when designing an IDC, from the early design phase to the daily operation phase, when facing the challenge of improving efficiency. It also provides a chance to get more potential benefit out of higher operational temperature. The advantages and key components of a customized rack server design include reduced power consumption, more thermal margin with less fan power, and accurate thermal monitoring. Accordingly, the specific IDC infrastructure can be re-designed for high temperature operation.
Raising the supply air temperature always means less thermal headroom for IT equipment, so IDC operators have less response time when large power variations or IDC failures occur. This paper introduces a new solution called ODC (On-Demand Cooling) with PTAS (Power Thermal Aware Solution) technology to deal with these challenges. The ODC solution uses real-time thermal data from the IT equipment itself, rather than traditional ceiling-installed sensors, as the key input to the cooling controls. It helps to improve cooling control accuracy, decrease response time, and reduce temperature variation. By establishing a smart thermal operation with characteristics like direct feedback, accurate control, and quick response, HTA can safely be achieved with confidence. The results of real demo testing show that, with real-time thermal information, temperature oscillation and response time can be reduced effectively.
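The control idea — drive cooling from the equipment's own thermal telemetry rather than a ceiling sensor — can be sketched as a simple proportional loop. The setpoint, gain, and duty-cycle bounds below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of an on-demand cooling loop driven by per-server outlet
# temperatures. Setpoint (35 C), gain, and base duty are made-up values.

def cooling_output(readings_c, setpoint_c=35.0, gain=0.08, base=0.2):
    """Proportional control on the hottest reported temperature: airflow
    rises only when a server actually approaches its thermal limit.
    Returns a fan duty cycle clamped to [base, 1.0]."""
    hottest = max(readings_c)
    error = hottest - setpoint_c
    return min(1.0, max(base, base + gain * error))

cooling_output([31.0, 33.5, 34.0])   # all below setpoint -> stays at base duty
cooling_output([31.0, 39.0, 44.0])   # hot spot detected -> duty rises
```

Because the input is the hottest real server reading, the loop reacts directly to a local hot spot instead of waiting for warm air to drift up to a room-level sensor, which is the accuracy and response-time gain the abstract claims.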


2019 ◽  
Vol 8 (4) ◽  
pp. 11915-11921

Cloud computing is the most widely adopted technology in recent use, where bigger and more complex applications are deployed and run on cloud servers. Multi-scale applications are applications with multiple attributes (data size, number of requests, number of concurrent users, etc.) that can cause poor performance of the application service. For better performance, it needs to be measured, analyzed, and optimized using tools, strategies, or algorithms. A multi-scale application's performance is measured at the end-user level, network level, and data center level. In this paper, the focus is on measuring the performance of multi-scale applications in terms of response time and throughput using the CloudAnalyst simulator. Multiple scenarios, like varying data size, varying number of concurrent users, and varying number of requests, are simulated for a multi-scale application and tested. The results obtained show that the response time and throughput of the application are affected mostly by data size, number of requests per user, and number of concurrent users. It is also observed that the most significant factor is data size, which, along with a huge volume of concurrent users, impacts the response time of a multi-scale application. Hence this research recommends that multi-scale data, i.e. data type and data size, plays a vital role in cloud-based multi-scale application performance measurement.


2019 ◽  
Vol 15 (1) ◽  
pp. 84-100 ◽  
Author(s):  
N. Thilagavathi ◽  
D. Divya Dharani ◽  
R. Sasilekha ◽  
Vasundhara Suruliandi ◽  
V. Rhymend Uthariaraj

Cloud computing has seen tremendous growth in recent days. As a result, there has been a great increase in the growth of data centers all over the world. These data centers consume a lot of energy, resulting in high operating costs. The imbalance in load distribution among the servers in a data center results in increased energy consumption. Server consolidation can be handled by migrating all virtual machines off underutilized servers. Migration causes performance degradation of the job, depending on the migration time and the number of migrations. Considering these aspects, the proposed clustering agent-based model improves energy saving by efficient allocation of VMs to the hosting servers, which reduces the response time for initial allocation. A Middle VM Migration (MVM) strategy for server consolidation minimizes the number of VM migrations. Further, extra resource requirements are randomized to cater to real-time scenarios that need more resources than the initial requirement. Simulation results show that the proposed approach reduces the number of migrations and the response time for user requests, and improves energy saving in the cloud environment.
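The abstract does not spell out the MVM selection rule. One plausible reading, sketched here purely as an illustration, is to pick the median-load VM on an underutilized host as the migration candidate, avoiding both the heaviest VM (costly to move) and the lightest (too little consolidation gain per migration).

```python
# Illustrative (assumed) reading of "middle VM" selection: the VM whose
# load is the median among the host's VMs. Not the paper's actual rule.

def middle_vm(vm_loads):
    """Return the VM id whose load is the median of the host's VMs."""
    ordered = sorted(vm_loads, key=vm_loads.get)
    return ordered[len(ordered) // 2]

host = {"vm-a": 0.10, "vm-b": 0.35, "vm-c": 0.60}
middle_vm(host)   # -> "vm-b"
```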


2019 ◽  
Vol 28 (2) ◽  
pp. 298-339
Author(s):  
Dima Mansour ◽  
Haidar Osman ◽  
Christian Tschudin

Load balancing is a mechanism to distribute client requests among several service instances. It enables resource utilization, lowers response time, and increases user satisfaction. In Named-Data Networking (NDN) and NDN-like architectures, load balancing becomes crucial when dynamic services are present: relying solely on forwarding strategies can overload certain service instances while others are underutilized, especially given the limited benefit of on-path caching when it comes to services. To understand the challenges and opportunities of load balancing in NDN, we analyze conventional load balancing in IP networks and three closely related fields in NDN: congestion control, forwarding strategies, and data center management. We identify three possible scenarios for load balancing in NDN: a facade load balancer, a controller for Interest queues, and router-based load balancing. These solutions use different metrics to identify the load on replicas, have different compliance levels with NDN, and place the load balancing functionality in different network components. From our findings, we propose and implement a new lightweight router-based load balancing approach called communicating vessels, and experimentally show how it reduces service response time and senses server capabilities without probing.


2021 ◽  
Vol 17 (1) ◽  
pp. 59-82
Author(s):  
Mostefa Hamdani ◽  
Youcef Aklouf

With the rapid development of data and IT technology, cloud computing is gaining more and more attention, and many users are attracted to this paradigm because of the reduction in cost and the dynamic allocation of resources. Load balancing is one of the main challenges in cloud computing systems. It redistributes workloads across computing nodes within the cloud to minimize computation time and to improve the use of resources. This paper proposes an enhanced 'Active VM load balancing algorithm' based on fuzzy logic and k-means clustering to reduce the data center transfer cost, the total virtual machine cost, the data center processing time, and the response time. The proposed method is realized using Java and the CloudAnalyst simulator. Besides, we have compared the proposed algorithm with other task scheduling approaches such as the Round Robin algorithm, the Throttled algorithm, the Equally Spread Current Execution load algorithm, Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO). As a result, the proposed algorithm performs better in terms of service rate and response time.
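The clustering half of the approach can be sketched with a tiny 1-D k-means over VM loads: group VMs by current load, then send new work to a VM near the lightest cluster center. This is a minimal sketch of the general technique (the paper's implementation is in Java, and its fuzzy-logic membership step is omitted here).

```python
# Minimal 1-D k-means over VM loads, plus a pick rule that favors the
# lightest cluster. Illustrative only; the fuzzy-logic stage is omitted.

def kmeans_1d(values, k, iters=20):
    lo, hi = min(values), max(values)
    # spread initial centers across the observed load range
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def pick_vm(vm_loads, centers):
    """Choose the VM whose load is closest to the lightest cluster center."""
    lightest = min(centers)
    return min(vm_loads, key=lambda vm: abs(vm_loads[vm] - lightest))

centers = kmeans_1d([0.1, 0.15, 0.8, 0.9], k=2)   # -> roughly [0.125, 0.85]
pick_vm({"vm-0": 0.12, "vm-1": 0.4, "vm-2": 0.8}, centers)   # -> "vm-0"
```

Clustering first means the balancer only compares candidates within the light group instead of scanning every VM on every request, which is one plausible source of the reported response-time gain.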


2021 ◽  
Vol 11 (1) ◽  
pp. 93-111
Author(s):  
Deepak Kapgate

The quality of cloud computing services is evaluated based on various performance metrics, of which response time (RT) is the most important. Nearly all cloud users demand that their application's RT be as low as possible, so to minimize overall system RT, the authors have proposed a request-response-time-prediction-based data center (DC) selection algorithm in this work. The proposed DC selection algorithm uses the results of an optimization function for DC selection formulated on M/M/m queuing theory, as the present cloud scenario roughly obeys the M/M/m queuing model. In a cloud environment, DC selection algorithms are assessed based on their performance in practice, rather than how they are supposed to be used. Hence, the described DC selection algorithm with various forecasting models is evaluated for minimum user application RT and RT prediction accuracy across various job arrival rates, real parallel workload types, and forecasting-model training-set lengths. Finally, the performance of the proposed DC selection algorithm with the optimal forecasting model is compared with other DC selection algorithms on various cloud configurations.
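The selection idea can be sketched in two steps: forecast each DC's arrival rate from recent history, then pick the DC with the smallest predicted response time. A plain moving average stands in for the paper's forecasting models, and the response time is approximated by the crude 1/(m*mu - lam) form rather than the full M/M/m expression the paper uses; all names and numbers are illustrative.

```python
# Hedged sketch: forecast per-DC arrival rate, then select the DC with the
# lowest predicted response time. Moving average and 1/(m*mu - lam) are
# simplifications of the paper's forecasting models and M/M/m formulation.

def forecast_rate(history, window=3):
    """Simple moving-average forecast of the next arrival rate."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def select_dc(dcs):
    """dcs: {name: (arrival_history, m_servers, mu_service_rate)}."""
    def predicted_rt(spec):
        history, m, mu = spec
        lam = forecast_rate(history)
        # an overloaded DC (lam >= m*mu) is never selected
        return float("inf") if lam >= m * mu else 1.0 / (m * mu - lam)
    return min(dcs, key=lambda name: predicted_rt(dcs[name]))

dcs = {
    "dc-east": ([4.0, 5.0, 6.0], 4, 2.0),   # forecast lam = 5.0, rt = 1/3
    "dc-west": ([1.0, 2.0, 3.0], 2, 2.0),   # forecast lam = 2.0, rt = 1/2
}
select_dc(dcs)   # -> "dc-east"
```

Note that the busier DC wins here because it has more spare service capacity (m*mu - lam), which is exactly the quantity a queuing-based objective optimizes rather than raw load.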

