Towards an optimal controllers' response time in a software‐defined network‐enabled data center

Author(s): Mohamed Janati Idrissi, Mohammed Raiss El‐Fenni

Load balancing algorithms and service broker policies play a crucial role in determining the performance of cloud systems: user response time and data center request servicing time are largely determined by them. Several load balancing algorithms and service broker policies exist in the literature to perform data center allocation and virtual machine allocation for a given set of user requests. In this paper, we investigate the performance of the equally spread current execution (ESCE) load balancing algorithm with the closest data center (CDC) service broker policy in a cloud environment whose data centers have homogeneous and heterogeneous device characteristics, with heterogeneous communication bandwidth between the regions where the cloud data centers are deployed. We performed simulations using CloudAnalyst, an open-source tool, with different settings of device characteristics and bandwidth. The user response time and data center request servicing time are found to be considerably lower in the heterogeneous environment.
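The ESCE policy named above keeps a count of active tasks per virtual machine and hands each new request to the least-loaded VM. A minimal sketch (class and method names are illustrative, not CloudAnalyst's API):

```python
class ESCELoadBalancer:
    """Equally Spread Current Execution: assign each incoming request
    to the VM with the fewest currently active tasks."""

    def __init__(self, vm_ids):
        # Active-task count per VM, all starting idle.
        self.active = {vm: 0 for vm in vm_ids}

    def allocate(self):
        # Pick the VM with the smallest active count (ties -> first VM).
        vm = min(self.active, key=self.active.get)
        self.active[vm] += 1
        return vm

    def release(self, vm):
        # Called when a request finishes on the given VM.
        self.active[vm] -= 1
```

Three successive requests against two idle VMs land on vm0, vm1, vm0, keeping the spread equal.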


2019, Vol 16 (4), pp. 627-637
Author(s): Sanaz Hosseinzadeh Sabeti, Maryam Mollabgher

Goal: Load balancing policies map workloads onto virtual machines and aim to keep the load on every virtual machine roughly equal. In this research, a hybrid load balancing algorithm is proposed with the aim of reducing response time and processing time.
Design/Methodology/Approach: The proposed algorithm performs load balancing using a table containing status indicators for the virtual machines and the task list allocated to each virtual machine. Response time and data center processing time are evaluated for four algorithms: ESCE, Throttled, Round Robin, and the proposed algorithm.
Results: The overall response time and data processing time in the proposed algorithm's data center are shorter than those of the other algorithms. The overall response time of the proposed algorithm is 12.28% shorter than that of the Round Robin algorithm, 9.1% shorter than that of the Throttled algorithm, and 4.86% shorter than that of the ESCE algorithm.
Limitations of the investigation: Due to time and technical limitations, load balancing was not pursued with additional goals, such as lowering costs and increasing productivity.
Practical implications: Implementing the hybrid load balancing policy can improve response time and processing time. Load balancing distributes the traffic load properly among virtual machines and prevents bottlenecks, which increases customer responsiveness; improved response time, in turn, increases the satisfaction of cloud users and the productivity of computing resources.
Originality/Value: This research can help optimize existing algorithms and is a step towards further research in this area.
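The abstract describes a hybrid of a VM status table (Throttled-style availability flags) with per-VM task lists (ESCE-style load counts). The paper's exact algorithm is not given here, so the following is only a loose sketch of that combination, with all names and the capacity threshold illustrative:

```python
class HybridBalancer:
    """Illustrative hybrid: a status table marks VMs AVAILABLE/BUSY
    (Throttled-style) while task lists track per-VM load (ESCE-style)."""

    def __init__(self, vm_ids, capacity=2):
        self.capacity = capacity                    # assumed per-VM task limit
        self.tasks = {vm: [] for vm in vm_ids}      # task list per VM

    def status(self, vm):
        # Status-table entry derived from the VM's current task list.
        return "AVAILABLE" if len(self.tasks[vm]) < self.capacity else "BUSY"

    def allocate(self, task):
        free = [vm for vm in self.tasks if self.status(vm) == "AVAILABLE"]
        if not free:
            return None   # no VM available; request would be queued
        vm = min(free, key=lambda v: len(self.tasks[v]))  # least-loaded free VM
        self.tasks[vm].append(task)
        return vm
```

With two VMs of capacity 2, four tasks spread evenly and a fifth is rejected until a slot frees up.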


2013, Vol 336-338, pp. 2549-2554
Author(s): Jian Xiang Li, Xiang Zhen Kong, Yi Nan Lv

Power provisioning is becoming the most important constraint on data center development, so efficiently managing power consumption according to the data center's load is an urgent problem. In this paper, we present the Request-Response Hierarchical Power Management (RRHPM) model for data centers and, based on queuing theory, analyze the performance and constraints of two hierarchical implementation strategies for RRHPM. Numerical results show that the Equal Utilization Strategy has a lower average response time, can manage more service nodes under the same response-time threshold, and requires fewer power management nodes than the popular Equal Degree Strategy.
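The paper's analysis is queue-theoretic; the standard building block for response-time comparisons of this kind is the M/M/1 mean response time, T = 1/(μ − λ). A small helper (not from the paper) makes the relationship concrete:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda).
    Only defined for a stable queue, i.e. arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)
```

For example, a node serving 4 requests/s under a 2 requests/s load has a mean response time of 0.5 s; as the load approaches the service rate, response time grows without bound, which is why a response-time threshold caps how many service nodes one management node can handle.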


Author(s): Shu Zhang, Yu Han, Nishi Ahuja, Xiaohong Liu, Huahua Ren, ...

In recent years, the internet services industry has been developing rapidly. Accordingly, demand for compute and storage capacity continues to increase, and internet data centers are consuming more power than ever before to provide that capacity. Based on a Frost & Sullivan market survey, data centers across the globe now consume around 100 GWh, and this consumption is expected to increase 30% by 2016. With development expanding, IDC (Internet Data Center) owners realize that small improvements in efficiency, from architecture design to daily operations, yield large cost reductions over time. Cooling energy is a significant part of the daily operational expense of an IDC. One trend in this industry is to raise the operational temperature of the IDC, which means running IT equipment in an HTA (Higher Ambient Temperature) environment. This can be combined with cooling improvements such as water-side or air-side economizers used in place of traditional closed-loop CRAC (Computer Room Air Conditioner) systems. But the ambient inlet air temperature cannot simply be raised on its own without more effective cooling control and attention to thermal safety. An important trend in industry today is customized design of IT (Information Technology) equipment and IDC infrastructure by the cloud service provider. This trend brings an opportunity to consider IT and IDC together when designing an IDC, from the early design phase through the daily operation phase, when facing the challenge of improving efficiency. It also provides a chance to extract more benefit from higher operational temperatures. The advantages and key components of a customized rack server design include reduced power consumption, more thermal margin with less fan power, and accurate thermal monitoring. Accordingly, the IDC infrastructure can be redesigned to support high-temperature operation.
Raising the supply air temperature always means less thermal headroom for IT equipment: IDC operators have less time to respond to large power variations or IDC failures. This paper introduces a new solution called ODC (On-Demand Cooling), built on PTAS (Power Thermal Aware Solution) technology, to deal with these challenges. The ODC solution uses real-time thermal data from the IT equipment itself, rather than traditional ceiling-mounted sensors, as the key input to the cooling controls. This improves cooling-control accuracy, decreases response time, and reduces temperature variation. By establishing smart thermal operation with direct feedback, accurate control, and quick response, HTA can be achieved safely and with confidence. Results from real demo testing show that, with real-time thermal information, temperature oscillation and response time can be reduced effectively.
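The core idea of on-demand cooling is a feedback loop driven by server-inlet telemetry instead of a room sensor. The paper does not publish its control law, so the following proportional step is purely illustrative (all parameter names, the 27 °C target, and the gain are assumptions, not PTAS internals):

```python
def on_demand_cooling_step(inlet_temps_c, supply_c,
                           target_c=27.0, gain=0.5,
                           min_supply_c=15.0, max_supply_c=25.0):
    """One illustrative control step: adjust the supply-air setpoint from
    the hottest reported server inlet, not a ceiling-mounted sensor."""
    hottest = max(inlet_temps_c)           # real-time telemetry from IT equipment
    error = target_c - hottest             # positive -> thermal headroom remains
    new_supply = supply_c + gain * error   # proportional adjustment
    # Clamp to the CRAC's allowed supply-air range.
    return min(max_supply_c, max(min_supply_c, new_supply))
```

When the hottest inlet sits 1 °C below target, the setpoint creeps up (saving cooling energy); when an inlet overshoots, the setpoint drops immediately, which is the "direct feedback, quick response" behavior the abstract describes.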


2018, Vol 10 (3), pp. 157
Author(s): Ramadhika Dewanto, Rendy Munadi, Ridha Muldina Negara

Equal Cost Multipath Routing (ECMP) is a routing scheme in which all available paths between two nodes are utilized by statically mapping each path to the possible traffic flows between source and destination hosts in a network. This configuration can lead to congestion when two or more flows are transmitted over paths with overlapping links, despite the availability of less busy paths. Software Defined Networking (SDN) can make ECMP more dynamic by allowing the controller to monitor the available bandwidth of every link in the network in real time; the measured bandwidth then serves as the basis for deciding which path a flow will take. In this research, an SDN-based ECMP application that prevents network congestion is built by measuring the available bandwidth of each candidate path beforehand, so that different flows are transmitted on non-overlapping paths as much as possible. The proposed scheme increased throughput by 14.21% and decreased delay by 99% compared to standard ECMP when congestion occurs, and achieved a 75.2% lower load standard deviation than a round-robin load balancer.
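The selection step described above reduces to a widest-path choice: among the equal-cost paths, pick the one whose bottleneck link has the most available bandwidth, so new flows avoid links already carrying traffic. A sketch of that decision (data structures are illustrative; a real controller would pull link statistics via OpenFlow):

```python
def pick_path(paths, available_bw):
    """Bandwidth-aware ECMP selection (illustrative): among equal-cost
    paths, choose the one with the largest bottleneck bandwidth.

    paths        -- list of paths, each a tuple of link IDs
    available_bw -- dict mapping link ID -> measured available bandwidth
    """
    def bottleneck(path):
        # A path is only as wide as its narrowest link.
        return min(available_bw[link] for link in path)

    return max(paths, key=bottleneck)
```

With links l1..l4 offering 10, 5, 8, and 9 Mbps free, the path (l3, l4) wins (bottleneck 8 vs 5), steering the new flow away from the congested l2.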

