A Conceptual Framework Towards Implementing a Cloud-Based Dynamic Load Balancer Using a Weighted Round-Robin Algorithm

2020 ◽  
Vol 10 (2) ◽  
pp. 22-35 ◽  
Author(s):  
Sudipta Sahana ◽  
Tanmoy Mukherjee ◽  
Debabrata Sarddar

Cloud load balancing has become one of the most vital aspects of cloud computing and has captured the attention of IT organizations and business firms in recent years. One issue that needs to be addressed is how to serve clients' requests effectively among multiple servers using an appropriate load balancer. Previous survey papers discussed various issues of cloud load balancing and devised methods and techniques to address them, with the objectives of reducing processing time and response time and optimizing costs. In this article, we discuss an effective load balancing technique using the weighted round-robin algorithm, which can process client requests among multiple servers with minimal response time. Considering these aspects, a cloud-based dynamic load balancer is proposed to solve the load balancing problem in cloud infrastructure.
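The abstract does not include an implementation, but the interleaved selection at the heart of a weighted round-robin balancer can be sketched as follows (server names and weights are illustrative assumptions, not taken from the paper):

```python
# Weighted round-robin sketch: servers with a higher weight receive
# proportionally more requests. Names and weights are illustrative.
import itertools

def weighted_round_robin(servers):
    """Yield server names in proportion to their integer weights.

    servers: list of (name, weight) pairs.
    """
    pool = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(pool)

balancer = weighted_round_robin([("s1", 3), ("s2", 1)])
first_eight = [next(balancer) for _ in range(8)]
print(first_eight)  # s1 is chosen three times as often as s2
```

A production balancer would typically also smooth the interleaving (as in nginx's smooth weighted round robin) so that a heavy server is not hit several times in a row, but the proportionality shown here is the core idea.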

Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7342
Author(s):  
Bhavya Alankar ◽  
Gaurav Sharma ◽  
Harleen Kaur ◽  
Raul Valverde ◽  
Victor Chang

Cloud computing has emerged as the primary choice for developers building applications that require high-performance computing, and virtualization technology has helped distribute resources to multiple users. Increased use of cloud infrastructure has led to the challenge of developing a load balancing mechanism that provides optimized use of resources and better performance. Round robin and least connections load balancing algorithms have been developed to allocate user requests across a cluster of servers in the cloud in a time-bound manner. In this paper, we apply the round robin and least connections approaches to load balancing with HAProxy, virtual machine clusters, and web servers. The experimental results are visualized and summarized using Apache JMeter, and a comparative study of round robin and least connections is also presented. The experimental setup and results show that the round robin algorithm performs better than the least connections algorithm on all load balancer parameters measured in this paper.
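Neither algorithm is spelled out in the abstract; the least-connections policy compared here simply routes each new request to the server with the fewest active connections, as in this sketch (server names and counts are illustrative assumptions):

```python
# Least-connections sketch: each new request goes to the server that
# currently has the fewest active connections. Counts are illustrative.
active = {"web1": 4, "web2": 2, "web3": 7}

def pick_least_connections(connections):
    """Return the server with the fewest active connections."""
    return min(connections, key=connections.get)

target = pick_least_connections(active)
active[target] += 1  # the chosen server accepts the new request
print(target, active[target])
```

In HAProxy terms these two policies correspond to the `roundrobin` and `leastconn` balance modes; least connections tends to help when request durations vary widely, while round robin is cheaper and predictable for uniform requests.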


Author(s):  
Noha G. Elnagar ◽  
Ghada F. Elkabbany ◽  
Amr A. Al-Awamry ◽  
Mohamed B. Abdelhalim

Load balancing is crucial to ensure scalability and reliability, minimize response time and processing time, and maximize resource utilization in cloud computing. However, the load fluctuation that accompanies the distribution of a huge number of requests among a set of virtual machines (VMs) is challenging and requires effective, practical load balancers. In this work, a two listed throttled load balancer (TLT-LB) algorithm is proposed and simulated using the CloudAnalyst simulator. The TLT-LB algorithm modifies the conventional TLB algorithm to improve the distribution of tasks between different VMs. The performance of the TLT-LB algorithm has been evaluated against the TLB, round robin (RR), and active monitoring load balancer (AMLB) algorithms using two different configurations. Interestingly, the TLT-LB significantly balances the load between the VMs, reducing the loading gap between the most heavily loaded and the most lightly loaded VMs to 6.45%, compared to 68.55% for the TLB and AMLB algorithms. Furthermore, the TLT-LB algorithm considerably reduces the average response time and processing time compared to the TLB, RR, and AMLB algorithms.
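The paper's exact TLT-LB procedure is not reproduced in the abstract; the sketch below illustrates only the conventional throttled idea it modifies, kept as two lists (available and busy VMs), which is one plausible reading of "two listed". All names are illustrative assumptions.

```python
from collections import deque

# Hedged sketch of a throttled-style balancer kept as two lists:
# available VMs and busy VMs. This illustrates the general idea,
# not the paper's TLT-LB algorithm itself.
available = deque(["vm0", "vm1", "vm2"])
busy = set()

def allocate():
    """Assign the next available VM, or None if all are throttled."""
    if not available:
        return None          # throttle: queue or reject the request
    vm = available.popleft()
    busy.add(vm)
    return vm

def release(vm):
    """Return a finished VM to the available list."""
    busy.discard(vm)
    available.append(vm)

a = allocate()      # first available VM
b = allocate()
release(a)          # that VM becomes available again
c = allocate()
```

Keeping the available and busy sets separate means allocation never scans busy machines, which is the kind of bookkeeping change that can narrow the load gap between VMs.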


2021 ◽  
Vol 6 (1) ◽  
pp. 103
Author(s):  
Hardiyan Kesuma Ramadhan ◽  
Sukma Wardhana

In the digital era, and with the outbreak of the COVID-19 pandemic, all activities have moved online. If the number of users accessing a server exceeds the capacity of the IT infrastructure, the server goes down, so a load balancer is required to share the request traffic. This study compares four algorithms on the Citrix ADC VPX load balancer: round-robin, least connection, least response time, and least packet, using GNS3. The results for the response time and throughput parameters show that the least connection algorithm is superior, with a 33% reduction in response time and a 53% increase in throughput. On the service hits parameter, the round-robin algorithm has the most even traffic distribution, while least packet is superior in CPU utilization with a 76% reduction. Thus the algorithm with the best response time and throughput is least connection, and the algorithm with the best service hits is round-robin. For large scale implementation, the least connection algorithm is recommended with regard to response time and throughput; when even traffic distribution is the priority, the round-robin algorithm should be used.


Author(s):  
Jameela Abdulla Hassan ◽  
Fahad Al-Dosari

Abstract— Cloud computing is the sharing of processing and storage operations across remote servers used by many organizations and users, turning applications into services. Organizations can share data over the Internet, and users pay only for the resources they actually use. While cloud computing has disadvantages, cloudlets offer some advantages over it, including lower network latency and users retaining full ownership of their shared data. When the amount of data to be stored on the servers grows quickly, the workload on every resource grows too, so a load balancing algorithm is needed; load balancing is an important issue in the cloud environment. Load balancing is defined as a technique that divides the extra load equally across all resources to ensure that no single resource is overloaded, so cloud performance can be improved by an excellent load balancing strategy. We discuss the existing load balancing algorithms in cloud computing and propose an algorithm that improves the round robin algorithm, evaluated with the CloudAnalyst simulator on the factors of response time and processing time; the proposed algorithm was found to be better than round robin in both response time and processing time. Index Terms— Cloud Computing, CloudAnalyst, Load Balance, Mobile Cloud Computing, Cloudlet Networks.


2020 ◽  
Vol 10 (4) ◽  
pp. 173-178
Author(s):  
Alfian Nurdiansyah ◽  
Nugroho Suharto ◽  
Hudiono Hudiono

A server is a system that provides particular services on a computer network. A server has its own operating system, called a network operating system, and controls all access to the network it serves. To assist the server, a mirroring system is built, in which a server duplicates a data set as an exact copy of a server that provides various information; a mirror server, also called server synchronization, is a duplicate of a server. To increase server performance, a load balancer is needed. Load balancing is a technique for distributing internet traffic evenly across two connection paths; with load balancing, traffic runs more optimally, throughput is maximized, and overload on the connection paths is avoided. Iptables is used to filter IP addresses so that clients access the server in the nearest server zone, so load balancing combined with iptables can lighten the server's workload. A common problem is that when many clients access a server, the server becomes overloaded and its performance degrades due to heavy traffic, and the accessing clients experience long access times. The study of combining load balancing and iptables found that, with the round robin load balancing algorithm, the average delays obtained for server1 were 0.149 seconds and 0.19122 seconds, while the average delays for server2 were 0.161 seconds and 0.012 seconds.


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1091 ◽  
Author(s):  
Thabo Semong ◽  
Thabiso Maupong ◽  
Stephen Anokye ◽  
Kefalotse Kehulakae ◽  
Setso Dimakatso ◽  
...  

In the current technology-driven era, the use of devices that connect to the internet has increased significantly, and with it internet traffic. Some of the challenges that arise from the increased traffic include, but are not limited to, multiple clients on a single server (which can result in denial of service (DoS)), difficulty in network scalability, and poor service availability. One of the solutions proposed in the literature to mitigate these is the use of multiple servers with a load balancer. Despite their common use, load balancers have been shown to have disadvantages, such as being vendor-specific and non-programmable. To address these disadvantages and improve internet traffic handling, there has been a paradigm shift resulting in the introduction of software defined networking (SDN). SDN allows for load balancers that are programmable and provides the flexibility to design and implement one's own load balancing strategies. In this survey, we highlight the key elements of SDN and OpenFlow technology and their effect on load balancing. We provide an overview of the various load balancing schemes in SDN, organized around research challenges and existing solutions, and we give possible future research directions. A summary of emulators and mathematical tools commonly used in the design of intelligent SDN load balancing algorithms is provided. Finally, we outline the performance metrics used to evaluate the algorithms.


Analyzing big data and other valuable information is a significant process in the cloud, and big data processing utilizes a large number of resources to complete its tasks. Incoming tasks must therefore be allocated with better resource utilization to minimize the workload across the servers in the cloud. Conventional load balancing techniques fail to balance the load effectively among data centers and to meet the dynamic QoS requirements of big data applications. In order to improve load balancing with maximum throughput and minimum makespan, a Support Vector Regression based MapReduce Throttled Load Balancing (SVR-MTLB) technique is introduced. Initially, a large number of cloud user requests (data/files) are sent to the cloud server from different locations. After collecting the cloud user requests, the SVR-MTLB technique balances the workload of the virtual machines with the help of support vector regression. The load balancer uses an index table to track the virtual machines. The map function then performs regression analysis using an optimal hyperplane and reports one of three resource statuses for each virtual machine: overloaded, less loaded, or balanced. After finding a less loaded VM, the load balancer sends the ID of that virtual machine to the data center controller, which migrates tasks from an overloaded VM to a less loaded VM at run time. This in turn helps to minimize the response time. Experimental evaluation is carried out on throughput, makespan, migration time, and response time with respect to the number of tasks. The experimental results show that the proposed SVR-MTLB technique obtains higher throughput with lower response time, makespan, and migration time than the state-of-the-art methods.
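The regression model itself is not specified in enough detail to reproduce, but the classify-then-migrate step it drives, moving work from an overloaded VM to a less loaded one based on a per-VM status, can be sketched as follows (thresholds, VM names, and loads are illustrative assumptions; a simple threshold stands in for the SVR model):

```python
# Illustrative sketch of the classify-then-migrate step. The real
# SVR-MTLB derives each VM's status from support vector regression;
# here a fixed threshold stands in for that model (an assumption).
loads = {"vm0": 0.92, "vm1": 0.35, "vm2": 0.55}   # fraction of capacity

def classify(load, low=0.4, high=0.8):
    if load > high:
        return "overloaded"
    if load < low:
        return "less_loaded"
    return "balanced"

status = {vm: classify(load) for vm, load in loads.items()}

# Migrate one unit of work from an overloaded VM to a less loaded one,
# as the data center controller would do at run time.
overloaded = [vm for vm, s in status.items() if s == "overloaded"]
targets = [vm for vm, s in status.items() if s == "less_loaded"]
if overloaded and targets:
    src, dst = overloaded[0], targets[0]
    loads[src] -= 0.2
    loads[dst] += 0.2

print(status, loads)
```

The point of the regression step in the paper is that "overloaded" is predicted rather than merely observed, so migration can start before response times degrade.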


2019 ◽  
Vol 16 (4) ◽  
pp. 627-637
Author(s):  
Sanaz Hosseinzadeh Sabeti ◽  
Maryam Mollabgher

Goal: Load balancing policies often map workloads onto virtual machines and seek to achieve their goals by creating a nearly equal workload on every virtual machine. In this research, a hybrid load balancing algorithm is proposed with the aim of reducing response time and processing time. Design / Methodology / Approach: The proposed algorithm performs load balancing using a table containing the status indicators of the virtual machines and the list of tasks allocated to each virtual machine. Response time and processing time in the data centers are evaluated for four algorithms: ESCE, Throttled, Round Robin, and the proposed algorithm. Results: The overall response time and data processing time in the proposed algorithm's data center are shorter than those of the other algorithms, improving response time and data processing time in the data center. The overall response time results show that the proposed algorithm's response time is 12.28% better than the Round Robin algorithm, 9.1% better than the Throttled algorithm, and 4.86% better than the ESCE algorithm. Limitations of the investigation: Due to time and technical limitations, load balancing was not pursued with additional goals, such as lowering costs and increasing productivity. Practical implications: Implementing a hybrid load balancing policy can improve response time and processing time. Load balancing distributes the traffic load properly between virtual machines and prevents bottlenecks, which is effective in increasing customer responsiveness. Finally, improving response time increases the satisfaction of cloud users and the productivity of computing resources. Originality/Value: This research can be effective in optimizing existing algorithms and takes a step towards further research in this area.


Author(s):  
ZAINAL ABIDIN ◽  
Tutuk Indriyani ◽  
Danang Haryo Sulaksono

Client request traffic can be so heavy that a single server has difficulty handling the load. A load balancing system is therefore required: a technique that distributes the traffic load equally over two or more connection lines so that traffic runs optimally. Here, load balancing is implemented using Modified Weighted Round Robin-Retrieve Packet on Software-Defined Networking. For the average response time parameter with time limits of 0.1, 0.2, and 0.3 seconds, the scores were 0.016-0.04, 0.02-0.04, and 0.014-0.032 seconds, respectively. For the data transactions per second parameter with time limits of 0.1, 0.2, and 0.3 seconds, the scores were 49.614-111.306, 41.678-107.032, and 37.806-102.84 transactions per second, respectively.
