Analysis of a Mirror Server Using Load Balancing on a Local Area Network

2020 ◽  
Vol 10 (4) ◽  
pp. 173-178
Author(s):  
Alfian Nurdiansyah ◽  
Nugroho Suharto ◽  
Hudiono Hudiono

A server is a system that provides particular services on a computer network. A server has its own operating system, called a network operating system, and controls all access to the network it serves. To lighten the server's workload, a mirror server can be set up: a server that duplicates a data set, an exact copy of a server that provides information. A mirror server, also called a synchronized server, is a duplicate of another server. To further improve performance, a load balancer is needed. Load balancing is a technique for distributing internet traffic evenly across two connection paths; with load balancing applied, traffic flows more optimally, throughput is maximized, and overload on a connection path is avoided. Iptables is used to filter IP addresses so that each client accesses the server in the nearest server zone, so load balancing combined with iptables lightens the server's workload. A common problem is that when many clients access a single server, the server becomes overloaded and its performance degrades under the heavy traffic, and clients in turn experience long access times. From this study of combining load balancing with iptables, using the round-robin algorithm the average delays obtained for server1 were 0.149 seconds and 0.19122 seconds, and for server2 0.161 seconds and 0.012 seconds.
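The round-robin scheduling used in this study simply rotates incoming requests across the mirror servers. A minimal sketch in Python (the backend names are illustrative, not taken from the paper):

```python
from itertools import cycle

# Hypothetical mirror backends; names are illustrative only.
servers = ["server1", "server2"]

def round_robin(servers):
    """Yield backends in strict rotation, as a round-robin balancer would."""
    return cycle(servers)

rr = round_robin(servers)
assignments = [next(rr) for _ in range(4)]
# Requests alternate between the two mirrors:
# ['server1', 'server2', 'server1', 'server2']
```

In a real deployment the rotation is done by the balancer (or by iptables rules steering clients to the nearest zone), not by application code; this only illustrates the dispatch order.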

Author(s):  
Hasta Triangga ◽  
Ilham Faisal ◽  
Imran Lubis

In IT networking, load balancing is used to share traffic between backend servers; the idea is to make load sharing effective and efficient. Load balancing uses scheduling algorithms, including the static round-robin and least-connection algorithms. HAProxy is a load balancer that can perform load balancing and runs on Linux operating systems. In this research, HAProxy uses 4 Nginx web servers as backends. HAProxy acts as a reverse proxy accessed by the client, while the backend servers handle the HTTP requests. The experiment involves 20 client PCs performing HTTP requests simultaneously, using the static round-robin and least-connection algorithms on the HAProxy load balancer alternately. With the static round-robin algorithm, the average CPU usage over 1 minute, 5 minutes, and 15 minutes was 0.1%, 0.25%, and 1.15% respectively, with an average throughput of 14.74 kbps; the average total delay and jitter were 181.3 ms and 11.1 ms, respectively. With the least-connection algorithm, the averages over 1 minute, 5 minutes, and 15 minutes were 0.1%, 0.3%, and 1.25%, with an average throughput of 14.66 kbps; the average total delay and jitter were 350.3 ms and 24.5 ms, respectively. This means the static round-robin algorithm is more efficient than the least-connection algorithm, since it produces greater throughput with less CPU load and less total delay.
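The least-connection policy compared above routes each new request to the backend currently holding the fewest active connections. A minimal sketch (backend names and counts are illustrative, not from the experiment):

```python
# Current active-connection counts per backend (illustrative values).
active = {"nginx1": 3, "nginx2": 1, "nginx3": 2, "nginx4": 1}

def least_connection(active):
    """Pick the backend with the fewest active connections.

    Ties are broken by dictionary insertion order via min().
    """
    return min(active, key=active.get)

target = least_connection(active)   # 'nginx2' (tied with nginx4, listed first)
active[target] += 1                 # the chosen backend now carries one more connection
```

HAProxy's `leastconn` mode applies the same idea but updates the counts as connections open and close, which is why it can behave worse than round-robin under uniform short requests, as the results above suggest.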


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7342
Author(s):  
Bhavya Alankar ◽  
Gaurav Sharma ◽  
Harleen Kaur ◽  
Raul Valverde ◽  
Victor Chang

Cloud computing has emerged as the primary choice for developers building applications that require high-performance computing. Virtualization technology has helped in distributing resources to multiple users. Increased use of cloud infrastructure has led to the challenge of developing a load balancing mechanism that provides optimized use of resources and better performance. The round robin and least connections load balancing algorithms have been developed to allocate user requests across a cluster of servers in the cloud in a time-bound manner. In this paper, we apply the round robin and least connections approaches to load balancing with HAProxy, virtual machine clusters, and web servers. The experimental results are visualized and summarized using Apache JMeter, and a comparative study of round robin and least connections is also presented. The experimental setup and results show that the round robin algorithm performs better than the least connections algorithm on all load-balancer metrics measured in this paper.


2019 ◽  
Vol 6 (2) ◽  
pp. 211
Author(s):  
Dodon Turianto Nugrahadi ◽  
Rudy Herteno ◽  
Muhammad Anshari

The rapid development of technology, the increase in web-based systems, and the development of microcontroller devices have an impact on the ability of web servers to respond to client requests. This study aims to analyze the round-robin load balancing method and tuning, and their significant influence on response time and on the number of clients that can be handled by a web server on a microcontroller device. In Stresstool testing, the response times obtained were 2064, 2331.4, and 1869.2 ms without load balancing and 2270, 2306.2, and 2202 ms with load balancing, from 700 requests served by the web servers. It can be concluded that the response times of web servers using load balancing are smaller than those of web servers without load balancing. Furthermore, with tuning, a response time of 3103.4 ms was obtained from 1100 requests, so tuning can reduce response time and increase the number of requests served. A significance-level calculation shows that the tuning configuration has a significant effect on the response time and the number of clients on the microcontroller.

Keywords: Web server, Raspberry, Load balancing, Response time, Stresstool


2021 ◽  
Vol 5 (2) ◽  
pp. 226-233
Author(s):  
Anggi Hanafiah

In everyday life, everyone depends on various kinds of information, especially information produced by websites. Apart from solid programming, other resources such as the web server also need attention so that a website runs well. As demand for content and the number of visitors grow, websites often crash or become overloaded with requests, because a single server is still being used to serve the website. To overcome this problem, a load-balancing cluster can be applied, in which the web server's workload is distributed across several cluster nodes. Weighted round-robin is a scheduling algorithm that keeps the server workload balanced by assigning a weight to each cluster node.
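Under weighted round-robin, each cluster node receives requests in proportion to its assigned weight. A minimal sketch of the naive expansion (node names and weights are illustrative, not from the paper):

```python
# Each node's weight says how many requests it serves per scheduling cycle.
weights = {"node1": 3, "node2": 1}

def weighted_round_robin(weights):
    """Yield nodes so that each appears `weight` times per cycle."""
    while True:
        for node, w in weights.items():
            for _ in range(w):
                yield node

wrr = weighted_round_robin(weights)
first_cycle = [next(wrr) for _ in range(4)]
# node1 serves three requests for every one served by node2:
# ['node1', 'node1', 'node1', 'node2']
```

Note that this naive form sends each node its share in a burst; production balancers (e.g. nginx) use a smooth variant that interleaves the nodes while preserving the same per-cycle proportions.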


2020 ◽  
Vol 10 (2) ◽  
pp. 22-35 ◽  
Author(s):  
Sudipta Sahana ◽  
Tanmoy Mukherjee ◽  
Debabrata Sarddar

Cloud load balancing has become one of the most vital aspects of cloud computing and has captured the attention of IT organizations and business firms in recent years. One issue that needs to be addressed is effectively serving clients' requests among multiple servers using an appropriate load balancer. Previous survey papers discussed various issues of cloud load balancing and devised methods and techniques to address them, with the objectives of reducing processing time and response time and optimizing costs. In this article, we discuss an effective load balancing technique using the weighted round-robin algorithm, which can process client requests among multiple servers with minimal response time. Considering these aspects, a cloud-based dynamic load balancer is used to solve the problem of load balancing in cloud infrastructure.


2021 ◽  
Vol 6 (1) ◽  
pp. 103
Author(s):  
Hardiyan Kesuma Ramadhan ◽  
Sukma Wardhana

In the digital era, and with the outbreak of the COVID-19 pandemic, all activities have moved online. If the number of users accessing a server exceeds what the IT infrastructure can handle, the server goes down, so a load balancer is required to share the request load. This study compares four algorithms on a Citrix ADC VPX load balancer, using GNS3: round-robin, least connection, least response time, and least packets. Tests of the response time and throughput parameters show that the least connection algorithm is superior, with a 33% reduction in response time and a 53% increase in throughput. On the service-hits parameter, the round-robin algorithm gives the most even traffic distribution, while least packets is superior in CPU utilization, with a 76% reduction. Thus the algorithm with the best response time and throughput is least connection, and the algorithm with the most even service hits is round-robin. For large-scale implementation, the least connection algorithm is recommended where response time and throughput matter; when the most even distribution is the priority, round-robin should be used.
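The relative improvements reported above (33% lower response time, 53% higher throughput) are straightforward percentage changes against the baseline algorithm. A small sketch of the arithmetic (the raw measurements below are illustrative, not the paper's data):

```python
def pct_change(baseline, value):
    """Percentage change of `value` relative to `baseline` (negative = reduction)."""
    return (value - baseline) / baseline * 100.0

# Illustrative raw measurements: round-robin baseline vs least connection.
rr_response_ms = 300.0
lc_response_ms = 201.0

reduction = -pct_change(rr_response_ms, lc_response_ms)  # ≈ 33% lower response time
```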


2021 ◽  
Vol 13 (2) ◽  
pp. 54
Author(s):  
Yazhi Liu ◽  
Jiye Zhang ◽  
Wei Li ◽  
Qianqian Wu ◽  
Pengmiao Li

A data center undertakes ever more background services for various applications, and the data flows transmitted between the nodes of data center networks (DCNs) increase accordingly. At the same time, the traffic of each link in a DCN changes dynamically over time. Flow scheduling algorithms can improve the distribution of data flows among the network links and thus the balance of link loads in a DCN. However, most current load balancing works make flow scheduling decisions for the current links on the basis of past link flow conditions. This prevents existing link scheduling methods from making optimal decisions when scheduling data flows among the network links in a DCN. This paper proposes a predictive link load balance routing algorithm for a DCN based on residual networks (ResNet), i.e., the link load balance route (LLBR) algorithm. The LLBR algorithm predicts the occupancy of the network links in the next duty cycle using the ResNet architecture, and the optimal traffic route is then selected according to the predicted network environment. The LLBR algorithm, round-robin scheduling (RRS), and weighted round-robin scheduling (WRRS) were run in the same experimental environment. Experimental results show that compared with WRRS and RRS, the LLBR algorithm can reduce the transmission time by approximately 50%, reduce the packet loss rate from 0.05% to 0.02%, and improve the bandwidth utilization by 30%.
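The core predictive idea can be sketched as: given forecast per-link occupancies for the next duty cycle, route a flow over the path whose most-loaded link is lightest. This is a hedged illustration only; the paper's LLBR produces its forecasts with a ResNet predictor, whereas here they are hard-coded, and the path/link names are hypothetical:

```python
# Candidate paths and the links they traverse (illustrative topology).
paths = {
    "path_a": ["l1", "l2"],
    "path_b": ["l3", "l4"],
}

# Predicted link occupancy for the next duty cycle (0.0-1.0, illustrative;
# LLBR would obtain these from a trained ResNet model).
predicted_occupancy = {"l1": 0.7, "l2": 0.4, "l3": 0.3, "l4": 0.5}

def pick_path(paths, predicted_occupancy):
    """Choose the path minimizing the predicted bottleneck (max link load)."""
    return min(paths, key=lambda p: max(predicted_occupancy[l] for l in paths[p]))

best = pick_path(paths, predicted_occupancy)
# path_a's bottleneck is 0.7, path_b's is 0.5, so path_b is chosen.
```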


2020 ◽  
Vol 4 (2) ◽  
pp. 85 ◽  
Author(s):  
Taufik Hidayat ◽  
Yasep Azzery ◽  
Rahutomo Mahardiko

Load balancing is much needed on a network that is active and widely accessed by users, since imbalances can otherwise occur. The round robin (RR) algorithm can be applied to network load balancing because it is a simple process-scheduling algorithm that provides an efficient workflow. The authors use the Systematic Literature Review (SLR) method, in which selection criteria are applied during the paper search to match the topic in question. SLR is divided into five stages: formalization of questions, criteria selection, selection of sources, selection of search results, and quality assessment. By using SLR, it is expected that papers meeting the criteria and quality requirements can be found.


Author(s):  
M. Chaitanya ◽  
K. Durga Charan

Load balancing makes cloud computing more capable and can increase client satisfaction. At present, cloud computing is among the foremost systems offering data storage at very low cost, available at all times over the internet. However, it still has critical problems such as security, load management, and fault tolerance. Load balancing in the cloud computing environment has a large impact on performance. The algorithm applies game theory to the load balancing process to extend its capabilities in the public cloud environment. This text presents an extended load balancing model for the public cloud, based on a cloud-partitioning proposal with a switch mechanism that selects different strategies for different situations.


Author(s):  
Subhranshu Sekhar Tripathy ◽  
Diptendu Sinha Roy ◽  
Rabindra K. Barik

Nowadays, cities are intended to change into smart cities. According to recent studies, the use of data from contributors and physical objects in many cities plays a key role in the transformation towards a smart city. The smart-city standard is characterized by omnipresent computing resources for observing and critically controlling the city's framework, healthcare management, environment, transportation, and utilities. Mist computing is a computing paradigm that performs IoT applications at the edge of the network. To maintain Quality of Service (QoS), it is imperative to employ context-aware computing together with fog computing. In this article, the authors implement an optimization strategy applying a dynamic resource allocation method based on a genetic algorithm and reinforcement learning, in combination with a load balancing procedure. The proposed model comprises four layers: an IoT layer, a Mist layer, a Fog layer, and a Cloud layer. The authors propose a load balancing technique called M2F balancer, which continuously regulates the traffic in the network, accumulates information about each server's load, and transfers incoming queries, distributing them equally among the available servers using the dynamic resource allocation method. To validate the efficacy of the proposed algorithm, makespan, resource utilization, and the degree of imbalance (DOI) are considered as scheduling parameters. The proposed method is compared with least count, round robin, and weighted round robin. The results demonstrate that the solution enhances QoS in the mist-assisted cloud environment by maximizing resource utilization and minimizing makespan. M2FBalancer is therefore an effective method of using resources efficiently while ensuring uninterrupted service, and it improves performance even at peak times.
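Of the scheduling parameters listed, the degree of imbalance (DOI) is a standard metric: the spread between the most- and least-loaded servers' completion times relative to the mean. A minimal sketch, assuming the common formulation DOI = (T_max - T_min) / T_avg (the paper may define it differently; the values are illustrative):

```python
def degree_of_imbalance(times):
    """DOI = (T_max - T_min) / T_avg over per-server completion times."""
    avg = sum(times) / len(times)
    return (max(times) - min(times)) / avg

# Three servers finishing in 10, 12, and 8 time units (illustrative):
doi = degree_of_imbalance([10.0, 12.0, 8.0])  # (12 - 8) / 10 = 0.4
```

A perfectly balanced schedule gives DOI = 0; larger values mean some servers sit idle while others remain loaded, which is what M2F balancer aims to minimize.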

