Implementation of round robin policy in DNS for thresholding of distributed web server system

Author(s): G. M. Borkar, M. A. Pund, P. Jawade
SinkrOn, 2019, Vol 3 (2), pp. 147
Author(s): Desmulyati Desmulyati, Muhammad Rizki Perdana Putra

Google Cloud Platform, developed by Google LLC, is one of the providers of cloud computing services. PT Lintas Data Indonesia, a vendor and distributor of technology devices, needs a web server to publicize its products. At present, its web server still runs on HostGator hosting with a limited-resource package, the current setup cannot implement load balancing or failover, and ping latency to the HostGator web server is quite high, up to 200 ms. To improve web server performance and make load balancing and failover possible, migration to the Google Cloud Platform environment is proposed as a solution to these problems. An advantage of Google Cloud Platform is that the rented web servers take the form of Virtual Private Servers (VPS), which are easy to maintain and to upgrade when the service grows. Adding three web servers to a cluster behind an HAProxy server makes PT Lintas Data Indonesia's web service more reliable in handling requests, with round-robin load balancing and web server failover; with HAProxy, handling of the earlier latency problem improves by up to 150%, bringing latency to around 30 ms.
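The paper's actual HAProxy configuration is not given in the abstract. Purely as an illustrative sketch of round-robin load balancing with failover across three backends, the Python snippet below rotates requests over a backend list and skips servers marked down; the backend addresses and the simple up/down marking are assumptions for the example, not details from the paper.

    from itertools import cycle

    # Hypothetical backend addresses standing in for the three clustered web servers.
    BACKENDS = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]

    class RoundRobinBalancer:
        """Round-robin selection with simple failover: skip backends marked down."""

        def __init__(self, backends):
            self.backends = list(backends)
            self.healthy = {b: True for b in backends}
            self._cycle = cycle(self.backends)

        def mark_down(self, backend):
            self.healthy[backend] = False

        def mark_up(self, backend):
            self.healthy[backend] = True

        def next_backend(self):
            # Try each backend at most once per call; fail over past unhealthy ones.
            for _ in range(len(self.backends)):
                candidate = next(self._cycle)
                if self.healthy[candidate]:
                    return candidate
            raise RuntimeError("no healthy backend available")

    if __name__ == "__main__":
        lb = RoundRobinBalancer(BACKENDS)
        lb.mark_down("10.0.0.12:80")      # simulate one web server failing
        for _ in range(4):
            print(lb.next_backend())       # requests rotate over the remaining servers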


2018, Vol 24 (3), pp. 2118-2121
Author(s): Sungwook Yoon, Andrew G Kim, Hyenki Kim

2019, Vol 8 (1), pp. 1-5
Author(s): Marvin Chandra Wijaya

Web processing performance needs to keep up with the growth of internet usage, and one way to achieve this is to use a cache on the web proxy server. This study examines the implementation of a proxy cache replacement algorithm to increase cache hits on the proxy server. The study was conducted by building a clustered, distributed web server system with eight web server nodes. The system improved latency by 90% and increased throughput by a factor of 5.33.
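The abstract does not name which cache replacement policy was implemented. As a hedged illustration of the general idea, the sketch below shows LRU replacement, one common policy for proxy caches; the capacity and cached URLs are made up for the example.

    from collections import OrderedDict

    class LRUProxyCache:
        """Minimal LRU cache: evicts the least recently used entry when full."""

        def __init__(self, capacity):
            self.capacity = capacity
            self._store = OrderedDict()          # URL -> cached response body

        def get(self, url):
            if url not in self._store:
                return None                      # cache miss: proxy must fetch from origin
            self._store.move_to_end(url)         # mark as most recently used
            return self._store[url]

        def put(self, url, body):
            if url in self._store:
                self._store.move_to_end(url)
            self._store[url] = body
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)  # evict the least recently used entry

    cache = LRUProxyCache(capacity=2)
    cache.put("/index.html", b"<html>...</html>")
    cache.put("/logo.png", b"...")
    cache.get("/index.html")                     # refreshes /index.html
    cache.put("/style.css", b"...")              # evicts /logo.png, the LRU entry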


2021, Vol 3 (3), pp. 368-375
Author(s): Aep Setiawan, Rifa Ade Rahmah

The College of Vocational Studies of IPB University (SV-IPB) uses a client-server system as its information technology architecture. The server provides several services that assist the teaching and learning process at the IPB Vocational School. The application used to provide these services is the Modular Object-Oriented Dynamic Learning Environment (MOODLE), which is used for e-learning. SV-IPB provides two virtual machines, used as a web server and a database server. Relying on a single web server to serve requests is less reliable, because there is no web server to back it up, so the service stops when it fails; in other words, a single web server is not highly available. To overcome this problem, cluster technology can be used to group several web servers at SV-IPB. The web server clustering technology used is the Gluster File System (GlusterFS) with a distributed-replicated volume. Based on the tests carried out, this project solves the problem described earlier: when one web server is down, another web server can take over, so the client request process does not stop. In addition, the clustering setup supports load balancing across the web servers, which reduces the load on each server because requests are sent alternately between them.
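GlusterFS performs placement and replication itself through its volume layout, so no application code is involved. Purely as a conceptual sketch of what a distributed-replicated volume does, the snippet below mimics assigning each file to one replica set by hashing its path and then copying it to every brick in that set; the brick names and replica count are assumptions, not the SV-IPB configuration.

    import hashlib

    # Hypothetical bricks grouped into replica sets, as in a distributed-replicated volume:
    # a file lands on exactly one replica set and is copied to every brick in that set.
    REPLICA_SETS = [
        ["web1:/data/brick1", "web2:/data/brick1"],
        ["web3:/data/brick2", "web4:/data/brick2"],
    ]

    def place_file(path):
        digest = hashlib.md5(path.encode()).hexdigest()
        index = int(digest, 16) % len(REPLICA_SETS)   # distribute by hash of the file path
        return REPLICA_SETS[index]                     # replicate to all bricks in the set

    for f in ["moodle/config.php", "moodledata/cache/lang.en"]:
        print(f, "->", place_file(f))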


Author(s): Hasta Triangga, Ilham Faisal, Imran Lubis

In IT networking, load balancing is used to share traffic between backend servers; the idea is to make load sharing effective and efficient. Load balancing relies on scheduling algorithms, including the Static round-robin and Least-connection algorithms. HAProxy is a load balancer that can perform this load balancing and runs on Linux operating systems. In this research, HAProxy uses four Nginx web servers as backends: HAProxy acts as a reverse proxy accessed by the client, while the backend servers handle the HTTP requests. The experiment involves 20 client PCs that issue HTTP requests simultaneously, using the Static round-robin and Least-connection algorithms on the HAProxy load balancer alternately. With the Static round-robin algorithm, the average CPU usage over 1, 5, and 15 minutes is 0.1%, 0.25%, and 1.15% respectively, with an average throughput of 14.74 kbps; the average total delay and jitter are 181.3 ms and 11.1 ms, respectively. With the Least-connection algorithm, the average CPU usage over 1, 5, and 15 minutes is 0.1%, 0.3%, and 1.25%, with an average throughput of 14.66 kbps; the average total delay and jitter are 350.3 ms and 24.5 ms, respectively. This means the Static round-robin algorithm is more efficient than the Least-connection algorithm, since it produces higher throughput with less CPU load and lower total delay.
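As an illustrative sketch of the two scheduling rules compared in the experiment (server names and connection counts are invented), the snippet below contrasts static round-robin, which rotates through the backends in a fixed order regardless of load, with least-connection, which picks the backend that currently has the fewest active connections.

    from itertools import cycle

    SERVERS = ["nginx1", "nginx2", "nginx3", "nginx4"]   # the four backend web servers
    active_connections = {s: 0 for s in SERVERS}

    _rr = cycle(SERVERS)

    def pick_round_robin():
        # Static round-robin: fixed rotation, ignores current load.
        return next(_rr)

    def pick_least_connection():
        # Least-connection: choose the backend with the fewest active connections.
        return min(SERVERS, key=lambda s: active_connections[s])

    def dispatch(picker):
        server = picker()
        active_connections[server] += 1                  # a new connection is opened
        return server

    for _ in range(6):
        print("round-robin ->", dispatch(pick_round_robin))

    active_connections.update({s: 0 for s in SERVERS})   # reset before the second policy
    active_connections["nginx1"] = 5                     # pretend nginx1 is already busy
    print("least-connection ->", dispatch(pick_least_connection))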

