Impact analysis of SYN flood DDoS attack on HAProxy and NLB cluster-based web servers

Author(s):  
Subhi Rafeeq Zeebaree ◽  
Karwan Fahmi Jacksi ◽  
Rizgar Ramadhan Zebari

Highly available internet services are now a basic expectation for most users, yet online services occasionally become inaccessible due to various threats and attacks. The Synchronization (SYN) flood Distributed Denial of Service (DDoS) attack is among the most commonly used and has a serious effect on public network services. This paper systematically illustrates the impact of this attack on commonly used cluster-based web servers. The performance of Internet Information Services 10.0 (IIS 10.0) on Windows Server 2016 and Apache 2 on Ubuntu Server 16.04 is evaluated, with Network Load Balancing (NLB) and High Availability Proxy (HAProxy) used as the web server load balancing methods in the Windows and Linux environments, respectively. Stability, efficiency, and responsiveness of the web servers are adopted as the evaluation metrics, and the average CPU usage and throughput of both mechanisms are measured in the proposed system. The results show that the IIS 10.0 cluster-based web servers are more responsive, efficient, and stable both with and without a SYN flood DDoS attack, and that the IIS 10.0 web server outperforms Apache 2 in terms of average CPU usage and throughput.
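The measurement scripts are not included in the abstract; as a rough illustration of how the two metrics used above (average CPU usage and throughput) can be sampled on a server node during a test run, the following minimal Python sketch relies on the third-party psutil library. The sampling window, interval, and network interface name are assumptions for illustration only.

```python
import time
import psutil  # third-party: pip install psutil

SAMPLE_SECONDS = 60      # assumed measurement window
INTERVAL = 1.0           # assumed sampling interval in seconds
IFACE = "eth0"           # assumed NIC carrying the HTTP traffic

def measure(duration=SAMPLE_SECONDS, interval=INTERVAL, iface=IFACE):
    cpu_samples = []
    start_bytes = psutil.net_io_counters(pernic=True)[iface]
    start = time.time()
    while time.time() - start < duration:
        # cpu_percent(interval=...) blocks for `interval` seconds and returns usage over it
        cpu_samples.append(psutil.cpu_percent(interval=interval))
    end_bytes = psutil.net_io_counters(pernic=True)[iface]
    elapsed = time.time() - start

    avg_cpu = sum(cpu_samples) / len(cpu_samples)
    sent_kbps = (end_bytes.bytes_sent - start_bytes.bytes_sent) * 8 / 1000 / elapsed
    return avg_cpu, sent_kbps

if __name__ == "__main__":
    cpu, kbps = measure()
    print(f"average CPU usage: {cpu:.1f}%  outgoing throughput: {kbps:.1f} kbit/s")
```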

Author(s):  
Ibrahim Mahmood Ibrahim ◽  
Siddeeq Y. Ameen ◽  
Hajar Maseeh Yasin ◽  
Naaman Omar ◽  
Shakir Fattah Kak ◽  
...  

Web services have grown rapidly and are accessed by many users, leading to massive traffic on the Internet. A single web server struggles to manage this traffic as the number of users grows: it becomes overloaded, response times increase, and bottlenecks appear, so the traffic must be shared among several servers. Load balancing technologies and server clusters are therefore effective methods for dealing with server bottlenecks. Load balancing techniques distribute the load among the servers in a cluster so that all web servers are evenly utilized. The motivation of this paper is to give an overview of the load balancing techniques used to enhance the efficiency of web servers in terms of response time, throughput, and resource utilization. Several algorithms proposed by researchers achieve good results; for example, the pending-job and IP hash algorithms achieve better performance.
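As a concrete illustration of one of the techniques surveyed, the sketch below shows a minimal IP hash assignment: hashing the client address so that the same client is consistently mapped to the same backend. The backend addresses are placeholders, not taken from the paper.

```python
import hashlib

# Placeholder backend pool; real deployments would read this from configuration.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def pick_backend(client_ip: str, backends=BACKENDS) -> str:
    """Map a client IP to a backend using a stable hash (IP hash policy)."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

# The same client always lands on the same server, which helps with session affinity.
assert pick_backend("203.0.113.7") == pick_backend("203.0.113.7")
```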


Author(s):  
Kadiyala Ramana ◽  
M. Ponnavaikko

With the rising popularity of web-based applications, cluster-based web servers have become a primary and essential resource in the infrastructure of the World Wide Web. Particularly for dynamic content and database-driven applications under heavy load, managing cluster performance is a serious task: without efficient mechanisms, an overloaded web server cannot deliver good performance. In clusters, this overload can be avoided by using load balancing mechanisms to share the load among the available web servers. Existing load balancing mechanisms designed for static content suffer substantial performance degradation under database-driven and dynamic content. The most serviceable load balancing approaches under specific conditions are Web Server Queuing (WSQ), Server Content based Queue (QSC), and Remaining Capacity (RC). Considering this, we propose an approximated Web Server Queuing mechanism for web server clusters, along with an analytical model for estimating the load of a web server. Requests are classified by service time, and the number of outstanding requests at each web server is tracked to achieve better performance; the approximated load of each web server is then used for load balancing. The experimental results illustrate the effectiveness of the proposed mechanism, which improves the mean response time, throughput, and drop rate of the server cluster.
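The paper's exact load model is not reproduced in the abstract; the sketch below only illustrates the general idea it describes: classifying requests by estimated service time, tracking outstanding requests per server, and dispatching to the server with the smallest approximated load. The class weights and server names are illustrative assumptions, not the authors' values.

```python
from collections import defaultdict

# Illustrative service-time weights per request class (not the paper's values).
CLASS_WEIGHT = {"static": 1.0, "dynamic": 4.0, "database": 9.0}

class ApproxLoadDispatcher:
    def __init__(self, servers):
        self.servers = list(servers)
        # outstanding[server][cls] = number of in-flight requests of that class
        self.outstanding = {s: defaultdict(int) for s in servers}

    def approx_load(self, server):
        """Approximate load = service-time-weighted count of outstanding requests."""
        return sum(CLASS_WEIGHT[c] * n for c, n in self.outstanding[server].items())

    def dispatch(self, request_class):
        target = min(self.servers, key=self.approx_load)
        self.outstanding[target][request_class] += 1
        return target

    def complete(self, server, request_class):
        self.outstanding[server][request_class] -= 1

# Example: expensive requests steer later traffic away from already-busy servers.
d = ApproxLoadDispatcher(["web1", "web2"])
print(d.dispatch("database"), d.dispatch("dynamic"))  # web1 web2
```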


Author(s):  
Hasta Triangga ◽  
Ilham Faisal ◽  
Imran Lubis

In IT networking, load balancing is used to share traffic between backend servers, with the aim of making load sharing effective and efficient. Load balancing relies on scheduling algorithms such as Static Round-Robin and Least-Connection. HAProxy is a load balancer that implements these techniques and runs on Linux operating systems. In this research, HAProxy uses 4 Nginx web servers as backends: HAProxy acts as a reverse proxy accessed by the clients, while the backend servers handle the HTTP requests. The experiment involves 20 client PCs performing HTTP requests simultaneously, using the Static Round-Robin and Least-Connection algorithms on the HAProxy load balancer alternately. With the Static Round-Robin algorithm, the average CPU usage over 1, 5, and 15 minutes was 0.1%, 0.25%, and 1.15%, respectively, with an average throughput of 14.74 kbps and an average total delay and jitter of 181.3 ms and 11.1 ms, respectively. With the Least-Connection algorithm, the average CPU usage over 1, 5, and 15 minutes was 0.1%, 0.3%, and 1.25%, with an average throughput of 14.66 kbps and an average total delay and jitter of 350.3 ms and 24.5 ms, respectively. This means the Static Round-Robin algorithm is more efficient than Least-Connection in this setup, as it produces greater throughput with less CPU load and less total delay.
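For readers unfamiliar with the two policies compared above, the sketch below illustrates their selection logic in plain Python: static round-robin cycles through the backends in a fixed order, while least-connection picks the backend with the fewest active connections. The backend names are placeholders, and this is not HAProxy's actual implementation.

```python
import itertools

BACKENDS = ["nginx1", "nginx2", "nginx3", "nginx4"]  # placeholder backend names

# Static round-robin: fixed rotation, ignores current load.
_rr = itertools.cycle(BACKENDS)
def round_robin() -> str:
    return next(_rr)

# Least-connection: pick the backend with the fewest active connections.
active = {b: 0 for b in BACKENDS}
def least_connection() -> str:
    target = min(active, key=active.get)
    active[target] += 1          # connection opened
    return target

def connection_closed(backend: str) -> None:
    active[backend] -= 1

# Round-robin spreads requests evenly regardless of how long each one takes;
# least-connection adapts when some requests hold connections longer.
print([round_robin() for _ in range(5)])   # nginx1..nginx4, then nginx1 again
```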


Author(s):  
Kannan Balasubramanian

Most merchant Web servers are contacted by completely unknown, often even anonymous, users. Thus they cannot generally protect themselves by demanding client authentication, but rather by employing carefully configured access control mechanisms. These range from firewall mechanisms and operating system security to secured execution environments for mobile code. Generally, all types of mechanisms that allow a client to execute a command on the server should be either completely disabled or provided only to a limited extent. Denial-of-service attacks have much more serious consequences for Web servers than for Web clients because, for servers, losing availability means losing revenue. Web publishing issues include anonymous publishing and copyright protection. Web servers must take special care to protect their most valuable asset: information, which is usually stored in databases and in some cases requires copyright protection.


2018 ◽  
Vol 7 (2.14) ◽  
pp. 5
Author(s):  
M A Mohamed ◽  
N Jamil ◽  
A F Abidin ◽  
M M Din ◽  
W N S W Nik ◽  
...  

Under ideal conditions, a server sees only normal network traffic and, occasionally, flash-event traffic triggered by some eye-catching or heart-breaking event. Both carry legitimate requests and content to the server. Flash-event traffic can be massive and damaging to server availability, but it can be remedied relatively easily by hardware solutions, such as adding extra processing power and memory, and by software solutions such as load balancing. In contrast, the collection of illegitimate requests produced during a distributed denial of service (DDoS) attack is intended to damage the server and is therefore dangerous, so prevention, detection, and reaction are essential when it occurs. In this paper, our main concern is detecting attacks by distinguishing them from legitimate traffic. We first categorize the parameters involved in the attacks in relation to their entities. We then examine different concepts and techniques from the information theory and image processing domains that take these parameters as input and decide whether an attack has occurred. We also point out the advantages of each technique, as well as possible weaknesses, for future work.
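As a simple example of the information-theoretic family of techniques surveyed here, the Shannon entropy of the source IP addresses observed in a traffic window is a commonly used indicator: many DDoS floods sharply change the source distribution, so a large deviation of the entropy from its baseline can be flagged. The window contents, baseline, and threshold below are illustrative assumptions.

```python
import math
from collections import Counter

def source_ip_entropy(packets):
    """Shannon entropy (in bits) of the source-IP distribution in one window."""
    counts = Counter(pkt["src"] for pkt in packets)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_attack(window, baseline_entropy, threshold=2.0):
    """Flag a window whose entropy deviates strongly from the learned baseline."""
    return abs(source_ip_entropy(window) - baseline_entropy) > threshold

# Spoofed floods with many random sources push entropy up, while a
# single-source flood collapses it toward zero; both deviate from the baseline.
normal = [{"src": f"10.0.0.{i % 5}"} for i in range(100)]
flood = [{"src": f"198.51.100.{i}"} for i in range(100)]
baseline = source_ip_entropy(normal)
print(looks_like_attack(flood, baseline))  # True under these illustrative numbers
```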


Respati ◽  
2020 ◽  
Vol 15 (2) ◽  
pp. 6
Author(s):  
Lukman Lukman ◽  
Melati Suci

Network security on a web server is essential to guarantee the integrity of the server and its service to users. Web servers are frequent targets of attacks that result in data damage; one of them is the SYN flood, a type of Denial of Service (DoS) attack that sends massive numbers of SYN requests to the web server. To strengthen web server network security, an Intrusion Detection System (IDS) is used to detect, monitor, and analyze attacks on the web server. Commonly used IDS software includes Snort and Suricata, each with its own advantages and disadvantages. The purpose of this study is to compare the two IDSs on a Linux operating system by launching SYN flood attacks against a web server; Snort and Suricata, installed on the web server, issue alerts when an attack occurs. To determine the comparison results, the reference parameters used are the number of attacks detected and the attack-detection effectiveness of the two IDSs. Keywords: Network Security, Web Server, IDS, SYN Flood, Snort, Suricata.
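Neither the Snort nor the Suricata rule sets are reproduced in the abstract; as a rough illustration of the kind of check such an IDS performs for SYN floods, the Python sketch below (using the third-party scapy library) counts SYN-without-ACK segments per destination in a short window and raises an alert when a threshold is exceeded. The window length and threshold are illustrative assumptions.

```python
import time
from collections import Counter
from scapy.all import sniff, IP   # third-party: pip install scapy

WINDOW_SECONDS = 10     # assumed observation window
SYN_THRESHOLD = 100     # assumed alert threshold per destination

syn_counts = Counter()
window_start = time.time()

def handle(pkt):
    """Tally SYN segments per destination; the BPF filter below already
    restricts capture to SYN-without-ACK packets."""
    global window_start
    if pkt.haslayer(IP):
        syn_counts[pkt[IP].dst] += 1
    if time.time() - window_start >= WINDOW_SECONDS:
        for dst, n in syn_counts.items():
            if n > SYN_THRESHOLD:
                print(f"ALERT: possible SYN flood toward {dst}: {n} SYNs in window")
        syn_counts.clear()
        window_start = time.time()

# Requires root privileges to capture packets.
sniff(filter="tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn", prn=handle, store=False)
```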


2014 ◽  
Vol 3 (4) ◽  
pp. 1-16 ◽  
Author(s):  
Harikesh Singh ◽  
Shishir Kumar

Load balancing introduces delays due to load relocation among web servers, and these delays depend on the design of the balancing algorithm and on the resources to be shared in large, widely distributed applications. Web server performance depends on efficient resource sharing and can be evaluated by the overall task completion time under a given load balancing algorithm. Every load balancing algorithm introduces some delay in task allocation among the web servers, yet still improves web server performance dynamically; as a result, the queue length of a web server and the average waiting time of tasks decrease at load balancing instants under zero, deterministic, and random delay types. In this paper, the effects of load balancing delay are analyzed in terms of two factors: the average queue length and the average waiting time of tasks. In the proposed Ratio Factor Based Delay Model (RFBDM), these factors are minimized, improving the functioning of the web server system based on the average task completion time of each web server node. Based on the ratio of average task completion times, the average queue length and average waiting time of the tasks allocated to the web servers are analyzed and simulated with Monte Carlo simulation. The simulation results show that the effects of delay, in terms of average queue length and average waiting time, are smaller with the proposed model than with existing delay models for web servers.
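The RFBDM itself is not specified in the abstract; the toy Monte Carlo sketch below only illustrates the kind of experiment described, in which tasks are dispatched to the server whose queue looked shortest when the load information was last refreshed, and the average queue length and waiting time are estimated for a chosen information delay. The arrival rate, service model, and delay values are illustrative assumptions, not the paper's.

```python
import math
import random

def poisson(lam):
    """Sample a Poisson-distributed arrival count (Knuth's method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate(num_servers=4, arrival_rate=3.0, info_delay=5, steps=100_000, seed=1):
    """Time-stepped simulation: dispatch on possibly stale queue-length information."""
    random.seed(seed)
    queues = [0] * num_servers       # actual queue lengths
    snapshot = list(queues)          # queue lengths as last seen by the balancer
    queue_area = waits = arrivals = 0

    for t in range(steps):
        if t % info_delay == 0:      # the balancer refreshes its view periodically
            snapshot = list(queues)
        for _ in range(poisson(arrival_rate)):
            target = min(range(num_servers), key=lambda i: snapshot[i])
            waits += queues[target]  # jobs ahead of this task ~ its waiting time in steps
            queues[target] += 1
            arrivals += 1
        for i in range(num_servers):  # each busy server finishes one job per step
            if queues[i]:
                queues[i] -= 1
        queue_area += sum(queues)

    return queue_area / (steps * num_servers), waits / max(arrivals, 1)

# Fresh versus stale load information: staleness inflates both metrics.
print("delay=1 :", simulate(info_delay=1))
print("delay=50:", simulate(info_delay=50))
```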


2011 ◽  
Vol 143-144 ◽  
pp. 346-349
Author(s):  
Li Na Zhang ◽  
Xue Si Ma

With the explosive growth of the Internet, contemporary web servers are susceptible to overloads during which their service deteriorates drastically and often amounts to a denial of service. Many companies address this problem using multiple web servers behind a front-end load balancer, and load balancing has proven to be an effective and scalable way of managing ever-increasing web traffic. Load balancing is one of the central problems to be solved in a parallel queueing web server system. To analyze it, this paper presents a queueing system with two web servers. First, the centralized load balancing system is considered; next, two routing policies are studied and the average response time and rejection rate are derived; finally, some of the results are discussed further.
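The abstract does not state which routing policies are analyzed; as standard queueing background (not necessarily the paper's exact model), if arrivals of rate λ are split evenly at random between two identical exponential servers of rate μ, each server behaves as an M/M/1 queue, and with a finite buffer of K jobs the rejection probability follows the M/M/1/K blocking formula:

```latex
% Per-server mean response time under random 50/50 routing (each server M/M/1):
E[T] = \frac{1}{\mu - \lambda/2}, \qquad \rho = \frac{\lambda/2}{\mu} < 1.
% Per-server rejection rate with a finite buffer of K jobs (M/M/1/K blocking):
P_{\text{block}} = \frac{(1-\rho)\,\rho^{K}}{1-\rho^{K+1}}.
```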


2019 ◽  
Vol 6 (2) ◽  
pp. 211
Author(s):  
Dodon Turianto Nugrahadi ◽  
Rudy Herteno ◽  
Muhammad Anshari

The rapid development of technology, the growth of web-based systems, and the development of microcontroller devices affect the ability of web servers to respond to client requests. This study analyzes the round-robin load balancing method and tuning, and their influence on the response time and the number of clients that can be handled by a microcontroller-based web server. In Stresstool testing, the response times were 2064, 2331.4, and 1869.2 ms without load balancing and 2270, 2306.2, and 2202 ms with load balancing, for 700 requests served by the web servers. It is concluded that the response times of web servers using load balancing are smaller than those of web servers without load balancing. Furthermore, with tuning, a response time of 3103.4 ms was obtained for 1100 requests, so tuning can reduce response time and increase the number of requests served. A significance calculation shows that the round-robin load balancing and tuning configurations have a significant effect on the response time and the number of clients handled on the microcontroller. Keywords: Web server, Raspberry, Load balancing, Response time, Stresstool, Jmeter.
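The stress-testing tools named above are not shown in the abstract; as a small illustration of how average response time under concurrent clients can be measured against a web server, the sketch below fires a fixed number of HTTP requests from a thread pool and averages the elapsed times. The URL, request count, and concurrency level are placeholders.

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests   # third-party: pip install requests

URL = "http://192.168.0.10/"   # placeholder address of the load-balanced server
TOTAL_REQUESTS = 700           # assumed request count, matching the study's figure
CONCURRENCY = 20               # assumed number of simultaneous clients

def timed_request(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return (time.perf_counter() - start) * 1000   # milliseconds

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_request, range(TOTAL_REQUESTS)))

print(f"average response time: {sum(latencies) / len(latencies):.1f} ms")
```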


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3820
Author(s):  
Abdul Ghafar Jaafar ◽  
Saiful Adli Ismail ◽  
Mohd Shahidan Abdullah ◽  
Nazri Kama ◽  
Azri Azmi ◽  
...  

Application-layer Distributed Denial of Service (DDoS) attacks are very challenging to detect. A shortfall at the application layer enables HTTP DDoS: request headers are not compulsory in an HTTP request, and the headers that are sent are editable, giving an attacker the advantage of crafting requests whose headers closely emulate those of a genuine client. To the best of the authors' knowledge, no recent study provides forged request header patterns obtained by executing current HTTP DDoS attack scripts. Moreover, current HTTP DDoS datasets are not publicly available, which makes it difficult for researchers to disclose false headers and forces them to rely on old datasets rather than more current attack patterns. Hence, this study conducted an analysis to disclose the forged request header patterns created by HTTP DDoS, and it successfully discloses eight such patterns. The analysis was executed using actual machines and eight real attack scripts capable of overwhelming a web server in a minimal duration. The request header patterns are explained and supported by a critical analysis of the results.
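The eight disclosed patterns are not listed in the abstract, so the sketch below only illustrates the general idea of header-based screening: comparing an incoming request's headers against simple consistency checks that many attack scripts fail. The specific checks are illustrative assumptions, not the paper's findings.

```python
def suspicious_headers(headers: dict) -> list:
    """Return a list of simple red flags found in an HTTP request's headers.

    These checks are illustrative only; real detection would be based on
    patterns learned from actual attack traffic.
    """
    h = {k.lower(): v for k, v in headers.items()}
    flags = []
    if "user-agent" not in h:
        flags.append("missing User-Agent")
    if "accept" not in h:
        flags.append("missing Accept header")
    if "mozilla" in h.get("user-agent", "").lower() and "accept-language" not in h:
        flags.append("browser-like User-Agent without Accept-Language")
    return flags

# Example: a bare scripted request versus a more complete browser-like request.
print(suspicious_headers({"Host": "example.com", "User-Agent": "Mozilla/5.0"}))
print(suspicious_headers({"Host": "example.com", "User-Agent": "Mozilla/5.0",
                          "Accept": "text/html", "Accept-Language": "en-US"}))
```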

