Comparative Performance Analysis of Apache and Nginx Web Servers Using Httperf on a News Portal (Case Study: beritalinux.com) [Analisis Perbandingan Kinerja Web Server Apache dan Nginx Menggunakan Httperf Pada Portal Berita (Studi Kasus beritalinux.com)]

Author(s):  
Intan Ferina Irza ◽  
Zulhendra Zulhendra ◽  
Efrizon Efrizon

The internet is developing rapidly in the era of globalization: anyone, anywhere, can access it given adequate tools and connectivity. Accessing a web page is a two-way relationship between client and server, and good web server performance directly affects the quality of that relationship. Two web servers dominate current use: Apache and Nginx. A content provider is expected to meet all of its users' needs, especially regarding the performance of the server itself. To establish how the Apache and Nginx web servers compare when serving user requests, each server's parameters must be tested and compared. The authors therefore analyze and compare the performance of both web servers, Apache and Nginx, so that users can choose the better one. The comparison covers only the throughput, connection, request, reply, and error parameters, assigning a load for each test, performed virtually on the content of beritalinux.com. The tests show that Nginx was superior to Apache in connecting to and responding to the data requested by the client. Based on these results, the administrator of beritalinux.com is advised to use the Nginx web server for better website performance.

Keywords: Analysis, Performance, Web Server, Apache, Nginx, HTTPERF
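For reference, the parameters the authors compare (connection, request, reply, throughput, errors) all appear in httperf's textual summary. A small Python sketch that pulls a few of those figures out of a sample report (the sample follows httperf's usual output format, but the numbers are illustrative, not the authors' measurements):

```python
import re

SAMPLE = """\
Connection rate: 25.0 conn/s (40.0 ms/conn, <=1 concurrent connections)
Request rate: 25.0 req/s (40.0 ms/req)
Reply rate [replies/s]: min 24.8 avg 25.0 max 25.2 stddev 0.2 (8 samples)
Net I/O: 183.3 KB/s (1.5*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
"""

def parse_httperf(report: str) -> dict:
    """Extract the headline metrics from an httperf text report."""
    metrics = {}
    m = re.search(r"Request rate:\s+([\d.]+)\s+req/s", report)
    if m:
        metrics["request_rate"] = float(m.group(1))
    m = re.search(r"Reply rate \[replies/s\]:.*avg\s+([\d.]+)", report)
    if m:
        metrics["reply_rate_avg"] = float(m.group(1))
    m = re.search(r"Errors: total\s+(\d+)", report)
    if m:
        metrics["errors"] = int(m.group(1))
    return metrics

print(parse_httperf(SAMPLE))
```

Running the same extraction over reports from each server gives the side-by-side figures the comparison is based on.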

Author(s):  
Ibrahim Mahmood Ibrahim ◽  
Siddeeq Y. Ameen ◽  
Hajar Maseeh Yasin ◽  
Naaman Omar ◽  
Shakir Fattah Kak ◽  
...  

Today, web services have increased rapidly and are accessed by many users, generating massive traffic on the Internet. The web server suffers under this load, and managing the total traffic becomes challenging as the number of users grows: the server becomes overloaded, response times rise, and bottlenecks appear, so this massive traffic must be shared among several servers. Load balancing technologies and server clusters are therefore potent methods for dealing with server bottlenecks. Load balancing techniques distribute the load among the servers in a cluster so that all web servers stay balanced. The motivation of this paper is to give an overview of the load balancing techniques used to enhance the efficiency of web servers in terms of response time, throughput, and resource utilization. Researchers have proposed and evaluated different algorithms with good results; for example, the pending-job and IP-hash algorithms achieve better performance.
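As an illustration of one of the algorithms mentioned, a minimal IP-hash balancer in Python (the backend names are hypothetical; production implementations, such as Nginx's `ip_hash` directive, additionally handle server weights and failover):

```python
import hashlib

SERVERS = ["web1", "web2", "web3"]  # hypothetical backend pool

def ip_hash(client_ip: str, servers=SERVERS) -> str:
    """Map a client IP to a backend deterministically: the same
    client always lands on the same server, keeping sessions sticky."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

print(ip_hash("203.0.113.7"))
```

The determinism is the point: unlike round-robin, IP hash preserves session affinity without shared state between balancer nodes.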


First Monday ◽  
1997 ◽  
Author(s):  
Jussara M. Almeida ◽  
Virgilio Almeida ◽  
David J. Yates

Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes WebMonitor, a tool for evaluating and understanding server performance, and presents new results for realistic workloads. WebMonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. WebMonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of WebMonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workloads use file size distributions with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents containing text or images. Our results show that in a Web server saturated by client requests, up to 90% of the time spent handling HTTP requests is spent in the kernel. These results emphasize the important role of operating system implementation in determining Web server performance, and suggest the need for new operating system implementations designed to perform well on Web servers.
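The user-space/kernel split that WebMonitor reports can be illustrated with the standard process accounting the OS already exposes (this is not WebMonitor itself, just the same user-vs-system distinction, read via Python):

```python
import os

def cpu_split():
    """Return (user_seconds, kernel_seconds) consumed so far by this
    process: the same user-space vs kernel accounting WebMonitor reports."""
    t = os.times()
    return t.user, t.system

# Burn a little user-space CPU, then inspect the split.
sum(i * i for i in range(200_000))
user, kernel = cpu_split()
print(f"user={user:.3f}s kernel={kernel:.3f}s")
```

For a saturated web server, the paper's finding is that the `kernel` share of this split dominates, reaching 90% of request-handling time.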


Organizational web servers reflect the public image of an organization and serve web pages/information to organizational clients via web browsers using the HTTP protocol. Some web server software may contain web applications that enable users to perform high-level tasks, such as querying a database and delivering the output through the web server to the client browser as an HTML file. Hackers constantly try to exploit the various vulnerabilities or flaws in web servers and web applications, which can pose a serious threat to an organization. This chapter explains the importance of protecting web servers and applications, along with the different tools used for analyzing the security of web servers and web applications. The chapter also introduces the different web attacks carried out by an attacker either to gain illegal access to web server data or to reduce the availability of web services. These attacks include denial-of-service (DoS) attacks, buffer overflow exploits, website defacement with SQL injection (SQLi) attacks, cross-site scripting (XSS) attacks, remote file inclusion (RFI) attacks, directory traversal attacks, phishing attacks, brute force attacks, source code disclosure attacks, session hijacking, parameter/form tampering, man-in-the-middle (MITM) attacks, HTTP response splitting attacks, cross-site request forgery (XSRF), lightweight directory access protocol (LDAP) attacks, and hidden field manipulation attacks. The chapter describes different web server and web application testing tools and vulnerability scanners, including Nikto, Burp Suite, Paros, IBM AppScan, Fortify, Acunetix, and ZAP. Finally, the chapter discusses countermeasures to be implemented when designing any web application for any organization in order to reduce the risk.
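As a brief illustration of one listed attack and its countermeasure, a Python/sqlite3 sketch contrasting a string-spliced query (vulnerable to SQLi) with a parameterized one (the table and payload are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: attacker input is spliced into the SQL text,
    # so name = "' OR '1'='1" matches every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats name strictly as data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(len(find_user_unsafe(payload)))  # 1 -- the whole table leaks
print(len(find_user_safe(payload)))    # 0 -- no user literally named that
```

Parameterized queries are the primary countermeasure the chapter's SQLi discussion points toward; the same data-vs-code separation underlies defenses against XSS and LDAP injection as well.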


2019 ◽  
Vol 16 (1) ◽  
pp. 41-47
Author(s):  
Jeferson Eleazar Martínez-Lozano ◽  
Pedro Sandino Atencio-Ortiz

This article demonstrates, by exploiting the "Open redirect" vulnerability, how easy it can be to attack web servers through distributed denial-of-service attacks. The Cyber Kill Chain® model is used to carry out the attack in phases. In the research, the UFONet tool is applied systematically and the results obtained are analyzed; it is recommended that Internet application services be protected from such attacks by web application firewalls (WAF), without which application-layer DDoS traffic (including HTTP GET floods) reaches the destination server unhindered.
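The "Open redirect" flaw the attack relies on is typically closed by validating redirect targets against an allowlist; a minimal Python sketch of that check (the host names are hypothetical):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "www.example.com"}  # hypothetical site

def safe_redirect_target(url: str, default: str = "/") -> str:
    """Reject redirect targets that leave the site, closing the
    open-redirect hole abused to bounce traffic at third parties."""
    parsed = urlparse(url)
    if parsed.netloc and parsed.netloc not in ALLOWED_HOSTS:
        return default          # external host: refuse
    if parsed.scheme not in ("", "http", "https"):
        return default          # javascript:, data:, etc.
    return url

print(safe_redirect_target("https://evil.example/p"))  # "/"
print(safe_redirect_target("/news/42"))                # "/news/42"
```

Checking `netloc` as well as `scheme` matters: protocol-relative URLs like `//evil.example/p` carry a host without a scheme and would otherwise slip through.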


2021 ◽  
Vol 5 (3) ◽  
pp. 327
Author(s):  
Agus Tedyyana ◽  
Osman Ghazali

Web servers and web-based applications are now widely used, but the crime rate in cyberspace has also increased. Cybercrime can occur by exploiting how a system works; for example, the way HTTP works can be exploited to weaken a web server. Tools for attacking sites on the internet are becoming easy to find, but so are tools to detect those attacks. One useful approach for detecting attacks and issuing warnings against threats is based on the web logs on the web server. Teler, a relatively new tool, has seen little review as an HTTP intrusion detection system on web servers. Teler inspects the web log, runs in the terminal, and uses rule resources collected from the community. The researchers therefore implement Teler to detect HTTP intrusions on an Nginx-based web server. Intrusions are carried out using attacks commonly employed by attackers, such as port scanning and directory brute force with the Nmap and OWASP ZAP tools, and the detection results are sent via a Telegram bot to the server administrator. The experiments show that Teler can send warning notifications with a delay between detection time and alert receipt of no more than 3 seconds.
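The rule-matching idea behind this kind of log-based detection can be sketched in a few lines of Python (these two patterns are simplified stand-ins; Teler's actual rules are community-maintained and far more extensive):

```python
import re

# Rough signatures of the probes described above: bursts of 404s
# from directory brute force, and well-known scanner user agents.
SUSPICIOUS = [
    re.compile(r'" (403|404) '),                   # repeated misses
    re.compile(r"(nmap|zap|nikto|sqlmap)", re.I),  # scanner UAs
]

def is_suspicious(log_line: str) -> bool:
    """Flag an access-log line that matches any known-bad pattern."""
    return any(p.search(log_line) for p in SUSPICIOUS)

line = '1.2.3.4 - - [10/Oct/2021] "GET /admin/ HTTP/1.1" 404 153 "-" "ZAP"'
print(is_suspicious(line))  # True
```

In the deployment the paper describes, each flagged line would then be forwarded to the administrator through a Telegram bot.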


Author(s):  
Apurva Solanki ◽  
Aryan Parekh ◽  
Gaurav Chawda ◽  
Mrs. Geetha S.

Day by day, the number of users on the internet is increasing, and web servers need to cater to their requests constantly; compared to past years, requests on the web have surged exponentially this year due to a global pandemic and lockdowns in various countries. The complexity of configuring a web server also increases as development continues. In this paper, we propose the Lightron web server, which is highly scalable and can serve many requests at a time. Additionally, to free users from manual configuration of the web server, we introduce a beginner-friendly Graphical User Interface.


Author(s):  
Maria Gribanova-Podkina

The purpose of the study is to demonstrate the diversity of solutions to the problem of connecting to a database, including a description of the developed connection-controller class and the various ways to create connection pools on a web server and on application servers. The article discusses practical issues of using JDBC technology when building a Java web application. In the examples, the presentation and business layers of the application are developed using JSP pages and servlets, and the database runs on the MySQL platform. The described methods for creating and configuring a connection pool are shown using the Apache Tomcat web server and the GlassFish application server. The question of optimizing database connections in Java applications remains open despite the diversity of solutions. The study examines and proposes methods for constructing connector classes and various methods for creating connection pools, and describes the results of solving the problems that arise when implementing the described techniques. A detailed classification of ways to connect to the database is given.
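The article's examples are Java/JSP with JDBC on Apache Tomcat and GlassFish; as a language-neutral sketch of the pooling idea itself, here is a minimal fixed-size pool in Python (sqlite3 stands in for a real database driver):

```python
import queue
import sqlite3
from contextlib import contextmanager

class ConnectionPool:
    """Minimal fixed-size pool: connections are created once up front
    and borrowed/returned instead of being opened per request."""

    def __init__(self, factory, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    @contextmanager
    def connection(self):
        conn = self._pool.get()       # blocks if the pool is exhausted
        try:
            yield conn
        finally:
            self._pool.put(conn)      # always returned, even on error

pool = ConnectionPool(lambda: sqlite3.connect(":memory:"), size=2)
with pool.connection() as conn:
    print(conn.execute("SELECT 1").fetchone())  # (1,)
```

Container-managed pools (Tomcat's JNDI `DataSource`, GlassFish's JDBC resources) implement the same borrow/return discipline, with added validation and sizing policies.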


2020 ◽  
Vol 9 (12) ◽  
pp. 133-138
Author(s):  
Nnodi Joy Tochukwu ◽  
Obasi Emmanuela Chinonye Mary

Computer system performance can be measured by the amount of useful work a system accomplishes relative to the time and resources it uses, where useful work means how well the computer does the work it is supposed to do. Most websites run on web servers comprising hardware (for instance CPU, RAM, disk, and network) as well as software (web services). The alarming growth in web traffic has led to performance problems and prompted much research into improving web server performance. In this paper, a web server performance evaluation system is designed and developed that evaluates a server based on file execution time and bandwidth. Access-log data sets from the University of Port Harcourt website were used to study the system and to develop software for checking server activity times in any domain where the application is executed. The evaluation system can be deployed on any server, and the values it generates can inform decisions that mitigate future incidents; it also gives site owners information useful for forecasting visitor behaviour and its effects on server downtime.
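The raw inputs such an evaluator works from are access-log lines; a Python sketch that tallies per-path hits and bytes served from Common Log Format entries (the log lines are invented, and the paper's execution-time metric is not reproduced here):

```python
import re
from collections import defaultdict

# Common Log Format: host ident user [time] "request" status bytes
LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d+) (\d+|-)')

def summarize(lines):
    """Tally requests and bytes served per path from access-log lines."""
    hits, bytes_out = defaultdict(int), defaultdict(int)
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip malformed lines
        _time, _method, path, _status, size = m.groups()
        hits[path] += 1
        if size != "-":
            bytes_out[path] += int(size)
    return hits, bytes_out

log = [
    '10.0.0.1 - - [10/Oct/2020:13:55:36 +0000] "GET /index.html HTTP/1.0" 200 2326',
    '10.0.0.2 - - [10/Oct/2020:13:55:40 +0000] "GET /index.html HTTP/1.0" 200 2326',
]
hits, bw = summarize(log)
print(hits["/index.html"], bw["/index.html"])  # 2 4652
```

Aggregating these counts over time windows yields the bandwidth figures, and correlating them with timestamps yields the activity-timing view the paper's system reports.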


2013 ◽  
Vol 1 (2) ◽  
pp. 28
Author(s):  
Dite Ardian ◽  
Adian Fatchur Rochim ◽  
Eko Didik Widianto

The development of internet technology has led many organizations to expand their website services. Initially a single web server, accessible to everyone through the Internet, was used; but when very many users access it, the entire traffic load falls on that one web server. Optimization is therefore needed so the web server can cope with the overload it receives when traffic is high. The methodology of this final-project research includes a literature study, system design, and system testing, drawing on related reference books as well as several internet sources. The design uses HAProxy and Pound Links for web server load balancing. The research concludes by testing the network system so as to create a web server system that is reliable and safe. The result is a web server system that can be accessed by many users simultaneously and rapidly, with the HAProxy and Pound Links load balancers set up as a front end, yielding a web server with high performance and high availability.
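A front-end balancer of the kind described is typically a short configuration; a hedged HAProxy sketch (the backend addresses are hypothetical, and the thesis's actual HAProxy/Pound settings are not given in the abstract):

```
frontend http_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
```

The `check` keyword enables health checks, which is what turns simple load sharing into the high availability the result claims: a failed backend is taken out of rotation automatically.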


Author(s):  
Mrunalsinh Chawda ◽  
Dr. Priyanka Sharma ◽  
Mr. Jatin Patel

In modern web applications, a directory traversal vulnerability can allow an attacker to view arbitrary files, including sensitive ones, and exploiting such vulnerabilities or misconfigurations can yield root privileges. When building a web application, ensure that arbitrary files are not publicly reachable through the production server. Traversal vulnerabilities exploit the dynamic file-inclusion mechanisms that exist in programming frameworks: a local file inclusion happens when uncontrolled user input, such as form values or headers, is used to construct a file-include path. By exploiting directory traversal in web servers, and by chaining it with code injection, attackers can upload a shell to the web server and perform a website defacement attack. Related path-manipulation attacks take advantage of vulnerable website parameters by including a URL reference to remotely hosted malicious code, allowing remote code execution and leading to privilege escalation.
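The standard defense against the traversal described is canonicalizing the requested path and rejecting anything that escapes the document root; a minimal Python sketch (the document root is hypothetical):

```python
import os
from typing import Optional

WEB_ROOT = "/var/www/html"  # hypothetical document root

def resolve(requested: str, root: str = WEB_ROOT) -> Optional[str]:
    """Resolve a user-supplied path and refuse anything that escapes
    the document root (e.g. '../../../etc/passwd')."""
    full = os.path.realpath(os.path.join(root, requested.lstrip("/")))
    if not full.startswith(os.path.realpath(root) + os.sep):
        return None  # traversal attempt: outside the root
    return full

print(resolve("index.html"))         # a path inside the root
print(resolve("../../../etc/passwd"))  # None
```

Canonicalizing with `realpath` before the prefix check is essential: comparing the raw string would be defeated by `..` segments and by symlinks inside the root.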

