Trace-Driven Simulation of Document Caching Strategies for Internet Web Servers

SIMULATION ◽  
1997 ◽  
Vol 68 (1) ◽  
pp. 23-33 ◽  
Author(s):  
Martin F. Arlitt ◽  
Carey L. Williamson

Given the continued growth of the World-Wide Web, performance of Web servers is becoming increasingly important. File caching can be used to reduce the time that it takes a Web server to respond to client requests, by storing the most popular files in the main memory of the Web server, and by reducing the volume of data that must be transferred between secondary storage and the Web server. In this paper, we use trace-driven simulation to evaluate the effects of various replacement, threshold, and partitioning policies on the performance of a Web server. The workload traces for the simulations come from Web server access logs, from six different Internet Web servers. The traces represent three different orders of magnitude in server activity and two different orders of magnitude in time duration. The results from our simulation study show that frequency-based caching strategies, using a variation of the Least Frequently Used (LFU) replacement policy, perform best for the Web server workload traces considered. Thresholding policies and cache partitioning policies for Internet Web servers do not appear to be effective.
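A minimal sketch of the kind of size-aware LFU file cache the abstract evaluates, assuming a byte-capacity budget and eviction of the least-frequently-accessed file first; the class and parameter names are illustrative, not the authors' implementation.

```python
# Minimal size-aware LFU file cache sketch (illustrative, not the paper's code).
class LFUFileCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes   # total bytes the cache may hold
        self.used = 0                    # bytes currently cached
        self.freq = {}                   # url -> access count
        self.size = {}                   # url -> file size in bytes

    def access(self, url, nbytes):
        """Record one request; return True on a cache hit."""
        if url in self.size:
            self.freq[url] += 1
            return True
        # Miss: evict least-frequently-used files until the new one fits.
        while self.used + nbytes > self.capacity and self.size:
            victim = min(self.size, key=lambda u: self.freq[u])
            self.used -= self.size.pop(victim)
            del self.freq[victim]
        if nbytes <= self.capacity:      # cache the file if it can ever fit
            self.size[url] = nbytes
            self.freq[url] = 1
            self.used += nbytes
        return False
```

Replaying a server access log through `access()` and counting the hits yields the hit-rate metric that a trace-driven study of this kind reports.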

Author(s):  
Ibrahim Mahmood Ibrahim ◽  
Siddeeq Y. Ameen ◽  
Hajar Maseeh Yasin ◽  
Naaman Omar ◽  
Shakir Fattah Kak ◽  
...  

Today, web services have increased rapidly and are accessed by many users, leading to massive traffic on the Internet. A single web server struggles to manage this traffic as the number of users grows: it becomes overloaded, response times rise, and it turns into a bottleneck, so the traffic must be shared among several servers. Load balancing technologies and server clusters are therefore potent methods for dealing with server bottlenecks. Load balancing techniques distribute the load among the servers in a cluster so that no single web server is overwhelmed. The motivation of this paper is to give an overview of the several load balancing techniques used to enhance the efficiency of web servers in terms of response time, throughput, and resource utilization. Among the algorithms addressed by researchers, the pending-job and IP-hash algorithms achieve particularly good performance.
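As a concrete illustration of one surveyed technique, the sketch below shows IP-hash load balancing, which maps each client address to a fixed backend so requests from the same client consistently reach the same server. The backend list and hash choice are illustrative assumptions, not details from the survey.

```python
# Illustrative IP-hash load balancing: hash the client address to pick a
# backend, so a given client is consistently routed to the same server.
import hashlib

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical server pool


def pick_backend(client_ip: str) -> str:
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[index]


# Every request from 192.0.2.7 lands on the same backend.
print(pick_backend("192.0.2.7"))
```

Consistency of the mapping is what preserves session affinity; the trade-off is that a skewed client population can load the backends unevenly.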


Organizational web servers reflect the public image of an organization and serve web pages/information to organizational clients via web browsers using the HTTP protocol. Some web server software may contain web applications that enable users to perform high-level tasks, such as querying a database and delivering the output through the web server to the client browser as an HTML file. Hackers always try to exploit the different vulnerabilities or flaws existing in web servers and web applications, which can pose a big threat to an organization. This chapter explains the importance of protecting web servers and applications, along with the different tools used for analyzing the security of web servers and web applications. The chapter also introduces different web attacks that are carried out by an attacker either to gain illegal access to the web server data or to reduce the availability of web services. The web server attacks include denial-of-service (DoS) attacks, buffer overflow exploits, website defacement with SQL injection (SQLi) attacks, cross-site scripting (XSS) attacks, remote file inclusion (RFI) attacks, directory traversal attacks, phishing attacks, brute force attacks, source code disclosure attacks, session hijacking, parameter form tampering, man-in-the-middle (MITM) attacks, HTTP response splitting attacks, cross-site request forgery (XSRF), lightweight directory access protocol (LDAP) attacks, and hidden field manipulation attacks. The chapter explains different web server and web application testing tools and vulnerability scanners, including Nikto, Burp Suite, Paros, IBM AppScan, Fortify, Acunetix, and ZAP. Finally, the chapter discusses countermeasures to be implemented while designing any web application for an organization in order to reduce the risk.
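As one hedged example of the countermeasures such a chapter typically recommends, the sketch below contrasts an injectable string-built SQL query with a parameterized one; the table, column names, and input are hypothetical.

```python
# Hypothetical login lookup illustrating the standard SQLi countermeasure:
# pass user input as bound parameters instead of splicing it into SQL text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

name = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input becomes part of the SQL statement itself.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % name).fetchall()

# Safe: the driver treats the input strictly as a value, never as SQL.
rows_ok = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(len(rows_bad), len(rows_ok))  # 1 (injected match) vs 0
```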


Author(s):  
Dilip Singh Sisodia

Web robots are autonomous software agents used for crawling websites in a mechanized way, for both non-malicious and malicious reasons. With the popularity of Web 2.0 services, web robots are also proliferating and growing in sophistication. Web servers are flooded with access requests from web robots. The web access requests are recorded in the form of web server logs, which contain significant knowledge about the web access patterns of visitors. The presence of web robot access requests in log repositories distorts the actual access patterns of human visitors. The actual web access patterns of human visitors are potentially useful for enhancing services for greater user satisfaction or for optimizing server resources. In this chapter, the correlative access patterns of human visitors and web robots are discussed using the web server access logs of a portal.
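A minimal sketch of how robot traffic might be separated from human traffic in a server log, assuming the common heuristics of a robots.txt fetch or a crawler keyword in the user-agent string; the record format and heuristics are illustrative, not the chapter's method.

```python
# Illustrative robot-vs-human split of web server log records using two
# common heuristics; real studies typically combine many more signals.
ROBOT_HINTS = ("bot", "crawler", "spider", "slurp")  # user-agent keywords


def looks_like_robot(path: str, user_agent: str) -> bool:
    if path == "/robots.txt":            # polite crawlers fetch this first
        return True
    ua = user_agent.lower()
    return any(hint in ua for hint in ROBOT_HINTS)


records = [
    ("/robots.txt", "Googlebot/2.1"),
    ("/index.html", "Mozilla/5.0 (Windows NT 10.0)"),
]
robots = [r for r in records if looks_like_robot(*r)]
humans = [r for r in records if not looks_like_robot(*r)]
print(len(robots), len(humans))  # 1 1
```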


Respati ◽  
2020 ◽  
Vol 15 (2) ◽  
pp. 6
Author(s):  
Lukman Lukman ◽  
Melati Suci

Network security on a web server is the most important element in guaranteeing integrity and service for users. Web servers are often the target of attacks that result in data damage. One such attack is the SYN flood, a type of denial-of-service (DoS) attack that sends a massive number of SYN requests to the web server. To strengthen web server network security, an Intrusion Detection System (IDS) is applied to detect, monitor, and analyze attacks on the web server. The IDS software most often used is Snort and Suricata, each of which has its own advantages and disadvantages. The purpose of this study is to compare the two IDSs on the Linux operating system: a SYN flood attack is launched against the web server, and the Snort and Suricata instances installed on the web server raise alerts when the attack occurs. In determining the results of the comparison, the reference parameters are the number of attacks detected and the attack-detection effectiveness of the two IDSs. Keywords: Network Security, Web Server, IDS, SYN Flood, Snort, Suricata.
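A minimal sketch of the comparison the study describes: given the number of SYN packets sent and the alert count each IDS produced, detection effectiveness can be computed as detected/sent. The counts below are hypothetical placeholders, not the paper's results.

```python
# Hypothetical numbers illustrating the study's comparison metric:
# effectiveness = alerts raised / attack packets sent.
SENT = 10_000                                   # SYN packets sent by the attacker
alerts = {"snort": 9_600, "suricata": 9_800}    # placeholder alert counts

for ids, detected in alerts.items():
    effectiveness = detected / SENT
    print(f"{ids}: {detected} detected, {effectiveness:.1%} effective")
```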


2021 ◽  
Vol 5 (1) ◽  
pp. 132-138
Author(s):  
Hataw Jalal Mohammed ◽  
Kamaran Hama Ali Faraj

The web servers (WSGI-Python) and (PHP-Apache) sit in the middleware tier of a three-tier architecture; the middleware tier lies between the frontend tier and the backend tier, connecting the two. The e-learning systems were designed with two different dynamic web technologies: first with Python-WSGI and second with Personal Home Page (PHP-Apache). The two websites were built with different open-source, cross-platform web programming languages, namely Python and PHP, with the same structure and weight, and their performance was evaluated over two different operating systems: 1) Windows-16 and 2) Linux Ubuntu 20.04. Both systems run on the same computer architecture (64-bit) as the server side, with a common MySQL web database as the backend for both. The middleware for PHP is the cross-platform Apache MySQL PHP Perl stack (XAMPP), while the middleware for Python is PyCharm and the Web Server Gateway Interface (WSGI). WSGI and Apache are both web servers, and this paper shows which of them has the better response time (RT). The experimental results demonstrate that, although Python-WSGI is heavier in Mbytes than PHP-Apache, Python is still faster and more accurate than PHP. The SPG was designed by hand-written code: once in PHP source code and once in Python source code. The Python-WSGI and PHP-Apache results are compared by the least time in milliseconds, taking enhanced performance into account.
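For reference, a WSGI application is just a Python callable that the server invokes once per request; below is a minimal sketch served with the standard library's reference WSGI server. The handler body and port are illustrative, not the paper's e-learning application.

```python
# Minimal WSGI application: the server calls `app` once per request and
# streams back whatever bytes it returns. Served here with the standard
# library's reference server; the paper's PyCharm/WSGI setup would differ.
from wsgiref.simple_server import make_server


def app(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]


if __name__ == "__main__":
    with make_server("", 8000, app) as httpd:  # hypothetical port
        httpd.serve_forever()
```

Timing repeated requests against such an endpoint, and against the equivalent PHP page under Apache, is the kind of millisecond-level response-time comparison the abstract describes.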


Author(s):  
Apurva Solanki ◽  
Aryan Parekh ◽  
Gaurav Chawda ◽  
Mrs. Geetha S.

Day by day, the number of users on the internet is increasing, and web servers need to cater to requests constantly; moreover, compared to past years, requests on the web have surged exponentially this year due to a global pandemic and lockdowns in various countries. The complexity of configuring a web server is also increasing as development continues. In this paper, we propose the Lightron web server, which is highly scalable and can cater to many requests at a time. Additionally, to spare users the complexity of configuring the web server, we introduce a beginner-friendly graphical user interface.
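The abstract does not describe Lightron's internals, so as a hedged illustration of the scalability goal only, the sketch below uses Python's standard threading HTTP server, which handles each request on its own thread; production-scale designs typically use event loops or worker pools instead.

```python
# Illustrative concurrent HTTP server: ThreadingHTTPServer dispatches each
# incoming request to a separate thread, so slow clients don't block others.
# This is a generic sketch, not Lightron's actual architecture.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    ThreadingHTTPServer(("", 8080), Handler).serve_forever()  # hypothetical port
```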


2014 ◽  
Vol 3 (4) ◽  
pp. 1-16 ◽  
Author(s):  
Harikesh Singh ◽  
Shishir Kumar

Load balancing applications introduce delays due to load relocation among various web servers, and these delays depend upon the design of the balancing algorithm and the resources to be shared in large, wide-area applications. The performance of web servers depends upon the efficient sharing of resources, and it can be evaluated by the overall completion time of the tasks under the load balancing algorithm. Each load balancing algorithm introduces delay in task allocation among the web servers, yet still improves the performance of the web servers dynamically. As a result, the queue length of a web server and the average waiting time of tasks decrease at load balancing instants under zero, deterministic, and random types of delay. In this paper, the effects of delay due to load balancing have been analyzed in terms of two factors: average queue length and average waiting time of tasks. In the proposed Ratio Factor Based Delay Model (RFBDM), these factors are minimized, improving the functioning of the web server system based on the average task completion time of each web server node. Based on the ratio of average task completion times, the average queue length and average waiting time of the tasks allocated to a web server have been analyzed and simulated with Monte Carlo simulation. The simulation results show that the effects of delay, in terms of average queue length and average waiting time, are smaller under the proposed model than under existing delay models of web servers.
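A toy Monte Carlo sketch of the kind of experiment the abstract describes: tasks arrive at a pool of servers, the balancer assigns each to the server with the shortest backlog after a fixed relocation delay, and the average waiting time is measured (queue length can be tracked analogously). All parameters are illustrative assumptions, not the paper's RFBDM.

```python
# Toy Monte Carlo of load balancing with a relocation delay: each arriving
# task joins the currently least-backlogged server, paying a fixed balancing
# delay. Parameters are illustrative; this is not the paper's RFBDM model.
import random

random.seed(1)
N_SERVERS, N_TASKS = 4, 10_000
BALANCE_DELAY = 0.2          # time lost relocating a task (assumed)
MEAN_SERVICE = 1.0           # mean service time (assumed)

free_at = [0.0] * N_SERVERS  # when each server next becomes idle
clock, waits = 0.0, []

for _ in range(N_TASKS):
    clock += random.expovariate(1.2 / MEAN_SERVICE)      # next arrival
    s = min(range(N_SERVERS), key=lambda i: free_at[i])  # shortest backlog
    start = max(clock + BALANCE_DELAY, free_at[s])
    waits.append(start - clock)
    free_at[s] = start + random.expovariate(1.0 / MEAN_SERVICE)

print("average waiting time:", sum(waits) / len(waits))
```

Sweeping `BALANCE_DELAY` over zero, deterministic, and random values reproduces the three delay regimes the paper analyzes.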


Author(s):  
Mrunalsinh Chawda ◽  
Dr. Priyanka Sharma ◽  
Mr. Jatin Patel

In modern web applications, a directory traversal vulnerability can potentially allow an attacker to view arbitrary files, including sensitive ones. Attackers can exploit such vulnerabilities or misconfigurations to obtain root privileges. When building a web application, ensure that arbitrary files are not publicly available via the production server. Traversal vulnerabilities exploit the dynamic file-include mechanism that exists in programming frameworks: a local file inclusion happens when uncontrolled user input, such as form values or headers, is used to construct a file-include path. By exploiting directory traversal attacks on web servers, attackers can go further: chained with code injection, they can upload a shell to the web server and perform a website defacement attack. Path traversal attacks take advantage of vulnerable website parameters by including a URL reference to remotely hosted malicious code, allowing remote code execution and leading to a privilege escalation attack.
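As a hedged illustration of the standard defense against the attack described here, the sketch below canonicalizes a requested path and rejects anything that escapes the document root; the root directory and function name are hypothetical.

```python
# Illustrative directory traversal defense: resolve the requested path and
# refuse to serve anything outside the document root.
from pathlib import Path

DOC_ROOT = Path("/var/www/html").resolve()  # hypothetical document root


def safe_open(requested: str) -> bytes:
    target = (DOC_ROOT / requested.lstrip("/")).resolve()
    if not target.is_relative_to(DOC_ROOT):  # requires Python 3.9+
        raise PermissionError("path escapes the document root")
    return target.read_bytes()


# A request like "../../etc/passwd" resolves outside DOC_ROOT and is rejected.
```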


2020 ◽  
Vol 36 (11) ◽  
pp. 3568-3569 ◽  
Author(s):  
Jian-Peng Zhou ◽  
Lei Chen ◽  
Tianyun Wang ◽  
Min Liu

Abstract
Motivation: The Anatomical Therapeutic Chemical (ATC) classification system is very important for drug utilization and studies. Correctly predicting the 14 classes in the first level for given drugs is an essential problem for the study of such a system. Several multi-label classifiers have been proposed in this regard. However, only two of them provide web servers, and their performance is not very high. On the other hand, although some of the remaining classifiers can provide better performance, they were built on prior knowledge about drugs, such as chemical-chemical interaction information and chemical ontology, leading to limited applications. Furthermore, the code provided for these classifiers is largely inaccessible to pharmacologists.
Results: In this study, we built a simple web server, namely iATC-FRAKEL. This web server only requires the SMILES format of drugs as input and extracts their fingerprints to make predictions. The performance of iATC-FRAKEL is much higher than that of all existing web servers and is comparable to the best multi-label classifier, while having much wider applicability. The web server can be visited at http://cie.shmtu.edu.cn/iatc/index.
Availability and implementation: The web server is available at http://cie.shmtu.edu.cn/iatc/index.
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
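The pipeline the abstract describes, SMILES in, fingerprint out, then prediction, can be sketched with a common cheminformatics toolkit. RDKit and the Morgan bit vector below are assumptions for illustration; the paper does not state which fingerprint or library iATC-FRAKEL uses.

```python
# Illustrative SMILES -> fingerprint step, assuming RDKit and a Morgan
# (ECFP-like) bit vector; iATC-FRAKEL's actual fingerprint may differ.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(=O)Oc1ccccc1C(=O)O"           # aspirin, as an example input
mol = Chem.MolFromSmiles(smiles)           # parse the SMILES string
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

features = list(fp)                        # 0/1 feature vector for a classifier
print(sum(features), "bits set out of", len(features))
```

The resulting bit vector is the kind of input a multi-label classifier would score against the 14 first-level ATC classes.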


2012 ◽  
Vol 198-199 ◽  
pp. 439-443
Author(s):  
Hua Xia Wang ◽  
Xiu Pin Zeng ◽  
Li Jun Deng ◽  
Yi Na Guo

To address the low effectiveness and passive nature of current monitoring of domestic web servers, this paper presents the design of a web server monitoring system based on the ACE framework, which is combined with the government's website approval process, and gives a detailed account of the design of the network transmission subsystem and the choice of a non-blocking I/O model based on the ACE framework.
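ACE is a C++ framework whose Reactor implements event-driven, non-blocking I/O; as a language-neutral sketch of that same pattern, the snippet below uses Python's selectors module. It illustrates the non-blocking model the paper chose, not the paper's ACE/C++ code, and the port is a hypothetical choice.

```python
# Sketch of the non-blocking, event-driven I/O pattern (ACE's Reactor style)
# using Python's selectors module; the monitored port is hypothetical.
import selectors
import socket

sel = selectors.DefaultSelector()
listener = socket.socket()
listener.bind(("", 9000))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    for key, _ in sel.select():        # block only until some socket is ready
        if key.fileobj is listener:    # new monitoring connection
            conn, _ = listener.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:                          # data from an existing connection
            data = key.fileobj.recv(4096)
            if not data:               # peer closed: stop watching it
                sel.unregister(key.fileobj)
                key.fileobj.close()
```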

