Web Traffic Modeling for E-Commerce Web Server System

Author(s): Leszek Borzemski, Grażyna Suchacka
2011, Vol 143-144, pp. 346-349

Author(s): Li Na Zhang, Xue Si Ma

With the explosive growth of Internet use, contemporary web servers are susceptible to overloads, during which their service deteriorates drastically and often degenerates into denial of service. Many companies address this problem by deploying multiple web servers behind a front-end load balancer; load balancing has been found to be an effective and scalable way of managing ever-increasing web traffic, and it is one of the central problems to be solved in a parallel queueing web server system. To analyze load balancing, this paper presents a queueing system with two web servers. First, the centralized load-balancing system is considered. Next, two routing policies are studied, and the average response time and the rejection rate are derived. Finally, some of the results are discussed further.
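The abstract does not name the two routing policies, so the following sketch compares two common choices for a two-server system with finite buffers: random routing and join-the-shortest-queue. It is a minimal discrete-event simulation, not the paper's analytical model; all parameters (arrival rate, service rate, buffer size) are hypothetical, chosen only to illustrate how average response time and rejection rate can be estimated.

```python
import random

def simulate(policy, arrival_rate=1.6, service_rate=1.0,
             buffer_size=10, n_arrivals=50_000, seed=42):
    """Simulate two parallel single-server queues with finite buffers.

    policy(qlens, rng) -> index (0 or 1) of the server for the next job.
    Returns (average response time, rejection rate).
    """
    rng = random.Random(seed)
    t = 0.0
    departures = [[], []]  # pending departure times per server (sorted)
    total_resp, served, rejected = 0.0, 0, 0

    for _ in range(n_arrivals):
        t += rng.expovariate(arrival_rate)  # Poisson arrivals
        # drop jobs that have already left each server
        for i in (0, 1):
            departures[i] = [d for d in departures[i] if d > t]
        qlens = [len(departures[0]), len(departures[1])]
        i = policy(qlens, rng)
        if qlens[i] >= buffer_size:
            rejected += 1               # buffer full: request is rejected
            continue
        start = departures[i][-1] if departures[i] else t
        finish = max(start, t) + rng.expovariate(service_rate)
        departures[i].append(finish)
        total_resp += finish - t        # response time = wait + service
        served += 1

    return total_resp / served, rejected / n_arrivals

# Two example routing policies.
random_policy = lambda qlens, rng: rng.randrange(2)
jsq_policy = lambda qlens, rng: 0 if qlens[0] <= qlens[1] else 1
```

Under equal load, join-the-shortest-queue should yield a lower average response time and rejection rate than blind random routing, which is the kind of comparison the derived metrics in the paper support.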


1999, Vol 27 (3), pp. 24-27
Author(s): Mark S. Squillante, David D. Yao, Li Zhang

2019, Vol 2 (3), pp. 266
Author(s): Nongki Angsar

The growth of web traffic and of network bandwidth, which is advancing relatively faster than microprocessor technology, means that a single-server platform is no longer sufficient to meet the scalability requirements of web server systems. Multiple-server platforms are the answer, and one known solution is the cluster-based web server system. In this study, a cluster-based web server system was designed with the Never Queue scheduling algorithm, and the distribution of web workload across the system was tested. The tests were carried out by generating HTTP workloads statically (a fixed rate of HTTP requests per second) and dynamically (a request rate that rises steadily) from the client to the pool of web servers, followed by analysis of the packet traffic. Static testing showed that the Never Queue algorithm distributed HTTP requests across the web server pool properly and yielded HTTP reply rates that tended to be stable, averaging 1031.8 replies/s. In the dynamic tests, TCP connections, response times, and errors increased as the generated HTTP request rate rose. The average throughput was 2.983 Mbps.
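The abstract does not spell out the Never Queue dispatching rule, so the following is a minimal sketch of the heuristic as it is commonly described for the Linux Virtual Server NQ scheduler: if any backend is idle, route the request there immediately; otherwise fall back to shortest expected delay, `(active + 1) / weight`. The server list format and names are hypothetical, for illustration only.

```python
def never_queue(servers):
    """Pick a backend using a Never Queue-style heuristic.

    `servers` is a list of (name, active_connections, weight) tuples.
    An idle server is chosen immediately so no request waits while a
    server sits idle; otherwise the server with the smallest expected
    delay (active + 1) / weight is chosen.
    """
    for name, active, weight in servers:
        if active == 0:
            return name  # never queue behind busy servers if one is idle
    # all servers busy: fall back to shortest expected delay
    return min(servers, key=lambda s: (s[1] + 1) / s[2])[0]
```

For example, with backends `[("node1", 3, 1), ("node2", 0, 1)]` the idle `node2` is chosen outright, while with all nodes busy the weighted-delay fallback decides.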


2018, Vol 24 (3), pp. 2118-2121
Author(s): Sungwook Yoon, Andrew G Kim, Hyenki Kim
2019, Vol 8 (1), pp. 1-5
Author(s): Marvin Chandra Wijaya

The performance of web request processing needs to increase to keep up with the growth of Internet usage; one approach is to use a cache on a web proxy server. This study examines the implementation of a proxy cache replacement algorithm to increase cache hits on the proxy server. The study was conducted by building a clustered (distributed) web server system using eight web server nodes. The system improved latency by 90% and increased throughput by a factor of 5.33.
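The abstract does not name the replacement algorithm studied, so the sketch below uses least-recently-used (LRU) eviction, one of the standard proxy cache replacement policies, purely as an illustrative assumption. It shows the mechanism the study measures: serving repeated requests from the cache (hits) instead of the origin servers, with a hit-ratio counter. The class and its interface are hypothetical, not from the paper.

```python
from collections import OrderedDict

class LRUProxyCache:
    """Minimal proxy cache sketch with LRU replacement (an assumption;
    the paper does not specify its algorithm)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> cached response body
        self.hits = self.misses = 0

    def get(self, url, fetch):
        """Return the response for `url`, fetching from the origin
        (via the `fetch` callable) only on a cache miss."""
        if url in self.store:
            self.store.move_to_end(url)  # mark as most recently used
            self.hits += 1
            return self.store[url]
        self.misses += 1
        body = fetch(url)                # miss: go to the origin server
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return body

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

A higher hit ratio means fewer round trips to the clustered backend nodes, which is the mechanism behind the latency and throughput gains the study reports.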

