Improving Client-Perceived Performance Based on Efficient Distribution of Contents in Content Delivery Network

2019 ◽  
Vol 16 (9) ◽  
pp. 3874-3878
Author(s):  
Meenakshi Gupta ◽  
Atul Garg

A Content Delivery Network (CDN) aims to improve the performance of content delivery to end-users of popular websites. It replicates contents at different geographic locations so that they can be served from a point closer to the end-users, raising network and, as a result, web performance. For proper utilization of CDN resources, it is important to distribute popular contents efficiently over the surrogate servers. Most CDNs have a large number of surrogate servers, which requires coordination among them to improve the overall capability of the CDN system while limiting cost. This paper suggests a technique to efficiently distribute contents over surrogate servers that cooperate with one another to improve the quality of service (QoS) of web content delivery to clients (end-users) in terms of response time.
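As a rough illustration of cooperative, popularity-driven placement of this kind (the paper's own technique is not reproduced here), the Python sketch below greedily assigns the most-requested objects to the surrogate closest to the demand that still has spare capacity; the names (Surrogate, place_contents, latency_ms) and the greedy rule are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Surrogate:
    name: str
    capacity: int                                      # objects this server can hold
    latency_ms: dict = field(default_factory=dict)     # region -> estimated RTT
    stored: set = field(default_factory=set)

def place_contents(demand, surrogates):
    """demand: {(content, region): request_count}; most-requested pairs are placed first."""
    for (content, region), _ in sorted(demand.items(), key=lambda kv: -kv[1]):
        # Try the surrogates closest to the requesting region first.
        for s in sorted(surrogates, key=lambda s: s.latency_ms.get(region, float("inf"))):
            if content in s.stored:
                break                                  # already available nearby; rely on cooperation
            if len(s.stored) < s.capacity:
                s.stored.add(content)                  # replicate here
                break
    return surrogates
```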

Online access has been increasing rapidly with the digitization of information, cheaper Internet service and affordable devices to access the Internet. This calls not only for handling a growing number of web requests but also for meeting the Quality of Service (QoS) requirements of end-users. A Content Delivery Network (CDN) improves the performance of the origin server by storing popular contents on surrogate servers, through which the contents are disseminated to web users. The performance of a CDN system relies on selecting an appropriate surrogate server to satisfy end-users' requests. The proposed method, named Load Balancing using Neighbors and Utility Computing (LBNUC), takes into account the request arrival rate, the load on surrogate servers, end-users' changing demand and the capacity of surrogate servers. The aim is efficient utilization of CDN resources to minimize both the time required to serve end-users' requests and the cost of servicing them. The method is also effective in handling flash crowd situations by monitoring the request rate: it handles such situations with support from neighbor surrogate servers and, if required, by arranging additional resources through utility computing to meet the QoS requirements of end-users.
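A minimal Python sketch of the routing idea described above, assuming a utilisation threshold for detecting a flash crowd and simple .load/.capacity attributes on each server; the threshold and names are illustrative, not the parameters of LBNUC itself.

```python
# Illustrative request routing in the spirit of LBNUC: serve locally under
# normal load, offload to the least-utilised neighbour during a flash crowd,
# and fall back to renting extra (utility-computing) capacity when the
# neighbours are saturated too. Threshold and attribute names are assumptions.
def route_request(local, neighbours, flash_threshold=0.8, provision=None):
    """local/neighbours: objects exposing .load and .capacity; provision() returns a new server."""
    if local.load / local.capacity < flash_threshold:
        return local                                   # normal case: serve from the local surrogate
    candidates = [n for n in neighbours if n.load / n.capacity < flash_threshold]
    if candidates:
        return min(candidates, key=lambda n: n.load / n.capacity)
    return provision() if provision else local         # all saturated: provision utility resources
```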


Author(s):  
Suman Jayakumar ◽  
Prakash Sheelvanthmath ◽  
Channappa Baslingappa Akki

The content placement algorithm is an integral part of a cloud-based content delivery network. It is responsible for selecting precisely which content to store on the surrogate servers distributed over a geographical region. Although various works have already been carried out in this area, most of them have gaps that have received little attention. Quality of service, quality of experience and cost are well known to be essential objectives for improvement in existing work, yet there are other aspects and underlying factors that are equally important from a design perspective. Therefore, this paper reviews the existing approaches to content placement algorithms over cloud-based content delivery networks with the aim of exploring open research issues.


Author(s):  
Meenakshi Gupta ◽  
Atul Garg

Web content delivery is based on the client-server model. In this model, all web requests for specific contents are serviced by a single web server, as the requested contents reside only on that server. Therefore, with the increasing reliance on the web, the load on web servers is rising, causing scalability, reliability and performance issues for web service providers. Various techniques have been implemented to handle these issues and improve the Quality of Service of web content delivery to end-users, such as clustering of servers, client-side caching, proxy server caching, mirroring of servers, multihoming and Content Delivery Network (CDN). This paper gives an analytical and comparative look at these approaches. It also compares CDN with other distributed systems such as grid, cloud and peer-to-peer computing.


Media streaming has gained popularity due to the convenience of playing media at one's own leisure, and it demands smooth playback. However, with the increasing trend of media streaming and the growing number of online users, it is getting difficult for providers of popular media contents to handle playback requests for popular media files. A large number of simultaneous requests for media contents may affect uniform delivery and can lead to lower engagement of end-users. A Content Delivery Network (CDN) plays an important role in streaming popular media contents by satisfying end-users' requests through surrogate servers. However, in order to enhance the end-user experience, it is not sufficient to only reduce the response time of media segments; the number of stalls during streaming must also be kept low. This entails redirecting requests to suitable surrogate servers as well as managing the time between deliveries of subsequent segments of a media file. The proposed method, named Stall Aware Media Streaming (SAMS), focuses on enhancing the end-user experience by reducing wait time during media streaming. It keeps track of the possibility of stalls and adjusts the delivery rate of media segments to end-users accordingly. This helps content providers better meet the Quality of Service (QoS) requirements of end-users for media streaming.
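A rough Python sketch of the stall-aware pacing idea, assuming a simple per-client buffer model ('buffered_s') and a delivery-time estimate; the field names and the priority rule are illustrative, not the SAMS algorithm itself.

```python
# Illustrative stall-aware segment scheduling: if the client's buffer would
# drain before the next segment arrives, raise that delivery's priority.
# The buffer model and field names are assumptions for illustration.
def schedule_segment(client, segment_duration_s, est_delivery_s):
    """client: dict with 'buffered_s' (seconds of media currently buffered)."""
    stall_expected = client['buffered_s'] < est_delivery_s
    client['priority'] = 'high' if stall_expected else 'normal'
    # The buffer drains while the segment is in flight, then gains one segment.
    client['buffered_s'] = max(0.0, client['buffered_s'] - est_delivery_s) + segment_duration_s
    return client['priority']
```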


Author(s):  
S. Dhanalakshmi ◽  
T. Prabakaran ◽  
Krishna Kishore

A Content Delivery Network is a network of servers hosted by a service provider at multiple locations around the world so that content can be delivered from the server nearest to the consumer requesting it. It has evolved to overcome the inherent limitations of the Internet regarding user-perceived Quality of Service (QoS) when accessing web content. It has been proposed to maximize bandwidth, improve accessibility and maintain correctness through content replication. The content is distributed to cache servers located close to the users, resulting in fast, reliable applications and web services. In this paper we provide a comprehensive taxonomy of the components and technologies of CDNs, with broad coverage of organizational structure, content distribution mechanisms, request redirection techniques, and performance measurement methodologies.


Author(s):  
Folasade Ayankoya ◽  
Olubukola Ajayi ◽  
Blaise Ohwo

Mobile broadband utilizing Long-Term Evolution (LTE) has advanced the field of data transmission, with networks capable of providing broadband speeds to mobile users. Utilization of LTE networks has grown rapidly, but with this growth in network links and network services, issues such as poor Quality of Service (QoS) perceived by mobile users begin to arise. The quality of service of a data network degrades over time when the network cannot keep up with the growing demand for its resources. This research reviewed various existing content delivery network models to understand their overall architecture and operation. An optimized model was developed and integrated into existing Long-Term Evolution network models. The model was evaluated using the Network Simulator (NS-3) and Quality of Service (QoS) metrics such as network throughput, round trip time, bandwidth, packet loss, jitter and connection ratio. The simulation results showed that the optimized model performed better and more efficiently than previous solutions; if implemented in mobile broadband, it would improve the Quality of Service, network throughput and overall performance of the network. The study concluded that a cloud-based content delivery network, by actively redirecting network traffic to the nearest replica server on the network edge, provides a solution that would help improve the Quality of Service experienced by mobile broadband subscribers while increasing efficiency and throughput.
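As a simple illustration of the edge-redirection principle the study builds on (not the NS-3 model itself), the sketch below steers a request to the lowest-RTT replica that holds the object and falls back to the origin otherwise; the data layout is an assumption.

```python
# Illustrative nearest-replica redirection: pick the edge server with the
# lowest estimated RTT that already holds the requested object.
def pick_replica(content, replicas):
    """replicas: list of dicts like {'host': str, 'rtt_ms': float, 'objects': set}."""
    holding = [r for r in replicas if content in r['objects']]
    if not holding:
        return None                        # cache miss everywhere: fall back to the origin server
    return min(holding, key=lambda r: r['rtt_ms'])
```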


Author(s):  
Nupur Goyal ◽  
Tanuja Joshi ◽  
Mangey Ram

Content Delivery Networks (CDNs) are the backbone of the Internet, and a lot of research has been done to make them more reliable. Despite that, the world has suffered from CDN inefficiencies quite a few times, not just due to external hacking attempts but due to internal failures as well. In this research work the authors have analyzed the performance of a content delivery network through various reliability measures. Considering a basic CDN workflow, they have calculated the reliability and availability of the proposed multi-state system using a Markov process and the Laplace transformation. Software or hardware failures in any network component can affect the reliability of the whole system. Therefore, the authors have analyzed the obtained results to find the major causes of failures in the system which, when avoided, can lead to a faster and more efficient distribution network.
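For orientation only, the kind of Markov/Laplace analysis referred to can be illustrated with a two-state (up/down) component having constant failure rate λ and repair rate μ; this toy model is an assumption and is far simpler than the paper's multi-state system.

```latex
% Illustrative two-state Markov availability model (not the paper's multi-state system).
% Failure rate \lambda, repair rate \mu, initial condition P_up(0) = 1.
\begin{align}
  \frac{\mathrm{d}P_{\mathrm{up}}(t)}{\mathrm{d}t}
      &= -\lambda\,P_{\mathrm{up}}(t) + \mu\,P_{\mathrm{down}}(t), \\
  s\,\bar{P}_{\mathrm{up}}(s) - 1
      &= -\lambda\,\bar{P}_{\mathrm{up}}(s) + \mu\,\bar{P}_{\mathrm{down}}(s)
      \quad\text{(Laplace transform)}, \\
  A &= \lim_{t\to\infty} P_{\mathrm{up}}(t) = \frac{\mu}{\lambda+\mu},
  \qquad R(t) = e^{-\lambda t}\ \text{(reliability with no repair)}.
\end{align}
```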


2021 ◽  
Vol 14 (4) ◽  
pp. 18-32
Author(s):  
S. Sajitha Banu ◽  
S. R. Balasundaram

Cloud computing is a technology to store, process and manage data virtually over remote data centers through the Internet. With the rapid growth of cloud services, content distribution networks broadly use them to deliver data all over the globe; however, the rapid generation of data makes delivery over the network a challenging problem, and as the number of replicas increases, the storage cost increases with it. This is a major issue in cloud-based content delivery networks. To overcome it, the authors developed a new model for a cloud-based CDN with a cost optimization algorithm, STLM (storage, traffic, latency cost minimization), which reduces the number of replicas in order to optimize the cost of storage and the cost of content delivery. The authors compared the proposed STLM algorithm with other existing algorithms through a simulation based on YouTube e-learning data retrieval. The proposed algorithm places contents efficiently on geographically dispersed proxy servers in the cloud to meet quality of service (QoS) and quality of experience (QoE) requirements.
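The replica-count versus cost trade-off described above can be sketched as below; the cost coefficients, data layout and greedy pruning rule are illustrative assumptions, not the authors' STLM algorithm.

```python
# Illustrative storage/traffic/latency cost trade-off: fewer replicas cut
# storage cost but raise the latency penalty paid per request. All
# coefficients and the greedy pruning rule are assumptions.
def total_cost(replicas, requests, c_storage=1.0, c_egress=0.01, c_latency=0.05):
    """replicas: [{'size_gb': float, 'latency_ms': {region: rtt}}]; requests: {region: count}."""
    storage = c_storage * sum(r['size_gb'] for r in replicas)
    delivery = 0.0
    for region, n in requests.items():
        rtt = min(r['latency_ms'].get(region, float('inf')) for r in replicas)
        delivery += n * (c_egress + c_latency * rtt)   # flat egress charge + latency penalty
    return storage + delivery

def prune_replicas(replicas, requests):
    """Greedily drop any replica whose removal lowers the total cost."""
    improved = True
    while improved and len(replicas) > 1:
        improved = False
        for r in list(replicas):
            trial = [x for x in replicas if x is not r]
            if total_cost(trial, requests) < total_cost(replicas, requests):
                replicas, improved = trial, True
                break
    return replicas
```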


Author(s):  
Sujie Shao ◽  
Weichao Gong ◽  
Huifeng Yang ◽  
Shaoyong Guo ◽  
Liandong Chen ◽  
...  
