origin server
Recently Published Documents

TOTAL DOCUMENTS: 6 (five years: 2)
H-INDEX: 1 (five years: 0)

This paper proposes a content delivery network (CDN) architecture based on big data for power saving. Video content falls into two types: hot content, which is accessed frequently, and cold content, which is accessed infrequently. A CDN consists of an origin server and CDN cache servers; a cache server holds replicated content and serves it to nearby end users, so each user receives requested content from the closest location with low latency. The proposed architecture powers off the cold-content server within a CDN cache site when the number of cold-content requests decreases. This power-saving design is expected to help content providers deliver multimedia streaming services with low power consumption.
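A minimal sketch of the hot/cold power-management policy this abstract describes: content is classified by per-window request counts, and the cold-content server is powered off when cold-content demand falls below a threshold. The class names, thresholds, and windowing scheme are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of the hot/cold power-saving policy; thresholds,
# class names, and the windowing scheme are assumptions, not the paper's.

COLD_THRESHOLD = 10       # requests per window at or above which content is "hot"
POWER_OFF_THRESHOLD = 50  # cold-content requests per window needed to keep
                          # the cold-content server powered on

class CacheSite:
    def __init__(self):
        self.request_counts = {}   # content_id -> requests in current window
        self.cold_server_on = True

    def record_request(self, content_id):
        self.request_counts[content_id] = self.request_counts.get(content_id, 0) + 1

    def end_of_window(self):
        # Classify content by access frequency within the window.
        hot = {c for c, n in self.request_counts.items() if n >= COLD_THRESHOLD}
        cold_requests = sum(n for c, n in self.request_counts.items() if c not in hot)
        # Power off the cold-content server when cold demand is low; cold
        # requests then fall back to the origin server.
        self.cold_server_on = cold_requests >= POWER_OFF_THRESHOLD
        self.request_counts.clear()
        return hot, self.cold_server_on
```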


Author(s):  
Nay Myo Sandar

Over the last decades, Content Delivery Networks (CDNs) have been developed to overcome user-perceived latency by replicating content from the origin server to content servers around the globe, close to clients. Because some content occupies most of the storage capacity and processing power of traditional private content servers, cloud computing can provide a pool of storage and processing resources for caching content. By adopting cloud computing for the CDN, a content provider can use cloud infrastructure, distributing content to cloud servers that then deliver it to nearby clients. In this paper, we propose a cloud-based CDN framework built on two schemes: 1) UDP/TCP-based content distribution from the origin server to cloud servers, and 2) SDN-based cloud server coordination. In addition, we formulate the optimal content placement problem as a binary integer program that minimizes the total cost of renting cloud resources, including storage, processing power, and network bandwidth, for hosting content from the origin server. The optimal solution obtained from the binary integer program is then evaluated against a greedy algorithm through simulations. The proposed framework helps content providers offer high quality of service to clients while minimizing the cost of rented cloud resources.
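The abstract evaluates its binary-integer-programming solution against a greedy algorithm. Below is a hedged sketch of such a greedy baseline: each content item is placed on the cloud server with the lowest combined storage, processing, and bandwidth rental cost, subject to a storage-capacity constraint. The data model, prices, and largest-first ordering are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of a greedy baseline for cost-minimizing content placement.
# Prices, capacities, and the largest-first heuristic are assumptions.
from dataclasses import dataclass

@dataclass
class CloudServer:
    name: str
    storage_price: float   # rental cost per GB stored
    cpu_price: float       # rental cost per processing unit
    bw_price: float        # rental cost per GB delivered
    capacity_gb: float
    used_gb: float = 0.0

@dataclass
class Content:
    name: str
    size_gb: float
    cpu_units: float
    expected_traffic_gb: float

def placement_cost(server, item):
    return (item.size_gb * server.storage_price
            + item.cpu_units * server.cpu_price
            + item.expected_traffic_gb * server.bw_price)

def greedy_place(contents, servers):
    """Place the largest items first on the cheapest feasible server."""
    plan, total = {}, 0.0
    for item in sorted(contents, key=lambda c: c.size_gb, reverse=True):
        feasible = [s for s in servers if s.used_gb + item.size_gb <= s.capacity_gb]
        if not feasible:
            raise ValueError(f"no server can host {item.name}")
        best = min(feasible, key=lambda s: placement_cost(s, item))
        best.used_gb += item.size_gb
        plan[item.name] = best.name
        total += placement_cost(best, item)
    return plan, total
```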


Author(s):  
Kuttuva Rajendran Baskaran
Chellan Kalaiarasan

Combining Web caching with Web pre-fetching improves bandwidth utilization, reduces the load on the origin server, and reduces the delay incurred in accessing information. Web pre-fetching is the process of fetching from the origin server those Web objects that are most likely to be used in the future; the fetched objects are stored in the cache. Web caching is the process of storing popular objects "closer" to the user so that they can be retrieved faster. Many interesting works in the literature have treated Web caching and Web pre-fetching separately. In this work, a clustering technique is used for pre-fetching and an SVM-LRU technique for Web caching, and performance is measured in terms of Hit Ratio (HR) and Byte Hit Ratio (BHR). Using real data, it is demonstrated that this approach is superior to combining clustering-based pre-fetching with the traditional LRU page replacement method for Web caching.
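A hedged sketch in the spirit of the SVM-LRU technique named above: among the least recently used candidates, the cache prefers to evict objects a trained classifier predicts will not be re-accessed, falling back to plain LRU otherwise. The classifier interface, feature vector, and eviction window are assumptions, not the paper's exact design.

```python
# Hedged sketch of SVM-assisted LRU replacement; the classifier interface,
# features, and eviction window are assumptions, not the paper's design.
from collections import OrderedDict

class SvmLruCache:
    def __init__(self, capacity, classifier, window=5):
        self.capacity = capacity
        self.classifier = classifier  # any object exposing predict([[features]])
        self.window = window          # how many LRU candidates to consider
        self.store = OrderedDict()    # key -> (value, features)

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)   # mark as most recently used
        return self.store[key][0]

    def put(self, key, value, features):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = (value, features)

    def _evict(self):
        # Among the `window` least recently used entries, evict the first one
        # the classifier predicts will NOT be re-accessed (label 0); if all
        # are predicted re-accessible, fall back to plain LRU eviction.
        candidates = list(self.store.items())[:self.window]
        for key, (_, features) in candidates:
            if self.classifier.predict([features])[0] == 0:
                del self.store[key]
                return
        self.store.popitem(last=False)
```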


2013, Vol. 22 (01), pp. 1350002
Author(s):
RAMI RASHKOVITS
AVIGDOR GAL

Users of wide-area network applications are usually concerned with both response time and content validity. The common solution of client-side caching, which reuses cached content based on an arbitrary time-to-live, may not be applicable in narrow-bandwidth environments, where a heavy load is imposed on sparse transmission capacity. In such cases, some users may wait a long time for fresh content fetched from the origin server although they would settle for obsolescent content, while other users receive a cached copy that is considered valid although they would be willing to wait longer for fresher content. In this work, a new model for caching is introduced in which clients specify preferences for how long they are willing to wait and how much obsolescence they are willing to tolerate. The cache manager considers user preferences and can balance the relative importance of each dimension. A cost model determines which of three alternatives is most promising: delivering a local cached copy, delivering a copy from a cooperating cache, or delivering a fresh copy from the origin server. The proposed model is shown to be useful by experiments using both synthetic data and simulations over real Web traces. The experiments reveal that, using the proposed model, client needs can be met with reduced latency. We also show the benefit of cache cooperation in increasing hit ratios and reducing latency. A prototype of the proposed model was built and deployed in a real-world environment, demonstrating how users can set preferences for Web pages and how cache managers are affected.
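A minimal sketch of the kind of cost model the abstract describes: each delivery alternative (local cached copy, cooperating cache, fresh copy from the origin server) is scored by a weighted combination of expected latency and content age, reflecting the user's stated preferences, and the cheapest option wins. The linear cost form and the example numbers are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of a preference-weighted delivery decision; the linear cost
# form and all numbers below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    latency_s: float  # expected time to deliver this copy
    age_s: float      # obsolescence of the copy (0 for a fresh origin fetch)

def choose_delivery(options, latency_weight, staleness_weight):
    """Pick the option minimizing the user's combined latency/staleness cost."""
    def cost(o):
        return latency_weight * o.latency_s + staleness_weight * o.age_s
    return min(options, key=cost)

options = [Option("local cache", 0.05, 600),
           Option("cooperating cache", 0.20, 120),
           Option("origin server", 1.50, 0)]

# A latency-sensitive user gets the slightly stale local copy; a
# freshness-sensitive user waits for the origin server.
print(choose_delivery(options, latency_weight=1.0, staleness_weight=0.0001).name)
print(choose_delivery(options, latency_weight=1.0, staleness_weight=0.05).name)
```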


Author(s):  
R. C. Joshi
Manoj Misra
Narottam Chand

Caching at the mobile client is a promising technique that can reduce the number of uplink requests, lighten the server load, shorten query latency, and increase data availability. A cache invalidation strategy ensures that any data item cached at a mobile client has the same value as on the origin server. Traditional cache invalidation strategies rely on periodic broadcasting of invalidation reports (IRs) by the server. The IR approach suffers from long query latency, long tuning time, and poor utilization of bandwidth. The updated invalidation report (UIR) method, which supplements each IR with small reports of the most recent updates, reduces query latency. To improve upon the IR- and UIR-based strategies, this chapter presents a synchronous stateful cache maintenance technique called Update Report (UR). The proposed strategy outperforms the IR and UIR strategies by reducing query latency, minimizing disconnection overheads, optimizing use of the wireless channel, and conserving client energy.
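A hedged sketch of the IR-style invalidation that the UR technique builds on: the server periodically broadcasts a report of items updated within the last window, and each client invalidates cached items whose server-side update time is newer than the cached copy. The report format and timing are illustrative assumptions, not the chapter's exact protocol.

```python
# Hedged sketch of IR-style cache invalidation; the report format and
# window handling are assumptions, not the chapter's exact protocol.
import time

class Server:
    def __init__(self):
        self.update_log = {}                  # item_id -> last update time

    def write(self, item_id):
        self.update_log[item_id] = time.time()

    def invalidation_report(self, window_s):
        """Broadcast payload: items updated within the last window."""
        cutoff = time.time() - window_s
        return {i: t for i, t in self.update_log.items() if t >= cutoff}

class MobileClient:
    def __init__(self):
        self.cache = {}                       # item_id -> (value, cached_at)

    def apply_report(self, report):
        # Drop any cached item whose server-side update is newer than our copy.
        for item_id, updated_at in report.items():
            cached = self.cache.get(item_id)
            if cached and cached[1] < updated_at:
                del self.cache[item_id]       # stale: invalidate
```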



