Next Generation Content Delivery Infrastructures
Latest Publications


TOTAL DOCUMENTS: 11 (five years: 0)

H-INDEX: 0 (five years: 0)

Published by IGI Global

ISBN: 9781466617940, 9781466617957

Author(s): Jewel Okyere-Benya, Georgios Exarchakos, Vlado Menkovski, Antonio Liotta, Paolo Giaccone

Evolving paradigms of parallel transport mechanisms are needed to satisfy the ever-increasing demand for high-performing communication systems. Parallel transport can be described as a technique for sending data simultaneously over several parallel channels. The authors' survey captures the building blocks for designing next-generation parallel transport mechanisms, first analyzing the basic structure of a transport mechanism in a point-to-point scenario. They then segment parallel transport into four categories and describe some of the most sophisticated technologies in each: multipath under point-to-point, multicast under point-to-multipoint, parallel downloading under multipoint-to-point, and peer-to-peer streaming under multipoint-to-multipoint. The survey leads the authors to conclude that high-performing parallel transport mechanisms can be achieved by integrating the most efficient technologies from these categories while building on the most efficient underlying point-to-point transport protocols.
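As a rough illustration of the multipoint-to-point category, the sketch below fetches one file in byte-range chunks from several mirrors in parallel; the mirror URLs, file size, and chunk size are hypothetical and the round-robin scheduling is deliberately naive.

```python
# Minimal sketch of multipoint-to-point parallel downloading: the same file is
# pulled in byte-range chunks from several mirrors at once. Mirror URLs and
# chunk size are illustrative assumptions, not taken from the chapter.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

MIRRORS = [  # hypothetical mirrors hosting the same file
    "http://mirror-a.example.org/file.bin",
    "http://mirror-b.example.org/file.bin",
    "http://mirror-c.example.org/file.bin",
]
CHUNK = 1 << 20  # 1 MiB per request (assumed)

def fetch_range(mirror: str, start: int, end: int) -> bytes:
    """Fetch bytes [start, end] of the file from one mirror."""
    req = Request(mirror, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req) as resp:
        return resp.read()

def parallel_download(size: int) -> bytes:
    """Split the file into chunks and pull them from the mirrors round-robin."""
    ranges = [(i, min(i + CHUNK, size) - 1) for i in range(0, size, CHUNK)]
    with ThreadPoolExecutor(max_workers=len(MIRRORS)) as pool:
        parts = pool.map(
            lambda job: fetch_range(MIRRORS[job[0] % len(MIRRORS)], *job[1]),
            enumerate(ranges),
        )
    return b"".join(parts)
```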


Author(s): Israel Pérez-Llopis, Carlos E. Palau, Manuel Esteve

Wireless video streaming, and specifically IPTV, has been a key challenge over the last decade, including providing users with access on an always-best-connected basis across different wireless access networks and with continuous, seamless mobility. Several proposals exist for delivering IPTV and on-demand video streaming to users in a wireless environment, including IP-based video streaming, DVB-H, and MediaFLO, but one of the most relevant elements is the service architecture, with all the components of the delivery process. In this work the authors propose an alternative architecture based on a wireless Content Delivery Network (CDN), optimized to distribute video to mobile terminals in order to create a triple-screen platform; given that the main wireless access networks available are WiFi, WiMAX, and 3G, this work focuses on the last two. Surrogates within the CDN architecture act as video streaming servers, while the origin servers of the content providers carry out the transcoding needed to meet individual client requirements.
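The sketch below illustrates the kind of request-routing decision such an architecture implies: a redirector picks the least-loaded surrogate that serves the client's access network, falling back to the origin otherwise. The data structures and policy are assumptions for illustration, not the chapter's algorithm.

```python
# Minimal sketch of surrogate selection in a wireless CDN: pick a streaming
# surrogate by access network and current load. Illustrative assumption only.
from dataclasses import dataclass

@dataclass
class Surrogate:
    name: str
    networks: set        # access networks it serves, e.g. {"WiMAX", "3G"}
    active_sessions: int
    capacity: int        # max concurrent streaming sessions (assumed)

def select_surrogate(client_network: str, surrogates: list) -> Surrogate:
    """Return the least-loaded surrogate that can serve the client's network."""
    candidates = [s for s in surrogates
                  if client_network in s.networks
                  and s.active_sessions < s.capacity]
    if not candidates:
        raise RuntimeError("no surrogate available; fall back to origin server")
    return min(candidates, key=lambda s: s.active_sessions / s.capacity)

surrogates = [
    Surrogate("edge-wimax-1", {"WiMAX"}, 40, 100),
    Surrogate("edge-3g-1", {"3G"}, 85, 100),
    Surrogate("edge-mixed-1", {"WiMAX", "3G"}, 20, 60),
]
print(select_surrogate("3G", surrogates).name)   # -> edge-mixed-1
```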


Author(s): Jesús M. Barbero

The spread of new systems for broadcasting and distributing multimedia content has increased the need to aggregate data and metadata with traditional video and audio content. Broadcasting chains of this type have become overwhelmed by the quantity of resources, infrastructure, and development needed to provide this information. To avoid this shortcoming, several recommendations and standards have been created for exchanging metadata between the production and distribution of taped programs. The problem lies with live programs: producers sometimes offer data to channels, but most often channels are unable to undertake the required developments. The key to this problem is cost reduction. In this work, a study is conducted of the added services that producers may provide to the media about content; a system is identified that incurs no additional communication expenses, and a model of information transfer is offered that allows new media platforms to be supplied through low-cost developments.


Author(s): Giancarlo Fortino, Carlos Calafate, Pietro Manzoni

In this work, the authors apply Raptor codes to obtain a reliable broadcast system for non-time-critical content, such as multimedia advertising and entertainment files, in urban environments. Vehicles in urban environments are characterized by variable speed and by the fact that radio signal propagation is constrained by the layout of the city. Through real experiments, the authors demonstrate that Raptor codes are the best option among the available Forward Error Correction (FEC) techniques for this purpose. Moreover, the proposed system uses traffic control techniques to classify and filter information. These techniques allow different priorities to be assigned to content so that the most important items are received first from the broadcasting antennas. In particular, performance results highlight that, as vehicle speed and/or distance from the broadcasting antenna increases, these techniques are the only choice for reliable data content delivery.
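The sketch below conveys the rateless-FEC idea behind Raptor codes with a toy LT-style code (not an actual Raptor implementation): packets are random XOR combinations of source blocks, so a vehicle can usually rebuild the file from any sufficiently large subset of received packets, regardless of which broadcast packets were lost.

```python
# Toy rateless code: each packet XORs a random subset of source blocks; the
# receiver recovers blocks by peeling. Illustrative only, not a Raptor code.
import random

def encode(blocks, n_packets, seed=1):
    """Each packet is (indices, XOR of the corresponding source blocks)."""
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, 3)
        idx = rng.sample(range(len(blocks)), degree)
        payload = 0
        for i in idx:
            payload ^= blocks[i]
        packets.append((frozenset(idx), payload))
    return packets

def decode(packets, n_blocks):
    """Peeling decoder: repeatedly resolve packets covering one unknown block."""
    known = {}
    packets = [(set(idx), val) for idx, val in packets]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for idx, val in packets:
            unknown = idx - known.keys()
            if len(unknown) == 1:
                solved = 0
                for i in idx & known.keys():
                    solved ^= known[i]
                known[unknown.pop()] = val ^ solved
                progress = True
    return [known.get(i) for i in range(n_blocks)]

source = [0x11, 0x22, 0x33, 0x44]      # toy 1-byte "blocks"
received = encode(source, 20)[5:]      # pretend the first packets were lost
# typically prints [17, 34, 51, 68]; being rateless, the sender simply keeps
# broadcasting more packets if decoding stalls
print(decode(received, len(source)))
```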


Author(s): Benjamin Molina, Carlos E. Palau, Manuel Esteve

Content Distribution Networks (CDNs) appeared a decade ago as a method for reducing latency, improving the performance experienced by Internet users, and limiting the effect of flash crowds, as well as balancing load across servers. Content distribution has since evolved in different directions (e.g. cloud computing structures and video streaming distribution infrastructures). The solution proposed in early CDNs was to place several controlled caching servers close to clients, organized and managed by a central control system. Many companies deployed their own CDN infrastructure, demonstrating its effectiveness. However, the business model of these networks has evolved from the distribution of static web objects to video streaming. Many aspects of deployment and implementation remain proprietary, evidencing the lack of a general CDN model, although the main design concepts are widely known. In this work, the authors model the structure of a CDN and the behavior of some of its parameters using queuing theory, simplifying the redirection schema and studying the elements that determine the improvement in performance. The main contribution of the work is a general expression for a CDN environment and the relationship between variables such as cache hit ratio, network latency, number of surrogates, and server capacity; this shows that a CDN outperforms the typical client/server architecture.
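As a numerical illustration of the kind of relationship the chapter derives, the sketch below compares the mean response time of a single client/server setup with that of a CDN with N surrogates and hit ratio p, assuming M/M/1 servers; the figures and the M/M/1 assumption are illustrative, not the chapter's exact expression.

```python
# Minimal queuing sketch: client/server vs. CDN response time, M/M/1 servers.
# All parameter values are assumed for illustration.

def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 queue (requires arrival < service)."""
    assert arrival_rate < service_rate, "server would be overloaded"
    return 1.0 / (service_rate - arrival_rate)

def client_server_time(lam, mu_origin, rtt_origin):
    """All requests hit the single origin server."""
    return rtt_origin + mm1_delay(lam, mu_origin)

def cdn_time(lam, mu_origin, rtt_origin,
             n_surrogates, mu_surrogate, rtt_surrogate, hit_ratio):
    """Hits are served by a nearby surrogate; misses still go to the origin."""
    lam_per_surrogate = hit_ratio * lam / n_surrogates
    lam_origin = (1 - hit_ratio) * lam
    t_hit = rtt_surrogate + mm1_delay(lam_per_surrogate, mu_surrogate)
    t_miss = rtt_surrogate + rtt_origin + mm1_delay(lam_origin, mu_origin)
    return hit_ratio * t_hit + (1 - hit_ratio) * t_miss

lam = 80.0  # requests/s for the whole system (assumed)
print(client_server_time(lam, mu_origin=100.0, rtt_origin=0.120))      # ~0.17 s
print(cdn_time(lam, mu_origin=100.0, rtt_origin=0.120,
               n_surrogates=4, mu_surrogate=50.0, rtt_surrogate=0.020,
               hit_ratio=0.8))                                          # ~0.07 s
```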


Author(s): Nadia Ranaldo, Eugenio Zimeo

Broadband network technologies have improved the bandwidth of the edge of the Internet, but its core is still a bottleneck for large file transfers. Content Delivery Networks (CDNs), built at the edge of the Internet, are able to reduce the workload on network backbones, but their scalability and network reach are often limited, especially in the case of QoS-bound delivery services. Using the emerging CDN internetworking, a CDN can dynamically exploit the resources of other cooperating CDNs to face peak loads and temporary malfunctions without violating the QoS levels negotiated with content providers. In this chapter, after a wide-ranging discussion of the problem, the authors propose an architectural schema and an algorithm, based on divisible load theory, that optimizes the delivery of large data files by satisfying an SLA agreed with a content provider while respecting the maximum budget that the delivering CDN can pay to peer CDNs in order to preserve its revenue.
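The sketch below gives a flavor of a divisible-load style split: load fractions are sized so that all selected peer CDNs finish at the same time, peers are added (cheapest first) until the SLA deadline is met, and the resulting cost is checked against the budget. It is a simplified greedy illustration, not the chapter's optimization algorithm.

```python
# Simplified divisible-load split of a large file among peer CDNs under an SLA
# deadline and a budget. Peer data and the greedy policy are assumptions.
from dataclasses import dataclass

@dataclass
class PeerCDN:
    name: str
    rate_gbps: float      # sustained delivery rate offered to the delivering CDN
    price_per_gb: float   # what the peer charges for delivered traffic

def plan_delivery(size_gb, deadline_s, budget, peers):
    peers = sorted(peers, key=lambda p: p.price_per_gb)   # cheapest first
    chosen = []
    for peer in peers:
        chosen.append(peer)
        total_rate = sum(p.rate_gbps for p in chosen) / 8   # GB/s
        finish_time = size_gb / total_rate
        if finish_time <= deadline_s:
            # equal-finish-time split: load proportional to each peer's rate
            shares = {p.name: size_gb * (p.rate_gbps / 8) / total_rate
                      for p in chosen}
            cost = sum(shares[p.name] * p.price_per_gb for p in chosen)
            if cost > budget:
                return None        # SLA reachable only over budget
            return finish_time, cost, shares
    return None                    # even all peers together miss the deadline

peers = [PeerCDN("cdn-a", 2.0, 0.02),
         PeerCDN("cdn-b", 4.0, 0.05),
         PeerCDN("cdn-c", 1.0, 0.01)]
print(plan_delivery(size_gb=500, deadline_s=900, budget=20.0, peers=peers))
```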


Author(s): Zhiming Zhao, Paola Grosso, Jeroen van der Ham, Cees Th. A.M. de Laat

Moving large quantities of data between distributed parties is a frequent operation in data-intensive applications, such as collaborative digital media development. These transfers often place high quality requirements on the network services, especially when they involve user interaction or require real-time processing of large volumes of data. The best-effort services provided by IP-routed networks give limited guarantees on delivery performance. Advanced networks such as hybrid networks make it feasible for high-level applications, such as workflows, to request network paths and service provisioning. However, the quality of network services has so far rarely been considered when composing and executing workflow processes; applications tune execution quality by selecting only optimal software services and computing resources, neglecting the network components. In this chapter, the authors provide an overview of this research domain and introduce a system called NEtWork QoS Planner (NEWQoSPlanner) to support the inclusion of network services in high-level workflow applications.
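The sketch below illustrates the planning step such a system performs: given the bandwidth and latency a workflow transfer needs, it picks (and notionally reserves) a candidate network path that satisfies both. The path data and the selection rule are assumptions for illustration, not NEWQoSPlanner's actual interface.

```python
# Minimal sketch of QoS-aware path selection for a workflow data transfer.
# Paths and policy are hypothetical; "reserved" stands in for provisioning.
from dataclasses import dataclass

@dataclass
class NetworkPath:
    name: str
    bandwidth_gbps: float
    latency_ms: float
    reserved: bool = False

def plan_transfer(data_gb, max_duration_s, max_latency_ms, paths):
    """Return the lowest-latency free path that can move the data in time."""
    required_gbps = data_gb * 8 / max_duration_s
    for path in sorted(paths, key=lambda p: p.latency_ms):
        if (not path.reserved
                and path.bandwidth_gbps >= required_gbps
                and path.latency_ms <= max_latency_ms):
            path.reserved = True      # stand-in for an actual provisioning call
            return path
    return None                       # fall back to best-effort IP routing

paths = [
    NetworkPath("lightpath-ams-chi", 10.0, 95.0),
    NetworkPath("lightpath-ams-gen", 40.0, 18.0),
    NetworkPath("routed-default", 1.0, 30.0),
]
chosen = plan_transfer(data_gb=2000, max_duration_s=600, max_latency_ms=50,
                       paths=paths)
print(chosen.name if chosen else "best effort")   # -> lightpath-ams-gen
```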


Author(s): Giancarlo Fortino, Wilma Russo

Technologies and applications that enable multi-party, multimedia communications are becoming more and more pervasive in every facet of daily life: from distance learning to remote job training, from peer-to-peer conferencing to distributed virtual meetings. To effectively use the evolving Internet infrastructure as a ubiquitously accessible platform for the delivery of multi-faceted multimedia services, not only are advances in multimedia communications required, but novel software infrastructures must also be designed to cope with network and end-system heterogeneity, improve the management and control of distributed multimedia services, and deliver sustainable QoS levels to end users. In this chapter, the authors propose a holistic approach based on agent-oriented middleware that integrates active services, mobile event-driven agents, and multimedia internetworking technology for the component-based prototyping, dynamic deployment, and management of Internet-based real-time multimedia services. The proposed approach is enabled by a distributed software infrastructure (named Mobile Agent Multimedia Space, MAMS) based on event-driven mobile agents and multimedia coordination spaces. In particular, a multimedia coordination space is a component-based architecture consisting of components (players, streamers, transcoders, dumpers, forwarders, archivers, GUI adapters, multimedia timers) that provide basic real-time multimedia services. The event-driven mobile agents act as orchestrators of the multimedia space and are capable of migrating across the network to dynamically create and deploy complex media services. The effectiveness and potential of the proposed approach are described through a case study involving the on-demand deployment and management of an adaptive cooperative playback service.
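The toy sketch below conveys the coordination-space idea: components register in a shared space and an orchestrating agent (here a plain, non-mobile object) looks them up and wires a playback chain when an event arrives. Class names and the event model are illustrative assumptions, not the MAMS API.

```python
# Toy coordination space: components register under roles; an agent reacts to
# an event by composing a streamer -> transcoder -> player chain. Illustrative.
class MultimediaSpace:
    def __init__(self):
        self._components = {}          # role -> component instance

    def register(self, role, component):
        self._components[role] = component

    def lookup(self, role):
        return self._components[role]

class Streamer:
    def stream(self, media):
        return f"stream({media})"

class Transcoder:
    def transcode(self, flow, codec):
        return f"{flow}->transcode({codec})"

class Player:
    def play(self, flow):
        return f"play[{flow}]"

class PlaybackAgent:
    """Orchestrates components when a 'playback requested' event arrives."""
    def __init__(self, space):
        self.space = space

    def on_event(self, event):
        if event["type"] == "playback_requested":
            flow = self.space.lookup("streamer").stream(event["media"])
            flow = self.space.lookup("transcoder").transcode(flow, event["codec"])
            return self.space.lookup("player").play(flow)

space = MultimediaSpace()
space.register("streamer", Streamer())
space.register("transcoder", Transcoder())
space.register("player", Player())
agent = PlaybackAgent(space)
print(agent.on_event({"type": "playback_requested",
                      "media": "lecture.mp4", "codec": "h264"}))
```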


Author(s): Mukaddim Pathan, James Broberg, Rajkumar Buyya

Extending the traditional Content Delivery Network (CDN) model to use cloud computing is highly appealing. It allows the development of a truly on-demand CDN architecture based on standards designed to ease interoperability, scalability, performance, and flexibility. To better explain the system model, necessity, and perceived advantages of Cloud-based CDNs, this chapter provides extensive coverage and a comparative analysis of the state of the art. It also provides a case study on the MetaCDN Content Delivery Cloud, along with highlights of empirical performance observations from its world-wide distributed platform.
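As a rough illustration of the Cloud-based CDN idea, the sketch below assumes content is replicated into storage of several cloud providers and redirects each client to the nearest replica by great-circle distance; the provider names, regions, and selection metric are assumptions, not MetaCDN's actual deployment logic.

```python
# Minimal sketch of replica selection in a Cloud-based CDN: redirect a client
# to the closest of several cloud-hosted replicas. Regions/URLs are invented.
import math

REPLICAS = {                                # region -> (lat, lon, replica URL)
    "us-east":  (38.9, -77.0, "http://us-east.example-cloud-a.com/video.mp4"),
    "eu-west":  (53.3,  -6.2, "http://eu-west.example-cloud-b.com/video.mp4"),
    "ap-south": (19.1,  72.9, "http://ap-south.example-cloud-c.com/video.mp4"),
}

def nearest_replica(client_lat, client_lon):
    """Redirect the client to the replica with the smallest great-circle distance."""
    def distance(region):
        lat, lon, _ = REPLICAS[region]
        phi1, phi2 = math.radians(client_lat), math.radians(lat)
        dphi = phi2 - phi1
        dlmb = math.radians(lon - client_lon)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))   # km
    best = min(REPLICAS, key=distance)
    return REPLICAS[best][2]

print(nearest_replica(48.8, 2.3))   # a client near Paris -> the eu-west replica
```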


Author(s): Alfredo Cuzzocrea, Marcel Karnstedt, Manfred Hauswirth, Kai-Uwe Sattler, Roman Schmidt

Range queries are a very powerful tool in a wide range of data management systems and are vital to a multitude of applications. The hierarchy of structured overlay systems can be exploited to process them efficiently, enabling applications and techniques based on range queries in large-scale distributed information systems. At the same time, driven by the rapid development of the Web, applications based on the P2P paradigm are gaining more and more interest, and such systems have started to evolve towards standard database functionality in terms of complex query processing support. This goes far beyond the simple key lookups provided by standard distributed hash table (DHT) systems, which makes estimating the completeness of query answers a crucial challenge. Unfortunately, due to the limited global knowledge and the usually best-effort characteristics of such systems, deciding on the completeness of query results, e.g., knowing when a query has finished or how many results are still missing, is very challenging. This information is urgently needed not only by the user issuing queries, but also for implementing sophisticated and efficient processing techniques that build on it. In this chapter, the authors propose a method for solving this task. They discuss the applicability and quality of the estimations, present an implementation and evaluation for the P-Grid system, and show how to adapt the technique to other overlays. The authors also discuss the semantics of completeness for complex queries in P2P database systems and propose methods based on the notion of routing graphs for estimating the number of expected query answers. Finally, they discuss probabilistic guarantees for the estimated values and evaluate the proposed methods through an implemented system.
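The sketch below illustrates the underlying completeness idea in simplified form: as peers responsible for sub-ranges of the queried key range respond, the covered fraction serves both as a progress indicator and as a basis for extrapolating the expected total number of answers. It is an illustrative simplification, not the chapter's routing-graph estimator.

```python
# Simplified completeness estimation for a P2P range query: track what fraction
# of the queried key range has been answered and extrapolate the result count.
# Assumes answered sub-ranges do not overlap each other. Illustrative only.

def completeness(query_range, answered_ranges):
    """Fraction of the queried key range already covered by responding peers."""
    lo, hi = query_range
    covered = 0.0
    for a_lo, a_hi in answered_ranges:
        overlap = min(hi, a_hi) - max(lo, a_lo)
        covered += max(0.0, overlap)
    return min(1.0, covered / (hi - lo))

def estimate_total_answers(results_so_far, query_range, answered_ranges):
    """Extrapolate the expected number of answers from the covered fraction."""
    done = completeness(query_range, answered_ranges)
    if done == 0.0:
        return None                 # nothing answered yet, no estimate possible
    return results_so_far / done

query = (0.20, 0.60)                # range query over a [0, 1) key space
answered = [(0.20, 0.35), (0.50, 0.60)]
print(completeness(query, answered))                    # -> 0.625
print(estimate_total_answers(120, query, answered))     # -> 192.0
```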

