Advances in Systems Analysis, Software Engineering, and High Performance Computing - Communication Infrastructures for Cloud Computing
Published By IGI Global

ISBN: 9781466645226, 9781466645233

Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading workloads from local resources reduces the energy consumption of the end hosts, but it increases the energy consumption of the transport network and the data centers. This chapter presents a detailed survey of existing mechanisms that design the Internet backbone together with data centers for energy-efficient delivery of cloud services. The survey is followed by a case study in which Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to provide cloud services with either minimum delay or maximum power savings, assuming high-performance data centers located at the core nodes of an IP-over-WDM network. The chapter concludes by summarizing the surveyed schemes in a taxonomy, including their pros and cons, followed by a discussion of research challenges and opportunities.
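The two provisioning objectives described above (minimum delay versus maximum power savings) can be illustrated with a toy greedy heuristic, not the chapter's actual MILP: each service demand is assigned to a data center at a core node, minimizing either hop count or combined transport-plus-data-center power. The topology, power figures, and demands below are illustrative assumptions.

```python
import heapq

def shortest_hops(adj, src):
    """Dijkstra over unit-weight links: hop counts from src to all nodes."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            if d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                heapq.heappush(pq, (d + 1, v))
    return dist

def provision(adj, dc_nodes, demands, dc_power, hop_power, objective):
    """Assign each (source_node, demand_Gbps) to a core-node data center.

    objective='delay' -> nearest data center (minimum hops)
    objective='power' -> data center minimizing transport + DC power
    Returns (assignments, total power) where power is per-Gbps figures
    summed over the chosen paths and data centers.
    """
    assignments, total_power = [], 0.0
    for src, gbps in demands:
        hops = shortest_hops(adj, src)
        def cost(dc):
            transport = hops[dc] * hop_power * gbps
            return hops[dc] if objective == "delay" else transport + dc_power * gbps
        dc = min(dc_nodes, key=cost)
        assignments.append((src, dc))
        total_power += hops[dc] * hop_power * gbps + dc_power * gbps
    return assignments, total_power
```

On a six-node ring with data centers at nodes 0 and 3, a demand at node 1 is routed to node 0 under the delay objective, since it is one hop away instead of two.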


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of those services on limited local resources. It provides access to remote computing resources via Web services while hiding from the end user how the IT infrastructure is managed. Alongside its novelties and advantages, the deployment of large numbers of servers and data centers introduces the challenge of high energy consumption, and transporting IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center, addressing the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to keeping data centers cool and energy-efficient, which can be grouped mainly into virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


Author(s):  
Syed Ali Haider ◽  
M. Yasin Akhtar Raja ◽  
Khurram Kazi

Access networks are usually termed "last-mile/first-mile" networks, since they connect the end user with the metro-edge network (or the exchange). This connectivity is often at data rates significantly slower than those available in metro and core networks. Metro networks span large cities, and core networks connect cities or larger regions, forming a backbone on which traffic from an entire city is transported. With the industry achieving up to 400 Gbps at core networks (and increasing those rates [Reading, 2013]), it is critical to have high-speed access networks that can cope with the tremendous bandwidth opportunity rather than act as a bottleneck. The opportunity lies in enabling services that benefit consumers as well as large organizations. For instance, moving institutional or personal data to the cloud requires a high-speed access network that can overcome delays incurred during upload and download of information. Cloud-based services, such as computing and storage services, are further enhanced by the availability of such high-speed access networks. Access networks have evolved over time, and the industry is constantly looking for ways to improve their capacity. An understanding of the fundamental technologies involved in wired and wireless access networks will therefore help the reader appreciate the full potential of the cloud and cloud access. Against this backdrop, this chapter aims to provide an understanding of the evolution of access technologies that enable the tremendous mobility potential of cloud-based services in the contemporary cloud paradigm.


Author(s):  
Federico Larumbe ◽  
Brunilde Sansò

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud computing entities: data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of current ones, since those decisions have a major impact on network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: which server should host which virtual machine. The number of servers used, the cost, and the energy consumption depend strongly on those decisions. Network traffic between VMs and users, and between VMs themselves, is also an important factor in the Virtual Machine Placement Problem. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that precisely anticipate workload dimensions. For each problem, the authors describe and analyze models that have been proposed in the literature and in industry, explain their advantages and disadvantages, and present challenging future research directions.
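One classic heuristic for the Virtual Machine Placement Problem described above is first-fit decreasing bin packing, which consolidates VMs onto as few servers as possible so that idle servers can be powered down. This is a minimal sketch of that general technique, not the chapter's specific models; the capacities and VM sizes are illustrative assumptions.

```python
def first_fit_decreasing(vm_demands, server_capacity):
    """Place VMs (given as CPU demands) onto servers of equal capacity.

    Returns a list of servers, each a list of the VM demands it hosts.
    Fewer servers used generally means lower energy consumption.
    """
    servers = []                                      # each: [remaining_capacity, [vms]]
    for vm in sorted(vm_demands, reverse=True):       # largest VMs first
        for s in servers:
            if s[0] >= vm:                            # first server that fits
                s[0] -= vm
                s[1].append(vm)
                break
        else:
            servers.append([server_capacity - vm, [vm]])  # open a new server
    return [s[1] for s in servers]
```

For example, six VMs with demands [4, 8, 1, 4, 2, 1] fit on two servers of capacity 10, whereas naive one-VM-per-server placement would keep six servers powered on.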


Author(s):  
Wesam Dawoud ◽  
Ibrahim Takouna ◽  
Christoph Meinel

Elasticity and on-demand provisioning are significant characteristics that attract many customers to host their Internet applications in the cloud. They allow quick reaction to changing application needs by adding or releasing resources in response to actual rather than projected demand. Nevertheless, neglecting the overhead of acquiring resources, which is mainly attributed to networking overhead, can result in periods of under-provisioning that degrade application performance. In this chapter, the authors study the possibility of mitigating the impact of resource provisioning overhead. They direct the study to an Infrastructure as a Service (IaaS) provisioning model, where application scalability is the customer's responsibility. The research shows that understanding the application's utilization model, together with proper tuning of the scalability parameters, can optimize the total cost and mitigate the impact of the overhead of acquiring resources on demand.
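The interplay between provisioning overhead and scalability parameters can be sketched with a toy threshold auto-scaler (an illustration, not the authors' controller): because a VM ordered now only becomes active after a boot delay, the controller scales out against a simple forecast one boot-time ahead so capacity arrives before demand does. Thresholds, per-VM capacity, and the boot delay are assumptions for illustration.

```python
def plan_capacity(load_trace, per_vm_capacity, boot_delay,
                  scale_out_at=0.8, scale_in_at=0.3, min_vms=1):
    """Return the number of VMs planned at each time step.

    A scale-out ordered at step t only takes effect at t + boot_delay,
    so the controller looks boot_delay steps ahead (naive linear
    extrapolation of the last slope) when deciding to add a VM.
    """
    vms, plan = min_vms, []
    for t, load in enumerate(load_trace):
        slope = load - load_trace[t - 1] if t > 0 else 0
        predicted = load + slope * boot_delay          # forecast at VM-ready time
        util = predicted / (vms * per_vm_capacity)
        if util > scale_out_at:
            vms += 1                                   # order a VM now, ready later
        elif util < scale_in_at and vms > min_vms:
            vms -= 1                                   # release immediately
        plan.append(vms)
    return plan
```

With a steadily rising load, the look-ahead makes the controller order the second VM while the current one is still below the raw utilization threshold, compensating for the acquisition overhead.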


Author(s):  
Chris Develder ◽  
Massimo Tornatore ◽  
M. Farhan Habib ◽  
Brigitte Jaumard

Optical networks play a crucial role in the provisioning of grid and cloud computing services. Their high bandwidth and low latency effectively enable universal user access to computational and storage resources, which can thus be fully exploited without limiting performance penalties. Given the rising importance of such cloud/grid services hosted in (remote) data centers, users (ranging from academics, over enterprises, to non-professional consumers) increasingly depend on the network connecting these data centers, which must be designed to ensure maximal service availability, i.e., to minimize interruptions. In this chapter, the authors outline the challenges of designing, i.e., dimensioning, large-scale backbone (optical) networks interconnecting data centers. This amounts to extending the classical Routing and Wavelength Assignment (RWA) algorithms to so-called anycast RWA, but it also pertains to jointly dimensioning not just the network but also the data center resources (i.e., servers). The authors specifically focus on resiliency, given the criticality of the grid/cloud infrastructure in today's businesses, and, for highly critical services, they also include specific design approaches to achieve disaster resiliency.
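The anycast idea behind the extended RWA problem can be shown in a toy sketch: a request may be served by any data center, so routing selects the nearest reachable one and, for resiliency, keeps the second-nearest as a backup. This illustrates only destination selection; wavelength assignment and the disjoint-path constraints a real design needs are omitted, and the topology below is an assumption.

```python
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbor: weight}} -> distance map from src."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def anycast_select(graph, src, data_centers):
    """Return (primary_dc, backup_dc) ranked by distance from src.

    The backup here is simply the second-nearest data center; a real
    resilient design would also require link- and node-disjoint paths.
    """
    dist = dijkstra(graph, src)
    ranked = sorted(data_centers, key=lambda dc: dist.get(dc, float("inf")))
    return ranked[0], (ranked[1] if len(data_centers) > 1 else None)
```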


Author(s):  
João Soares ◽  
Romeu Monteiro ◽  
Márcio Melo ◽  
Susana Sargento ◽  
Jorge Carapinha

The access infrastructure to the cloud is usually a major bottleneck that limits the uptake of cloud services, and attention has turned to rethinking the architecture of overall cloud service delivery. In this chapter, the authors argue that it is not sufficient to integrate the cloud domain with the operator's network domain based on current models. They envision a full integration of cloud and network, where cloud resources are no longer confined to a data center but are spread throughout the network and owned by the network operator. In such an environment, challenges arise at different levels, such as resource management, where cloud and network resources need to be managed in an integrated fashion. The authors particularly address the resource allocation problem through joint virtualization of network and cloud resources, studying and comparing an Integer Linear Programming formulation and a heuristic algorithm.
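The heuristic side of such a joint allocation problem can be sketched as follows (a hypothetical illustration, not the authors' algorithm): cloud resources are spread across network nodes, and each request, carrying a CPU demand plus bandwidth toward a user attachment point, is greedily placed on the feasible node with the lowest network cost. Node capacities, hop distances, and the cost model are assumptions.

```python
def joint_allocate(nodes, hop_dist, requests, bw_cost_per_hop=1.0):
    """Greedy joint cloud+network allocation.

    nodes:    {node: free_cpu} -- mutated to track remaining CPU
    hop_dist: {(node, user): hops between node and user attachment point}
    requests: [(cpu_demand, bandwidth, user)]

    Returns [(request_index, chosen_node)]; raises if a request cannot fit.
    """
    placement = []
    for i, (cpu, bw, user) in enumerate(requests):
        feasible = [n for n, free in nodes.items() if free >= cpu]
        if not feasible:
            raise RuntimeError("no node with enough CPU for request %d" % i)
        # cost = network cost of hauling the bandwidth to the user
        best = min(feasible, key=lambda n: bw * hop_dist[(n, user)] * bw_cost_per_hop)
        nodes[best] -= cpu
        placement.append((i, best))
    return placement
```

An ILP formulation of the same problem would decide all placements jointly and can find better global solutions, at the cost of much longer solving times; the comparison of those two approaches is what the chapter studies.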


Author(s):  
Antonio Celesti ◽  
Antonio Puliafito ◽  
Francesco Tusa ◽  
Massimo Villari

Cloud federation is paving the way toward new business scenarios in which it is possible to enforce more flexible energy management strategies than in the past. Among independent cloud providers, each one is exclusively bound to the specific energy supplier powering its datacenter. The situation changes radically if we consider a federation of cloud providers powered by both a conventional energy supplier and a renewable energy generator. In such a context, judicious relocation of computational workload among providers can lead to a global energy sustainability policy for the whole federation. In this work, the authors investigate the advantages and issues involved in achieving such a sustainable environment.
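The relocation idea can be sketched with a toy greedy policy (an illustration, not the authors' mechanism): within the federation, workload is shifted toward the provider with the most unused renewable ("green") energy before any conventional ("brown") energy is tapped. Per-job energy demands, green budgets, and capacities are illustrative assumptions.

```python
def relocate_for_sustainability(providers, jobs_kwh):
    """providers: {name: {'green': kWh of renewable budget, 'capacity': kWh total}}
    jobs_kwh:  list of per-job energy demands in kWh.

    Greedily assign each job to the provider with the largest remaining
    green headroom that still has capacity. Returns (assignments,
    brown_kwh), where brown_kwh is energy drawn from conventional supply.
    """
    used = {p: 0.0 for p in providers}
    assignments, brown = [], 0.0
    for job in jobs_kwh:
        candidates = [p for p in providers
                      if used[p] + job <= providers[p]["capacity"]]
        best = max(candidates, key=lambda p: providers[p]["green"] - used[p])
        green_left = max(providers[best]["green"] - used[best], 0.0)
        brown += max(job - green_left, 0.0)   # shortfall covered by brown energy
        used[best] += job
        assignments.append(best)
    return assignments, brown
```

With one renewable-powered and one conventionally powered provider, the policy keeps workload on the renewable site until its green budget is exhausted, which is the federation-wide sustainability behavior the chapter discusses.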


Author(s):  
Luiz F. Bittencourt ◽  
Edmundo R. M. Madeira ◽  
Nelson L. S. da Fonseca

Organizations that own a datacenter and lease resources from public clouds need to manage this heterogeneous infrastructure efficiently. To do so, automatic management of processing, storage, and networking is desirable to support the simultaneous use of private and public cloud resources, composing the so-called hybrid cloud. In this chapter, the authors introduce the hybrid cloud concept and the several management components needed to manage this infrastructure. They depict the network as a fundamental component in providing quality of service, discussing its influence on hybrid cloud management and resource allocation. Moreover, the authors present uncertainty in the network channels as a problem to be tackled to avoid application delays and unexpected costs from leasing public cloud resources. Challenging issues in hybrid cloud management are the last topic of this chapter before the concluding remarks.


Author(s):  
Taisir E.H. El-Gorashi ◽  
Ahmed Lawey ◽  
Xiaowen Dong ◽  
Jaafar Elmirghani

In this chapter, the authors investigate the power consumption associated with content distribution networks. They study, through Mixed Integer Linear Programming (MILP) models and simulations, the optimization of data centre locations in a Client/Server (C/S) system over an IP over WDM network so as to minimize the network power consumption. The authors investigate the impact of the IP over WDM routing approach, the traffic profile, and the number of data centres. They also investigate how to replicate content of different popularity into multiple data centres, and they develop a novel routing algorithm, Energy-Delay Optimal Routing (EDOR), to minimize the power consumption of the network under replication while maintaining QoS. Furthermore, they investigate the energy efficiency of BitTorrent, the most popular Peer-to-Peer (P2P) content distribution application, and compare it to the C/S system. The authors develop an MILP model to minimize the power consumption of BitTorrent over IP over WDM networks while maintaining its performance. The model results reveal that peer co-location awareness helps reduce BitTorrent cross traffic and consequently the power consumption at the network side. For real-time operation, they develop a simple heuristic based on the model insights.
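Why replication cuts network power can be shown with a simplified sketch (inspired by, but not reproducing, the chapter's EDOR algorithm): each request is served from its nearest replica, and network power scales with the hops traversed, so replicating popular content into more data centres shortens paths and lowers power. Request rates, hop counts, and per-hop wattage below are illustrative assumptions.

```python
def network_power(requests, replica_sites, hops, per_hop_watts):
    """Estimate network power for serving content from its nearest replica.

    requests:      {node: request rate}
    replica_sites: data centres holding a copy of the content
    hops:          {(node, site): hop count between node and site}
    per_hop_watts: power attributed to each hop traversed per unit rate
    """
    total = 0.0
    for node, rate in requests.items():
        nearest = min(replica_sites, key=lambda s: hops[(node, s)])
        total += rate * hops[(node, nearest)] * per_hop_watts
    return total
```

In a symmetric two-node example, adding a second replica halves the network power, which is the effect the popularity-aware replication study above quantifies at scale.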

