Dimensioning Resilient Optical Grid/Cloud Networks

Author(s):  
Chris Develder ◽  
Massimo Tornatore ◽  
M. Farhan Habib ◽  
Brigitte Jaumard

Optical networks play a crucial role in the provisioning of grid and cloud computing services. Their high bandwidth and low latency effectively enable universal user access to computational and storage resources, which can thus be exploited fully and without crippling performance penalties. Given the rising importance of such cloud/grid services hosted in (remote) data centers, the various users (ranging from academics and enterprises to non-professional consumers) are increasingly dependent on the network connecting these data centers, which must therefore be designed to ensure maximal service availability, i.e., to minimize interruptions. In this chapter, the authors outline the challenges of designing, i.e., dimensioning, large-scale backbone (optical) networks interconnecting data centers. This amounts to extending the classical Routing and Wavelength Assignment (RWA) algorithms to so-called anycast RWA, but also to jointly dimensioning not just the network but also the data center resources (i.e., servers). The authors specifically focus on resiliency, given the criticality of the grid/cloud infrastructure in today’s businesses, and, for highly critical services, they also include specific design approaches to achieve disaster resiliency.
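In anycast RWA, only the source of a request is fixed; the destination may be any data center able to serve it, and the network must find both a route and a wavelength that is free on every link of that route. The following minimal Python sketch illustrates that idea under simplifying assumptions (an adjacency-dict topology, a fixed wavelength count, a nearest-data-center policy, and first-fit wavelength assignment); it illustrates the anycast RWA concept only and is not the authors' dimensioning algorithm.

```python
# Minimal anycast RWA sketch: route a request to the nearest data center and
# assign the first wavelength that is free on every hop (wavelength continuity).
# Topology format, wavelength count, and the nearest-DC policy are assumptions.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over an adjacency dict {node: {neighbor: weight}}."""
    dist, prev, seen, heap = {src: 0.0}, {}, set(), [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

def anycast_rwa(graph, src, data_centers, in_use, num_wavelengths=40):
    """Serve a request from `src` at whichever data center is closest."""
    candidates = [(shortest_path(graph, src, dc), dc) for dc in data_centers]
    (path, cost), dc = min(candidates, key=lambda c: c[0][1])
    if path is None:
        return None                                   # no data center reachable
    links = list(zip(path, path[1:]))
    for w in range(num_wavelengths):                  # first-fit wavelength assignment
        if all(w not in in_use.get(link, set()) for link in links):
            for link in links:
                in_use.setdefault(link, set()).add(w)
            return dc, path, w
    return None                                       # blocked: no continuous wavelength

# Usage on a toy 4-node topology with two candidate data centers.
g = {"u": {"v": 1}, "v": {"u": 1, "dcA": 1, "dcB": 3}, "dcA": {"v": 1}, "dcB": {"v": 3}}
print(anycast_rwa(g, "u", ["dcA", "dcB"], in_use={}))
```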

2012 ◽  
Vol 3 (2) ◽  
pp. 51-59 ◽  
Author(s):  
Nawsher Khan ◽  
A. Noraziah ◽  
Elrasheed I. Ismail ◽  
Mustafa Mat Deris ◽  
Tutut Herawan

Cloud computing is fundamentally altering expectations for how and when computing, storage, and networking resources should be allocated, managed, and consumed, and it allows users to utilize services globally. Owing to its powerful computing and storage, high availability and security, easy accessibility and adaptability, reliable scalability and interoperability, and cost and time effectiveness, cloud computing is in high demand in today’s fast-growing business world. A client, organization, or trade adopting the emerging cloud environment can choose a suitable infrastructure, platform, software, and network resource for its business, each of which has exclusive features and advantages. The authors first develop a comprehensive classification for describing cloud computing architecture. This classification helps in surveying several existing cloud computing services developed by various projects globally, such as Amazon, Google, Microsoft, Sun, and Force.com; using the survey results, the authors identify similarities and differences among cloud computing architecture approaches.


Author(s):  
Marcus Tanque ◽  
Harry J Foxwell

Big data and cloud computing are transforming information technology. These comparable technologies are the result of dramatic developments in computational power, virtualization, network bandwidth, availability, storage capability, and cyber-physical systems. The crossroads of these two areas involves the use of cloud computing services and infrastructure to support large-scale data analytics research, providing relevant solutions and future possibilities for supply chain management. This chapter broadens the current posture of cloud computing and big data as they relate to supply chain solutions. It focuses on areas of significant technological and scientific advancement that are likely to enhance supply chain systems. The evaluation emphasizes the security challenges and mega-trends affecting cloud computing and big data analytics pertaining to supply chain management.


Author(s):  
Rashmi Rai ◽  
G. Sahoo

The ever-rising demand for computing services and the humongous amount of data generated every day have led to the mushrooming of power-craving data centers across the globe. These large-scale data centers consume huge amounts of power and emit considerable amounts of CO2. There has been significant work toward reducing energy consumption and carbon footprints using several heuristics for the dynamic virtual machine consolidation problem. Here we try to solve this problem somewhat differently by making use of utility functions, which are widely used in economic modeling to represent user preferences. Our approach also uses a meta-heuristic genetic algorithm, whose fitness is evaluated with the utility function to consolidate virtual machine migration within the cloud environment. Initial results, compared with the existing state of the art, show a marginal but significant improvement in energy consumption as well as in overall SLA violations.
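As a rough illustration of how a utility function can serve as the fitness of a genetic algorithm for VM consolidation, the sketch below combines a linear host power model with an overcommitment-based SLA proxy. The weights, power model, capacities, and GA parameters are assumptions made for demonstration, not the authors' actual formulation.

```python
# Sketch: utility-function fitness inside a simple GA for VM-to-host placement.
import random

HOSTS, VMS = 4, 10
VM_CPU = [random.uniform(5, 25) for _ in range(VMS)]   # CPU demand per VM (%)
HOST_CAP = 100.0                                        # CPU capacity per host (%)
P_IDLE, P_MAX = 70.0, 250.0                             # assumed linear power model (W)

def host_loads(placement):
    load = [0.0] * HOSTS
    for vm, host in enumerate(placement):
        load[host] += VM_CPU[vm]
    return load

def energy(placement):
    # Idle hosts are assumed switched off after consolidation.
    return sum(P_IDLE + (P_MAX - P_IDLE) * min(l, HOST_CAP) / HOST_CAP
               for l in host_loads(placement) if l > 0)

def sla_violations(placement):
    # Overcommitted CPU is used as a crude proxy for SLA violations.
    return sum(max(0.0, l - HOST_CAP) for l in host_loads(placement))

def utility(placement, w_energy=0.6, w_sla=0.4):
    return -(w_energy * energy(placement) + w_sla * 100 * sla_violations(placement))

def genetic_search(generations=200, pop_size=30, mutation_rate=0.1):
    pop = [[random.randrange(HOSTS) for _ in range(VMS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=utility, reverse=True)             # fitness = utility function
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, VMS)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(VMS)] = random.randrange(HOSTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=utility)

best = genetic_search()
print("best placement:", best, "utility:", round(utility(best), 2))
```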


Author(s):  
Sejal Atit Bhavsar ◽  
Kirit J Modi

Fog computing is a paradigm that extends cloud computing services to the edge of the network. Fog computing provides data, storage, compute, and application services to end users. The distinguishing characteristic of fog computing is its proximity to the end users: application services are hosted at the network edge, e.g., on routers and switches. The goal of fog computing is to improve efficiency and reduce the amount of data that needs to be transported to the cloud for analysis, processing, and storage. Due to the heterogeneous characteristics of fog computing, several issues arise, such as security, fault tolerance, and resource scheduling and allocation. To better understand fault tolerance, we highlight its basic concepts by examining the different fault tolerance techniques, i.e., reactive, proactive, and hybrid. In addition to fault tolerance, we also discuss how to balance resource utilization and security in fog computing. Furthermore, to overcome platform-level issues of fog computing, we present a hybrid fault tolerance model using resource management and security.
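To make the three technique families concrete, the sketch below contrasts a reactive retry wrapper, a proactive health-based node selection, and a hybrid that combines both. Node names, thresholds, and the failure model are illustrative assumptions, not the model proposed in the chapter.

```python
# Sketch of reactive, proactive, and hybrid fault tolerance on fog nodes.
import random
import time

def reactive_retry(task, retries=3, delay=0.1):
    """Reactive: act only after a failure is observed, by retrying the task."""
    for attempt in range(1, retries + 1):
        try:
            return task()
        except RuntimeError:
            time.sleep(delay * attempt)        # simple back-off before retrying
    raise RuntimeError("task failed after %d attempts" % retries)

def proactive_select(nodes, health):
    """Proactive: avoid likely failures by scheduling only on healthy nodes."""
    healthy = [n for n in nodes if health(n) > 0.8]
    return healthy or nodes                    # fall back to all nodes if none pass

def hybrid_execute(task, nodes, health):
    """Hybrid: pick a healthy fog node first, then retry reactively on failure."""
    for node in proactive_select(nodes, health):
        try:
            return reactive_retry(lambda: task(node))
        except RuntimeError:
            continue                           # fail over to the next candidate node
    raise RuntimeError("all fog nodes failed")

# Usage with a flaky task on hypothetical fog nodes.
def flaky_task(node):
    if random.random() < 0.3:
        raise RuntimeError("transient failure on " + node)
    return "result from " + node

print(hybrid_execute(flaky_task, ["edge-router-1", "edge-switch-2"], lambda n: random.random()))
```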


Author(s):  
Punit Gupta ◽  
Ravi Shankar Jha

With the increase of information sharing over the internet or an intranet, we require techniques to increase the availability of shared resources for the large number of users trying to access them at the same time. Many techniques have been proposed to make access easier and more secure in a distributed environment. Information retrieval plays an important role in serving the most relevant data with the least waiting time; this chapter discusses such techniques for information retrieval and sharing over the cloud infrastructure. Cloud computing services provide better performance in terms of resource sharing and resource access, with high reliability and scalability under high load.


2017 ◽  
Vol 27 (3) ◽  
pp. 605-622 ◽  
Author(s):  
Marcin Markowski

In recent years, elastic optical networks have been perceived as a prospective choice for future optical networks, due to better adjustment and utilization of optical resources than is the case with traditional wavelength division multiplexing networks. In the paper we investigate the elastic architecture as the communication network for distributed data centers. We address the problem of optimizing routing and spectrum assignment for large-scale computing systems based on an elastic optical architecture; in particular, we concentrate on anycast user-to-data-center traffic optimization. We assume that the computational resources of data centers are limited. For this offline problem we formulate an integer linear programming model and propose a few heuristics, including a meta-heuristic algorithm based on a tabu search method. We report computational results, presenting the quality of the approximate solutions and the efficiency of the proposed heuristics, and we also analyze and compare several data center allocation scenarios.
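The core spectrum-assignment step of RSA can be illustrated with a simple first-fit search for a block of contiguous frequency slices that is free on every link of the chosen path (spectrum contiguity and continuity). The slice count and link occupancy below are assumed for demonstration; the paper's ILP model and tabu search heuristic are not reproduced here.

```python
# Sketch of first-fit spectrum assignment along a fixed path in an elastic optical network.
NUM_SLICES = 320  # e.g. a 4 THz band divided into 12.5 GHz slices (assumed)

def first_fit_spectrum(path_links, demand_slices, occupied):
    """`occupied` maps a link to the set of slice indices already in use."""
    for start in range(NUM_SLICES - demand_slices + 1):
        block = range(start, start + demand_slices)
        # The same contiguous block must be free on every link of the path.
        if all(s not in occupied.get(link, set()) for link in path_links for s in block):
            for link in path_links:
                occupied.setdefault(link, set()).update(block)
            return start                      # index of the first slice in the block
    return None                               # request blocked on this path

# Usage: an anycast request from node "u" served by data center "dcA" over two links.
occupied = {("u", "v"): {0, 1, 2}, ("v", "dcA"): set()}
print(first_fit_spectrum([("u", "v"), ("v", "dcA")], demand_slices=4, occupied=occupied))
```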


Author(s):  
Mohit Mathur ◽  
Mamta Madan ◽  
Mohit Chandra Saxena ◽  
...  

Emerging technologies like the IoT (Internet of Things) and wearable devices such as smart glasses, smart watches, smart bracelets, and smart plasters produce delay-sensitive traffic. Cloud computing services are emerging as supportive technologies by providing resources. Most services, like IoT, require minimal delay, which is still an area of research. This paper is an effort toward minimizing the delay in delivering cloud traffic by geographically localizing the cloud traffic through the establishment of cloud mini data centers. The proposed architecture suggests software-defined-network-supported mini data centers connected together. The paper also suggests the use of segment routing for stitching the transport paths between data centers through Software Defined Network controllers.
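A sketch of the path-stitching idea: the controller expresses a transport path between two mini data centers as a segment-routing SID list that the head-end pushes onto delay-sensitive traffic. The node names, SID values, and controller-side function are hypothetical, intended only to illustrate how segment routing encodes waypoints.

```python
# Illustrative controller-side stitching of a transport path as a segment list.
TOPOLOGY_SIDS = {"dc-east": 16001, "core-1": 16002, "core-2": 16003, "dc-west": 16004}

def build_segment_list(hops):
    """Translate an ordered list of waypoints into the label stack pushed at ingress."""
    return [TOPOLOGY_SIDS[h] for h in hops]

def stitch_path(src_dc, dst_dc, preferred_core):
    """Force delay-sensitive traffic through a chosen core node (source routing)."""
    hops = [preferred_core, dst_dc]            # only waypoints are listed, not every hop
    return {"headend": src_dc, "label_stack": build_segment_list(hops)}

print(stitch_path("dc-east", "dc-west", preferred_core="core-1"))
# {'headend': 'dc-east', 'label_stack': [16002, 16004]}
```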


The proliferation of cloud computing has opened new and attractive offerings for consumers. Cloud service providers promote and market packages of cloud computing services that cater to diverse opportunities and user applications. While this has obvious advantages, certain factors are a cause for concern. Monitoring the underlying infrastructure that supports the entire fabric of cloud computing requires a great deal of attention, and it becomes especially significant when the performance and robustness of the cloud service on offer are taken into consideration. Although research has been conducted into various cloud computing monitoring techniques, there is still room for an integrated cloud monitoring solution that can fulfill the requirements of cloud administrators and ensure optimal performance of the underlying infrastructure of a cloud computing network. In this paper, we propose a unified monitoring model that is essentially a composite framework involving the hardware and network layers. Studies conducted during our experiments suggest that our unified cloud monitoring approach can significantly aid in reducing overall carbon emissions while helping to meet compliance and audit norms by ensuring that the underlying cloud infrastructure is monitored closely.
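A minimal sketch of the composite-framework idea, assuming a single poller that gathers hardware-layer (CPU, power) and network-layer (throughput) samples per host and flags threshold breaches; the metric sources and thresholds are placeholders rather than the paper's actual model.

```python
# Sketch of a unified monitor combining hardware-layer and network-layer samples.
import time
from dataclasses import dataclass

@dataclass
class Sample:
    host: str
    cpu_util: float      # %
    power_watts: float
    net_mbps: float

def collect(host):
    """Placeholder for per-host agents (e.g. IPMI/SNMP exporters in a real deployment)."""
    return Sample(host=host, cpu_util=42.0, power_watts=180.0, net_mbps=610.0)

def unified_monitor(hosts, cpu_limit=85.0, net_limit=900.0):
    samples = [collect(h) for h in hosts]
    alerts = [s.host for s in samples if s.cpu_util > cpu_limit or s.net_mbps > net_limit]
    total_power = sum(s.power_watts for s in samples)   # an input to carbon accounting
    return {"timestamp": time.time(), "samples": samples,
            "alerts": alerts, "total_power_watts": total_power}

print(unified_monitor(["node-01", "node-02"]))
```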


2014 ◽  
Vol 556-562 ◽  
pp. 6262-6265 ◽  
Author(s):  
Yuan Zhang ◽  
Lei Huang

Cloud computing has been developing rapidly in recent years. Everyone is talking about cloud computing, which provides large-scale services to replace local computers and software. Cloud computing has already become the development trend in present-day IT circles. This paper explains the basics of cloud computing, analyzes the differences among cloud computing services, and points out the characteristics of those services.

