A Proposed Architecture for Placement of Cloud Data Centre in Software Defined Network Environment

Author(s):  
Mohit Mathur ◽  
Mamta Madan ◽  
Mohit Chandra Saxena ◽  
...  

Emerging technologies like IoT (Internet of Things) and wearable devices such as Smart Glass, Smart Watch, Smart Bracelet and Smart Plaster produce delay-sensitive traffic. Cloud computing services are emerging as supportive technologies by providing the necessary resources. Most services, IoT among them, require minimal delay, which is still an open area of research. This paper is an effort towards minimizing the delay in delivering cloud traffic by geographically localizing it through the establishment of cloud mini data centers. The proposed architecture consists of software-defined-network-supported mini data centers connected together. The paper also suggests the use of segment routing for stitching the transport paths between data centers through software defined network (SDN) controllers.
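
The abstract does not spell out the controller logic, so the following is only a rough sketch, assuming a toy delay-weighted topology and hypothetical node segment identifiers (SIDs): an SDN controller computes the lowest-delay path between two mini data centers and translates it into the segment list the ingress node would push.

```python
from heapq import heappush, heappop

# Hypothetical topology: link delays (ms) between mini data centers and transit nodes.
LINKS = {
    ("dc_delhi", "p1"): 2, ("p1", "p2"): 3, ("p2", "dc_mumbai"): 2,
    ("dc_delhi", "p3"): 5, ("p3", "dc_mumbai"): 4,
}
# Hypothetical node segment identifiers (SIDs) known to the controller.
NODE_SID = {"dc_delhi": 16001, "p1": 16002, "p2": 16003,
            "p3": 16004, "dc_mumbai": 16005}

def neighbours(node):
    for (a, b), cost in LINKS.items():
        if a == node:
            yield b, cost
        elif b == node:
            yield a, cost

def lowest_delay_path(src, dst):
    """Plain Dijkstra over the delay-weighted topology."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        delay, node, path = heappop(queue)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in neighbours(node):
            if nxt not in seen:
                heappush(queue, (delay + cost, nxt, path + [nxt]))
    return None, []

def segment_list(path):
    """Translate the computed path into the SID stack the ingress node would push."""
    return [NODE_SID[n] for n in path[1:]]  # the ingress itself needs no SID

delay, path = lowest_delay_path("dc_delhi", "dc_mumbai")
print(f"path={path} delay={delay} ms sids={segment_list(path)}")
```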

Author(s):  
Rashmi Rai ◽  
G. Sahoo

The ever-rising demand for computing services and the humongous amount of data generated every day have led to the mushrooming of power-craving data centers across the globe. These large-scale data centers consume huge amounts of power and emit a considerable amount of CO2. There has been significant work towards reducing energy consumption and carbon footprints using several heuristics for the dynamic virtual machine consolidation problem. Here we have tried to solve this problem a bit differently by making use of utility functions, which are widely used in economic modeling to represent user preferences. Our approach uses a metaheuristic genetic algorithm whose fitness is evaluated with the utility function to consolidate virtual machine migration within the cloud environment. The initial results, compared with the existing state of the art, show a marginal but significant improvement in energy consumption as well as in overall SLA violations.
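
A minimal sketch of how a utility function can act as the GA fitness for VM consolidation; the VM demands, host capacities, weights and utility shape below are illustrative assumptions rather than the authors' formulation.

```python
import random

# Hypothetical inputs: VM CPU demands and host capacities (normalized units).
VM_DEMAND = [0.2, 0.4, 0.1, 0.3, 0.25, 0.15]
HOST_CAP = [1.0, 1.0, 1.0]

def utility(placement):
    """Utility-style fitness: reward few active hosts (energy) and
    penalize overloaded hosts (a proxy for SLA violations)."""
    load = [0.0] * len(HOST_CAP)
    for vm, host in enumerate(placement):
        load[host] += VM_DEMAND[vm]
    active = sum(1 for l in load if l > 0)
    overload = sum(max(0.0, l - cap) for l, cap in zip(load, HOST_CAP))
    energy_term = 1.0 - active / len(HOST_CAP)   # fewer active hosts -> higher utility
    sla_term = -10.0 * overload                  # heavy penalty for overload
    return energy_term + sla_term

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    """Tiny genetic algorithm: tournament selection, one-point crossover, mutation."""
    pop = [[random.randrange(len(HOST_CAP)) for _ in VM_DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            return max(random.sample(pop, 3), key=utility)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, len(VM_DEMAND))
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.randrange(len(HOST_CAP))
            nxt.append(child)
        pop = nxt
    return max(pop, key=utility)

best = evolve()
print("placement:", best, "utility:", round(utility(best), 3))
```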



2018 ◽  
Vol 12 (2) ◽  
pp. 1-25 ◽  
Author(s):  
Malay Kumar ◽  
Manu Vardhan

The growth of cloud computing services and their proliferation in business and academia have created enormous opportunities for computation in third-party data management settings. This computing model allows the client to outsource large computations to cloud data centers, where the cloud server conducts the computation on the client's behalf. However, data privacy and computational integrity are the biggest concerns for the client. In this article, the authors present an algorithm for secure outsourcing of the covariance matrix, which is the basic building block of many automatic classification systems. The algorithm first performs efficient transformations to protect privacy and to allow verification of the result produced by the cloud server. Analytical and experimental analysis shows that the algorithm simultaneously meets the design goals of privacy, verifiability and efficiency; the proposed algorithm is found to be about 7.8276 times more efficient than a direct implementation.
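
The abstract does not disclose the transformation itself; the sketch below illustrates one generic way such an outsourcing protocol can be structured: the client masks the centred data with a random invertible matrix, the server computes the covariance of the masked data, and the client unmasks and spot-checks the result. This is an assumption-laden illustration, not the authors' algorithm, and it does not reproduce the reported 7.8276x speed-up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Client-side data: n samples, d features (values are arbitrary for the demo).
X = rng.normal(size=(1000, 8))
Xc = X - X.mean(axis=0)                      # centre the data locally

# --- Client: disguise the data with a random invertible matrix Q ---
Q = rng.normal(size=(8, 8))
while abs(np.linalg.det(Q)) < 1e-6:          # make sure Q is invertible
    Q = rng.normal(size=(8, 8))
Y = Xc @ Q                                   # only Y is sent to the cloud

# --- Cloud server: heavy computation on the disguised data ---
C_masked = Y.T @ Y / (len(Y) - 1)            # covariance of the masked data

# --- Client: unmask and verify ---
Q_inv = np.linalg.inv(Q)
C = Q_inv.T @ C_masked @ Q_inv               # recover the covariance of the original data

# Random-direction check; in a real protocol the client would not recompute the
# full covariance locally -- this line only confirms the demo's algebra.
r = rng.normal(size=8)
assert np.allclose(C @ r, np.cov(Xc, rowvar=False) @ r, atol=1e-6)
print("recovered covariance verified on a random direction")
```

Because the server only ever sees the masked matrix Y, it learns neither the raw samples nor the true covariance, while the client's own work is limited to masking, unmasking and the spot check.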


Cloud computing, as one of the emerging technologies, is required to meet the increasing, on-demand need for resources such as computing power, storage and networking. It provides a pool of resources and offers the mechanism to run various applications on Cloud Data Centers (CDCs). Cloud data centers are usually distributed across multiple sites, and the provision of distributed data centers creates a need for their management and maintenance; such circumstances lead to networking capabilities being implemented on a larger scale. Software-defined networking (SDN) is the programming paradigm that, together with NFV orchestration, provides modular and dynamic network support for the cloud data centers established over geographically separated sites. In this paper, a comprehensive study has been conducted to highlight the need for the integration of these two emerging and versatile technologies. We also cover the challenges, issues and benefits which have to be considered in the underlying architecture, models and devices.


2017 ◽  
Author(s):  
Mohammad Noormohammadpour ◽  
Cauligi S. Raghavendra

Datacenters are the main infrastructure on top of which cloud computing services are offered. Such infrastructure may be shared by a large number of tenants and applications generating a spectrum of datacenter traffic. Delay-sensitive applications and applications with specific Service Level Agreements (SLAs) generate deadline-constrained flows, while other applications initiate flows that are desired to be delivered as early as possible. As a result, datacenter traffic is a mix of two types of flows: deadline and regular. There are several scheduling policies for either traffic type, with a focus on minimizing completion times or deadline miss rate. In this report, we apply several scheduling policies to the mixed traffic scenario while varying the ratio of regular to deadline traffic. We consider FCFS (First Come First Serve), SRPT (Shortest Remaining Processing Time) and Fair Sharing as deadline-agnostic approaches, and a combination of Earliest Deadline First (EDF) with either FCFS or SRPT as deadline-aware schemes. In addition, for the latter, we consider both cases of prioritizing deadline traffic (Deadline First) and prioritizing regular traffic (Deadline Last). We study both light-tailed and heavy-tailed flow size distributions and measure mean, median and tail flow completion times (FCT) for regular flows, along with Deadline Miss Rate (DMR) and average lateness for deadline flows. We also consider two operation regimes: lightly loaded (low utilization) and heavily loaded (high utilization). We find that the performance of deadline-aware schemes is highly dependent on the fraction of deadline traffic. With light-tailed flow sizes, we find that FCFS performs better in terms of tail times and average lateness, while SRPT performs better in average times and deadline miss rate. For heavy-tailed flow sizes, except for tail times, SRPT performs better in all other metrics.
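
To make the comparison concrete, here is a toy batch simulation under strong simplifying assumptions (all flows present at time zero, one unit-rate bottleneck, non-preemptive service, synthetic flow sizes and deadlines); it mirrors the spirit of the study rather than its exact setup or traffic model.

```python
import random

random.seed(1)

# Synthetic flows: (size, deadline or None). Sizes are heavy-tailed for variety;
# roughly half the flows carry a deadline.
flows = [(random.paretovariate(1.5),
          random.uniform(5, 50) if random.random() < 0.5 else None)
         for _ in range(200)]

def schedule(flows, order_key):
    """Serve flows one at a time on a unit-rate link in the given order;
    return mean FCT of regular flows and deadline miss rate of deadline flows."""
    t, fcts, misses, ndl = 0.0, [], 0, 0
    for size, deadline in sorted(flows, key=order_key):
        t += size                        # completion time of this flow
        if deadline is None:
            fcts.append(t)
        else:
            ndl += 1
            misses += t > deadline
    return sum(fcts) / len(fcts), misses / ndl

policies = {
    "FCFS": lambda f: 0,                 # stable sort keeps arrival order
    "SRPT": lambda f: f[0],              # shortest flow first
    # Deadline First: EDF among deadline flows, then SRPT among regular flows.
    "EDF-first + SRPT": lambda f: (f[1] is None, f[1] if f[1] is not None else f[0]),
}
for name, key in policies.items():
    mean_fct, dmr = schedule(flows, key)
    print(f"{name:>18}: mean FCT(regular)={mean_fct:8.2f}  DMR={dmr:.2%}")
```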


Author(s):  
Chris Develder ◽  
Massimo Tornatore ◽  
M. Farhan Habib ◽  
Brigitte Jaumard

Optical networks play a crucial role in the provisioning of grid and cloud computing services. Their high bandwidth and low latency effectively enable universal user access to computational and storage resources, which can thus be fully exploited without limiting performance penalties. Given the rising importance of such cloud/grid services hosted in (remote) data centers, the various users (ranging from academics and enterprises to non-professional consumers) are increasingly dependent on the network connecting these data centers, which must be designed to ensure maximal service availability, i.e., to minimize interruptions. In this chapter, the authors outline the challenges encompassing the design, i.e., dimensioning, of large-scale backbone (optical) networks interconnecting data centers. This amounts to extending the classical Routing and Wavelength Assignment (RWA) algorithms to so-called anycast RWA, but also pertains to jointly dimensioning not just the network but also the data center resources (i.e., servers). The authors specifically focus on resiliency, given the criticality of the grid/cloud infrastructure in today's businesses, and, for highly critical services, they also include specific design approaches to achieve disaster resiliency.
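
As a small illustration of the anycast idea (hypothetical topology, demand and capacities), the sketch below jointly picks a destination data center and a lightpath for each request, using shortest-path routing with first-fit wavelength assignment under the wavelength-continuity constraint; real anycast RWA dimensioning relies on much richer ILP formulations and heuristics, including protection paths.

```python
from itertools import count

# Hypothetical fibre topology (bidirectional links) with 4 wavelengths per link.
EDGES = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "E"), ("E", "D"), ("B", "E")]
WAVELENGTHS = 4
DATA_CENTERS = {"C", "D"}                 # anycast: any data center may serve a request

free = {tuple(sorted(e)): set(range(WAVELENGTHS)) for e in EDGES}

def paths(src, dst, path=None):
    """Enumerate simple paths src -> dst (fine for a toy topology)."""
    path = path or [src]
    if src == dst:
        yield path
        return
    for a, b in EDGES:
        for u, v in ((a, b), (b, a)):
            if u == src and v not in path:
                yield from paths(v, dst, path + [v])

def provision(src):
    """Anycast RWA: take the shortest path to *any* data center that still has a
    common free wavelength on every hop (wavelength continuity, first fit)."""
    candidates = sorted((p for dc in DATA_CENTERS for p in paths(src, dc)), key=len)
    for p in candidates:
        hops = [tuple(sorted((p[i], p[i + 1]))) for i in range(len(p) - 1)]
        common = set.intersection(*(free[h] for h in hops))
        if common:
            w = min(common)               # first-fit wavelength
            for h in hops:
                free[h].discard(w)
            return p, w
    return None, None                     # request blocked

for req in count(1):
    path, wl = provision("A")
    if path is None:
        print(f"request {req}: blocked (no wavelength left)")
        break
    print(f"request {req}: path={path} wavelength={wl}")
```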


2013 ◽  
Vol 3 (4) ◽  
pp. 13-27 ◽  
Author(s):  
Jitendra Singh ◽  
Vikas Kumar

Outage in cloud computing services is a critical issue and is primarily attributed to single-data-center connectivity. To address cloud outages, this work proposes a model for the subscription to, and selection of, more than one data center. The data center is selected by a broker at the user's end itself. Placing the broker at the user's end reduces the overhead at the provider's end; as a result, the performance of the cloud data center improves. To select an appropriate data center, the broker takes feedback from the available data centers and selects one of them. During selection, the status (up/down) of each data center at that particular time is also considered. In case of an outage at one data center, another can be selected from the available list. The broker also facilitates homogeneous use of the cloud by allotting load to less congested data centers. Experimental results revealed that the multiple-data-center approach is helpful not only in countering outages (as another data center can be selected through the broker) but also in reducing the usage cost.
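
A minimal client-side broker sketch matching the described behaviour: poll status and feedback from the subscribed data centers, skip those that are down, and pick the least congested one. The status fields and the random feedback generator are placeholders for whatever a real deployment would report.

```python
import random

DATA_CENTERS = ["dc-east", "dc-west", "dc-central"]

def poll_status(dc_name):
    """Placeholder for the feedback a real broker would poll from each data center."""
    return {"up": random.random() > 0.1,          # 10% chance the data center is down
            "load": random.uniform(0.0, 1.0)}      # current congestion level

def select_data_center():
    """Client-side broker: drop data centers that are down, then pick the
    least congested one; returns None only if every data center is out."""
    feedback = {dc: poll_status(dc) for dc in DATA_CENTERS}
    alive = {dc: fb for dc, fb in feedback.items() if fb["up"]}
    if not alive:
        return None                                # total outage
    return min(alive, key=lambda dc: alive[dc]["load"])

for request_id in range(3):
    dc = select_data_center()
    print(f"request {request_id} -> {dc or 'all data centers down'}")
```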


2018 ◽  
Vol 56 (2) ◽  
pp. 118-126 ◽  
Author(s):  
Rajat Chaudhary ◽  
Gagangeet Singh Aujla ◽  
Neeraj Kumar ◽  
Joel J.P.C. Rodrigues

2018 ◽  
Vol 12 (6) ◽  
pp. 143 ◽  
Author(s):  
Osama Harfoushi ◽  
Ruba Obiedat

Cloud computing is the delivery of computing resources over the Internet. Examples include, among others, servers, storage, big data, databases, networking, software, and analytics. Institutions that provide cloud computing services are called providers. Cloud computing services were primarily developed to help IT professionals through application development, big data storage and recovery, website hosting, on-demand software delivery, and analysis of significant data patterns that could compromise a system's security. Given the widespread availability of cloud computing, many companies have begun to adopt it because it is cost-efficient, reliable, scalable, and can be accessed from anywhere at any time. The most demanding feature of a cloud computing system is its security platform, which uses layers of cryptographic algorithms to enhance protection against unauthorized access, modification, and denial of service. For the most part, cloud security uses algorithms to ensure the preservation of big data stored on remote servers. This study proposes a methodology to reduce concerns about data privacy by using cloud computing cryptography algorithms to improve the security of various platforms and to ensure customer satisfaction.
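
The study does not name specific algorithms, so the following is only a generic illustration of client-side encryption before data reaches the cloud, using the `cryptography` package's Fernet construction (AES-128-CBC with an HMAC-SHA256 integrity tag); treat it as an example of the idea, not the proposed methodology.

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# Key generated and kept on the client; only ciphertext goes to the cloud store.
key = Fernet.generate_key()
f = Fernet(key)

record = b'{"customer_id": 17, "notes": "..."}'
ciphertext = f.encrypt(record)            # what would be uploaded to the provider

# Later: download and decrypt; InvalidToken signals tampering or a wrong key.
try:
    assert f.decrypt(ciphertext) == record
    print("round trip ok, ciphertext length:", len(ciphertext))
except InvalidToken:
    print("ciphertext was modified or the key is wrong")
```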


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Jun Ma ◽  
Minshen Wang ◽  
Jinbo Xiong ◽  
Yongjin Hu

Cloud data, whose ownership is separated from their administration, usually contain users' private information, especially in the fifth-generation mobile communication (5G) environment, because data collected from various smart mobile devices inevitably contain personal information. If such data are not securely deleted in time, or if the result of data deletion cannot be verified after their expiration, serious issues arise, such as unauthorized access and data privacy disclosure. This seriously affects the security of cloud data and hinders the development of cloud computing services. In this paper, we propose a novel secure data deletion and verification (SDVC) scheme based on CP-ABE to achieve fine-grained secure data deletion and deletion verification for cloud data. Based on the idea of the access policy in CP-ABE, we construct an attribute association tree to implement fast attribute revocation and key re-encryption, achieving fine-grained control over secure key deletion. Furthermore, we build a rule transposition algorithm to generate random data blocks and combine overwriting technology with a Merkle hash tree to implement secure ciphertext deletion and to generate a validator, which is then used to verify the result of data deletion. We prove the security of the SDVC scheme under the standard model and verify its correctness and effectiveness through theoretical analysis and extensive simulation results.
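
The overwriting-plus-Merkle-tree building block can be sketched generically as below (block size, helper names and the way the validator is checked are illustrative assumptions; the CP-ABE key-management part of the SDVC scheme is omitted entirely).

```python
import hashlib
import os

BLOCK = 4096

def merkle_root(blocks):
    """Merkle hash tree over the block hashes (duplicate the last node on odd levels)."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def overwrite_and_prove(blocks):
    """'Delete' the ciphertext by overwriting every block with random data and
    return (overwritten blocks, validator = Merkle root over the new content)."""
    wiped = [os.urandom(len(b)) for b in blocks]
    return wiped, merkle_root(wiped)

# Toy ciphertext split into fixed-size blocks.
ciphertext = os.urandom(5 * BLOCK)
blocks = [ciphertext[i:i + BLOCK] for i in range(0, len(ciphertext), BLOCK)]

wiped, validator = overwrite_and_prove(blocks)   # done by the cloud server
assert merkle_root(wiped) == validator           # verifier recomputes the root
assert merkle_root(blocks) != validator          # original content no longer matches
print("deletion validator:", validator.hex()[:16], "...")
```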

