Cloud-TM

2015
pp. 749-781
Author(s):
João Barreto
Pierangelo Di Sanzo
Roberto Palmieri
Paolo Romano

By shifting data and computation away from local servers towards very large-scale, worldwide data centers, Cloud Computing promises very compelling benefits for both cloud consumers and cloud service providers: freeing corporations from large IT capital investments via usage-based pricing schemes, drastically lowering barriers to entry and capital costs; leveraging the economies of scale for both service providers and users of the cloud; facilitating deployment of services; attaining unprecedented scalability levels. However, the promise of infinite scalability catalyzing much of the recent hype about Cloud Computing is still menaced by one major pitfall: the lack of programming paradigms and abstractions capable of bringing the power of parallel programming into the hands of ordinary programmers. This chapter describes Cloud-TM, a self-optimizing middleware platform aimed at simplifying the development and administration of applications deployed on large-scale Cloud Computing infrastructures.


Author(s):
Olexander Melnikov
Konstantin Petrov
Igor Kobzev
Viktor Kosenko
...

The article considers the development and implementation of cloud services in the work of government agencies. A classification for choosing cloud service providers is offered, which can serve as a basis for decision making. The basics of cloud computing technology are analyzed. The COVID-19 pandemic has highlighted the benefits of cloud services for remote work; government agencies at all levels need to move to cloud infrastructure. The article analyzes the prospects of cloud computing in Ukraine as the basis for developing e-governance, which is necessary for the rapid provision of quality services on a flexible, large-scale, and economical technological base. Moving electronic information exchange to the cloud makes it possible to reach a wide range of users at relatively low material cost. Automating processes and transferring them to the cloud environment speeds up service delivery and minimizes the time citizens need to obtain certain information. The article also lists the risks that exist in the transition to cloud services and the shortcomings that may arise in the process of using them.


2009
Vol 19 (03)
pp. 435-449
Author(s):
CHRISTINE MORIN
YVON JÉGOU
JÉRÔME GALLARD
PIERRE RITEAU

The emerging cloud computing model has recently gained a lot of interest both from commercial companies and from the research community. XtreemOS is a distributed operating system for large-scale wide-area dynamic infrastructures spanning multiple administrative domains. XtreemOS, which is based on the Linux operating system, has been designed as a Grid operating system providing native support for virtual organizations. In this paper, we discuss the positioning of XtreemOS technologies with regard to cloud computing. More specifically, we investigate a scenario where XtreemOS could help users take full advantage of clouds in a global environment including their own resources and cloud resources. We also discuss how the XtreemOS system could be used by cloud service providers to manage their underlying infrastructure. This study shows that the XtreemOS distributed operating system is a highly relevant technology in the new era of cloud computing where future clouds seamlessly span multiple bare hardware providers and where customers extend their IT infrastructure by provisioning resources from different cloud service providers.


Author(s):  
Manoj V. Thomas
K. Chandrasekaran

Cloud Computing has become the popular paradigm for accessing various scalable and on-demand computing services over the internet. Nowadays, individual Cloud Service Providers (CSPs) offering specialized services to customers collaborate to form the Cloud Federation, in order to reap the real benefits of Cloud Computing. By collaborating, the member CSPs of the federation achieve better resource utilization and Quality of Service (QoS), thereby increasing their business prospects. When a CSP in the Cloud Federation runs out of resources and needs to offload customer requests to other CSP(s), identifying a suitable partner is a challenging task due to the lack of global coordination among them. In this paper, we propose the design and implementation of an efficient partner selection mechanism in the Cloud Federation, using the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) methods, while also considering the trust values of the various CSPs in the federation. The AHP method is used to calculate the weights of the QoS parameters, and the TOPSIS method then uses these weights to rank the various CSPs in the Cloud Federation according to the user requirements. Simulation results show the effectiveness of this approach in efficiently selecting trustworthy partners in large-scale federations to ensure the required QoS for cloud consumers.
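
To make the weighting-and-ranking pipeline concrete, below is a minimal sketch of AHP-derived weights feeding a TOPSIS ranking. The QoS criteria, the pairwise-comparison judgements, and the CSP scores are hypothetical, and the paper's handling of trust values and federation-level coordination is not reproduced here.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise-comparison matrix
    using the normalised-column (arithmetic mean) method."""
    pairwise = np.asarray(pairwise, dtype=float)
    return (pairwise / pairwise.sum(axis=0)).mean(axis=1)

def topsis_rank(decision_matrix, weights, benefit_mask):
    """Rank alternatives (rows) by TOPSIS closeness to the ideal solution.
    benefit_mask[j] is True if criterion j is 'higher is better'."""
    X = np.asarray(decision_matrix, dtype=float)
    # 1. Vector-normalise each criterion column, then apply the AHP weights.
    V = weights * X / np.linalg.norm(X, axis=0)
    # 2. Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    # 3. Euclidean distances and relative closeness coefficient.
    d_best  = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    closeness = d_worst / (d_best + d_worst)
    return np.argsort(-closeness), closeness

# Hypothetical example: 3 candidate CSPs scored on availability (%), trust,
# response time (ms, lower is better) and cost ($/h, lower is better).
pairwise = [[1, 3, 5, 7],
            [1/3, 1, 3, 5],
            [1/5, 1/3, 1, 3],
            [1/7, 1/5, 1/3, 1]]
w = ahp_weights(pairwise)
csps = [[99.9, 0.8, 120, 0.10],
        [99.5, 0.9, 100, 0.12],
        [99.0, 0.7,  80, 0.08]]
order, scores = topsis_rank(csps, w, benefit_mask=np.array([True, True, False, False]))
print("Ranking (best first):", order, "closeness:", scores.round(3))
```

The closeness coefficient orders the candidate CSPs; the paper's actual criteria set and the way trust is integrated may differ from this sketch.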


Data
2018
Vol 3 (4)
pp. 38
Author(s):
Altaf Hussain
Muhammad Aleem

Developers of resource-allocation and scheduling algorithms share test datasets (i.e., benchmarks) to enable others to compare the performance of newly developed algorithms. However, real cloud datasets are often hard to acquire due to users’ data confidentiality issues and the policies maintained by Cloud Service Providers (CSPs). Accessibility of large-scale test datasets, depicting the realistic high-performance computing requirements of cloud users, is very limited. A publicly available real cloud dataset will therefore significantly encourage other researchers to compare and benchmark their applications using an open-source benchmark. To meet these objectives, the contemporary state of the art has been scrutinized to explore real workload behavior in the Google cluster traces. Starting from smaller- to moderate-size cloud computing infrastructures, the dataset generation process is demonstrated using the Monte Carlo simulation method to produce the Google Cloud Jobs (GoCJ) dataset based on the analysis of the Google cluster traces. With this article, the dataset is made publicly available to enable other researchers in the field to investigate and benchmark their scheduling and resource-allocation schemes for the cloud. The GoCJ dataset is archived and available on the Mendeley Data repository.
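
As an illustration of this kind of Monte Carlo dataset generation, the sketch below draws job sizes from an empirical bucket distribution. The bucket ranges and proportions are placeholders, not the distribution the authors derived from the Google cluster traces.

```python
import random

# Hypothetical job-size buckets (in million instructions) with the share of
# jobs assumed in each bucket; the real GoCJ proportions come from the
# authors' analysis of the Google cluster traces, these numbers are made up.
SIZE_BUCKETS = [
    ((15_000, 55_000),   0.20),   # small jobs
    ((59_000, 99_000),   0.40),   # medium jobs
    ((101_000, 135_000), 0.30),   # large jobs
    ((150_000, 337_500), 0.10),   # extra-large jobs
]

def generate_gocj_like_dataset(num_jobs, seed=None):
    """Monte Carlo sketch: pick a bucket according to its observed share,
    then draw a uniform job size inside that bucket's range."""
    rng = random.Random(seed)
    ranges, shares = zip(*SIZE_BUCKETS)
    jobs = []
    for _ in range(num_jobs):
        lo, hi = rng.choices(ranges, weights=shares, k=1)[0]
        jobs.append(rng.randint(lo, hi))
    return jobs

if __name__ == "__main__":
    dataset = generate_gocj_like_dataset(1000, seed=42)
    print(len(dataset), "job sizes generated, e.g.:", dataset[:5])
```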


2015
Vol 15 (4)
pp. 6643-6648
Author(s):
Prabhpreet Kaur
Monika Sachdeva

Cloud computing is an increasingly popular paradigm for accessing computing resources. In practice, cloud service providers tend to offer services that can be grouped into three categories: software as a service, platform as a service, and infrastructure as a service. Cloud computing represents a shift away from computing as a product that is purchased, to computing as a service that is delivered to consumers over the internet from large-scale data centers – or ‘clouds’. This paper discusses some of the research challenges for cloud computing from an enterprise or organizational perspective, and puts them in context by reviewing the existing body of literature in cloud computing. Various research challenges relating to the following topics are discussed: the organizational changes brought about by cloud computing; the economic and organizational implications of its utility billing model; the security, legal and privacy issues that cloud computing raises. It is important to highlight these research challenges because cloud computing is not simply about a technological improvement of data centers but a fundamental change in how IT is provisioned and used. This type of research has the potential to influence wider adoption of cloud computing in enterprise, and in the consumer market too.


Author(s):  
Nitin Vishnu Choudhari
Dr. Ashish B Sasankar

Abstract – Today, security is the topmost problem in the cloud computing environment, and it causes serious discomfort to governance and end-users. Numerous security solutions and policies are available; however, they are practically ineffective in use. Most security solutions are centered on cloud technology and cloud service providers only, and no consideration has been given to network, access, and device security at the end-user level. The discomfort at the end-user level has been left untreated: the security of the various public and private networks, the variety of devices used by end-users, and the accessibility and capacity of end-users have not been addressed. This points to a strong need to modify the security architecture so that data security and secured service delivery are provided at all levels, reducing the security gap between cloud service providers and end-users. This paper studies and analyzes the security architecture of the cloud environment of the Govt. of India and suggests modifications to the security architecture in line with the changing scenario and future needs, so that secured service delivery is ensured from the central level down to the end-user level.
Keywords: Cloud Security, Security in GI Cloud, Cloud Security measures, Security Assessment in GI Cloud, Proposed Security for GI cloud


Author(s):  
VINITHA S P
GURUPRASAD E

Cloud computing has been envisioned as the next-generation architecture of IT enterprise. It moves the application software and databases to centralized large data centers, where the management of data and services may not be fully trustworthy. This unique paradigm brings out many new security challenges, such as maintaining the correctness and integrity of data in the cloud. Integrity of cloud data may be lost due to unauthorized access, modification or deletion of data. Lack of data availability may also be caused by the cloud service providers (CSPs) themselves: in order to increase their profit margin by reducing cost, a CSP may discard rarely accessed data without this being detected in a timely fashion. To overcome the above issues, flexible distributed storage, token utilization and signature creation are used to ensure the integrity of data; an auditing mechanism assists in maintaining the correctness of data and in locating and identifying the server where the data has been corrupted; and dependability and availability of data are achieved through distributed storage of data in the cloud. Further, in order to ensure authorized access to cloud data, an admin module was proposed in our previous conference paper, which prevents unauthorized users from accessing data; a selective storage scheme based on different parameters of cloud servers was also proposed in the previous paper to provide efficient storage of data in the cloud. To provide more efficiency, this paper adds support for dynamic data operations such as updating, deletion and addition of data.
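
The token-and-audit idea can be illustrated with a much-simplified sketch: the data owner pre-computes one verification token per block before the blocks are dispersed, and the auditor later re-checks a challenged block against its token to localize corruption. Plain HMACs are used here for brevity; this is not the paper's exact token or signature construction, and the block size, key handling, and distributed-storage layer are all illustrative assumptions.

```python
import hmac, hashlib, os

BLOCK_SIZE = 4096  # bytes; illustrative choice, not taken from the paper

def precompute_tokens(data: bytes, key: bytes):
    """Owner side: derive one verification token (HMAC) per data block
    before the blocks are dispersed to the cloud servers."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def audit_block(stored_block: bytes, token: bytes, key: bytes) -> bool:
    """Auditor side: re-compute the HMAC over the block returned by a server
    and compare it with the pre-computed token; a mismatch localizes the
    server holding corrupted data."""
    recomputed = hmac.new(key, stored_block, hashlib.sha256).digest()
    return hmac.compare_digest(recomputed, token)

# Usage sketch
key = os.urandom(32)
data = os.urandom(3 * BLOCK_SIZE)
tokens = precompute_tokens(data, key)
tampered = bytearray(data)
tampered[10] ^= 0xFF  # simulate corruption of the block held by one server
print(audit_block(data[:BLOCK_SIZE], tokens[0], key))              # True
print(audit_block(bytes(tampered[:BLOCK_SIZE]), tokens[0], key))   # False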


Author(s):  
Вячеслав Вікторович Фролов

The article is devoted to the analysis of modern approaches that ensure the security of cloud services. Since cloud computing is one of the fastest growing areas of information technology, it is extremely important to ensure the safety and reliability of the processes occurring in the clouds and to secure the interaction between the client and the provider of cloud services, given that fears about data loss and data compromise are among the main reasons why some companies do not move their computations to the clouds. The object of research and analysis of this work is cloud services provided by various cloud service providers. The aim of this work is to compare existing approaches that provide information security for cloud services, as well as to offer a new approach based on the principle of diversity. There are many approaches that ensure their security, using both traditional and cloud-specific techniques. The multi-cloud approach is one of the most promising strategies for improving reliability by reserving cloud resources on the servers of various cloud service providers. It is shown that it is necessary to use diversity to ensure the reliability and safety of critical system components. The principle of diversity is to use a unique version of each resource thanks to a special combination of cloud computing provider, geographical location of data centers, cloud service presentation model, and cloud infrastructure deployment model. The differences between cloud providers, and which combinations of services are preferable to others in terms of performance, are discussed in detail. In addition, best practices for securing cloud resources are reviewed. As a result, this paper concludes that cloud computing suffers from insufficient security and reliability, and shows how diversity of cloud services can be used to reduce threats in order to avoid common-cause failures and, consequently, the loss of confidential data or system downtime.
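
A toy sketch of the diversity principle follows: replicas are chosen so that no two share a provider, region, service model, or deployment model, making it unlikely that a single common cause takes all of them down. The attribute values and the greedy selection rule are illustrative assumptions, not the article's concrete method.

```python
from itertools import product

# Illustrative attribute values; the real choice set in the article may differ.
PROVIDERS  = ["AWS", "Azure", "GCP"]
REGIONS    = ["eu-west", "us-east", "ap-south"]
SERVICE    = ["IaaS", "PaaS", "SaaS"]
DEPLOYMENT = ["public", "private", "hybrid"]

def diverse_replica_set(k):
    """Greedy sketch of the diversity principle: pick k configurations such
    that no two selected replicas share any attribute value, reducing the
    chance of a common-cause failure."""
    chosen = []
    for cfg in product(PROVIDERS, REGIONS, SERVICE, DEPLOYMENT):
        if all(all(a != b for a, b in zip(cfg, prev)) for prev in chosen):
            chosen.append(cfg)
        if len(chosen) == k:
            break
    return chosen

for replica in diverse_replica_set(3):
    print(replica)
```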


2021
Author(s):
Kyle Chard

The computational landscape is littered with islands of disjoint resource providers including commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers. These providers are independent and isolated due to a lack of communication and coordination; they are also often proprietary, without standardised interfaces, protocols, or execution environments. The lack of standardisation and global transparency has the effect of binding consumers to individual providers. With the increasing ubiquity of computation providers, there is an opportunity to create federated architectures that span both Grid and Cloud computing providers, effectively creating a global computing infrastructure. In order to realise this vision, secure and scalable mechanisms to coordinate resource access are required. This thesis proposes a generic meta-scheduling architecture to facilitate federated resource allocation in which users can provision resources from a range of heterogeneous (service) providers. Efficient resource allocation is difficult in large-scale distributed environments due to the inherent lack of centralised control. In a Grid model, local resource managers govern access to a pool of resources within a single administrative domain but have only a local view of the Grid and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level and are able to submit jobs to multiple resource managers; however, they are most often deployed on a per-client basis and are therefore concerned only with their own allocations, essentially competing against one another. In a federated environment, the widespread adoption of utility computing models seen in commercial Cloud providers has re-motivated the need for economically aware meta-schedulers. Economies provide a way to represent the different goals and strategies that exist in a competitive distributed environment. The use of economic allocation principles effectively creates an open service market that provides efficient allocation and incentives for participation. The major contributions of this thesis are the architecture and prototype implementation of the DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO)-based distributed economic meta-scheduler in which members of the VO collaboratively allocate services or resources. Providers joining the VO contribute obligation services to the VO. These contributed services are in effect membership “dues” and are used in the running of the VO's operations, for example allocation, advertising, and general management. DRIVE is independent of any particular class of provider (Service, Grid, or Cloud) or specific economic protocol. This independence enables allocation in federated environments composed of heterogeneous providers in vastly different scenarios. Protocol independence facilitates the use of arbitrary protocols based on specific requirements and infrastructural availability. For instance, within a single organisation where internal trust exists, users can achieve maximum allocation performance by choosing a simple economic protocol. In a global utility Grid no such trust exists; the same meta-scheduler architecture can instead be used with a secure protocol which ensures the allocation is carried out fairly in the absence of trust. DRIVE establishes contracts between participants as the result of allocation. A contract describes the individual requirements and obligations of each party. A unique two-stage contract negotiation protocol is used to minimise the effect of allocation latency. In addition, due to the cooperative nature of the architecture and the use of secure, privacy-preserving protocols, DRIVE can be deployed in a distributed environment without requiring large-scale dedicated resources. This thesis presents several other contributions related to meta-scheduling and open service markets. To overcome the perceived performance limitations of economic systems, four high-utilisation strategies have been developed and evaluated. Each strategy is shown to improve occupancy, utilisation and profit using synthetic workloads based on a production Grid trace. The gRAVI service wrapping toolkit is presented to address the difficulty of web-enabling existing applications. The gRAVI toolkit has been extended for this thesis such that it creates economically aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring developer input. The final contribution of this thesis is the definition and architecture of a Social Cloud: a dynamic Cloud computing infrastructure composed of virtualised resources contributed by members of a Social network. The Social Cloud prototype is based on DRIVE and highlights the ease with which dynamic DRIVE markets can be created and used in different domains.
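
To ground the idea of economically aware meta-scheduling, here is a deliberately minimal stand-in for one allocation round: providers advertise a price and a capacity, and the meta-scheduler places jobs with the cheapest providers first. DRIVE's actual VO-based, protocol-independent negotiation and its two-stage contract protocol are far richer; the Bid fields and the greedy rule below are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price: float       # price asked per job
    capacity: int      # free slots advertised by the provider

def allocate(job_count, bids):
    """Toy reverse-auction allocation: assign jobs to the cheapest providers
    first, respecting each provider's advertised capacity."""
    allocation = {}
    remaining = job_count
    for bid in sorted(bids, key=lambda b: b.price):
        if remaining == 0:
            break
        take = min(remaining, bid.capacity)
        allocation[bid.provider] = take
        remaining -= take
    if remaining:
        raise RuntimeError(f"{remaining} jobs could not be placed")
    return allocation

bids = [Bid("grid-site-a", 0.12, 40), Bid("cloud-x", 0.09, 25), Bid("cloud-y", 0.15, 100)]
print(allocate(50, bids))   # {'cloud-x': 25, 'grid-site-a': 25}
```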

