An Enhanced Data Storage Technique on Cloud Computing

Author(s):  
Dr. Kamlesh Sharma ◽  
Nidhi Garg

Exercising a collection of numerous, easily accessible sources and resources over the internet is termed Cloud Computing. A cloud storage system is essentially a large-scale storage system that consists of many independent storage servers. Recent years have seen enormous change and adoption in cloud computing, so security has become one of its major concerns. Because cloud computing relies on third-party systems, security is a concern not only for customers but also for service providers. In this paper we discuss cryptography, i.e., encrypting messages into certain forms; its algorithms, including symmetric and asymmetric algorithms and hashing; its architecture; and the advantages of cryptography.
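As a quick illustration of the three primitives this abstract names, the sketch below shows symmetric encryption, asymmetric encryption, and hashing using Python's third-party `cryptography` package and the standard library; it is a minimal example for orientation, not the storage technique proposed in the paper.

```python
# Illustrative sketch (not the paper's scheme): symmetric encryption,
# asymmetric encryption, and hashing.
import hashlib

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"data to store in the cloud"

# Symmetric: one shared key both encrypts and decrypts.
sym_key = Fernet.generate_key()
token = Fernet(sym_key).encrypt(message)
assert Fernet(sym_key).decrypt(token) == message

# Asymmetric: the public key encrypts, the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ciphertext = private_key.public_key().encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert private_key.decrypt(ciphertext, padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(), label=None)) == message

# Hashing: a one-way fingerprint used for integrity checks.
print(hashlib.sha256(message).hexdigest())
```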


Author(s):  
R. Santha Maria Rani ◽  
Dr. Lata Ragha

Cloud computing provides elastic computing and storage resources to users. Because the data is not under the user's control, data security has become one of the main concerns in using cloud computing resources. To improve data reliability and availability, public data auditing schemes are used to verify outsourced data storage without retrieving the whole data. However, users may not fully trust cloud service providers (CSPs), because they can sometimes be dishonest. Therefore, to maintain the integrity of cloud data, many auditing schemes have been proposed. In this paper, various existing auditing schemes and their consequences are analysed and discussed.

Keywords: Third Party Auditor (TPA), Cloud Service Provider (CSP), Merkle Hash Tree (MHT), Provable Data Possession (PDP), Dynamic Hash Table (DHT).
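To make the Merkle Hash Tree (MHT) named in the keywords concrete, here is a minimal Python sketch of how a verifier holding only a root hash can check one outsourced block via a sibling path, without retrieving the whole data; it is a toy illustration, not any particular scheme from the survey.

```python
# Minimal Merkle hash tree sketch (illustrative only).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)

# The verifier keeps only `root`; the CSP proves possession of block 1 by
# returning the block plus its sibling path: h(b0) and h(h(b2)+h(b3)).
sibling0 = h(blocks[0])
sibling1 = h(h(blocks[2]) + h(blocks[3]))
recomputed = h(h(sibling0 + h(blocks[1])) + sibling1)
assert recomputed == root
```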


Repositor ◽  
2020 ◽  
Vol 2 (2) ◽  
pp. 165
Author(s):  
Miftakhudin Kusuma Wijaya ◽  
Zamah Sari ◽  
Mahar Faiqurahman

Cloud storage is a form of cloud computing centred on data storage media. In cloud storage there is a risk of data loss, on a small or large scale, or of the data becoming entirely inaccessible. Such failures can result from natural disasters, human error, or ageing hardware. To overcome these problems, data backup and data synchronisation are performed. Replication is the process used to copy or distribute data from a service provider to a backup device; two kinds of replication are used here, MySQL database replication and Rsync data replication. To keep the cloud storage providing resources to users, a failover method is added: failover is the switch from a service provider device to a backup device when an unexpected problem occurs. This study explains how to build and implement a cloud storage infrastructure with replication for backup and data synchronisation, and with failover to provide real-time availability of service resources to users.
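As a rough sketch of the failover-plus-replication idea described above, the Python fragment below health-checks a primary host and mirrors its data directory to a backup with rsync over SSH; the hostnames, port, and path are hypothetical, and the paper's actual setup (MySQL replication plus Rsync) runs on its own infrastructure.

```python
# Illustrative failover sketch (hypothetical hosts and paths; assumes
# rsync and SSH access are available).
import socket
import subprocess

PRIMARY, BACKUP = "primary.example.org", "backup.example.org"
DATA_DIR = "/var/cloud-storage/data/"

def is_alive(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Health check: can we open a TCP connection to the MySQL port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sync_to_backup() -> None:
    """Mirror the data directory to the backup host with rsync over SSH."""
    subprocess.run(
        ["rsync", "-az", "--delete", DATA_DIR, f"{BACKUP}:{DATA_DIR}"],
        check=True,
    )

active = PRIMARY if is_alive(PRIMARY) else BACKUP   # failover decision
if active == PRIMARY:
    sync_to_backup()   # keep the backup in step while the primary is healthy
print("serving from", active)
```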


2020 ◽  
Vol 17 (9) ◽  
pp. 4411-4418
Author(s):  
S. Jagannatha ◽  
B. N. Tulasimala

In the world of information and communication technology (ICT), Cloud Computing has been the buzzword. Its definition keeps shifting with the way technocrats use it in different environments, and as a definition it remains contentious: it is stated relative to a particular application, with no unanimous formulation, making it altogether elusive. In spite of this, it is this technology which is revolutionizing the traditional use of computer hardware, software, data storage media and processing mechanisms, with benefits to all stakeholders. In the past, the use of autonomous computers, and of nodes interconnected into computer networks with shared software resources, minimized the cost of hardware and, to some extent, of software. Evolutionary changes in computing technology over a few decades have thus brought changes of platform and environment in machine architecture, operating systems, network connectivity and application workloads, and have made the commercial use of the technology predominant. Instead of centralized systems, parallel and distributed systems are increasingly preferred for solving computational problems in the business domain; such hardware is ideal for solving large-scale problems over the internet. This computing model is data-intensive and network-centric. Most organizations using ICT used to find storing huge volumes of data, maintaining and processing it, and communicating through the internet to automate the entire process a challenge. In this paper we explore the growth of cloud computing technology over several years: how high-performance computing and high-throughput computing systems enhance computational performance, and how, according to various experts, the scientific community, and the service providers, cloud computing technology is going to be more cost-effective across different dimensions of business.


2021 ◽  
Vol 251 ◽  
pp. 02023
Author(s):  
Maria Arsuaga-Rios ◽  
Vladimír Bahyl ◽  
Manuel Batalha ◽  
Cédric Caffy ◽  
Eric Cano ◽  
...  

The CERN IT Storage Group ensures the symbiotic development and operations of storage and data transfer services for all CERN physics data, in particular the data generated by the four LHC experiments (ALICE, ATLAS, CMS and LHCb). In order to accomplish the objectives of the next run of the LHC (Run-3), the Storage Group has undertaken a thorough analysis of the experiments’ requirements, matching them to the appropriate storage and data transfer solutions, and undergoing a rigorous programme of testing to identify and solve any issues before the start of Run-3. In this paper, we present the main challenges presented by each of the four LHC experiments. We describe their workflows, in particular how they communicate with and use the key components provided by the Storage Group: the EOS disk storage system; its archival back-end, the CERN Tape Archive (CTA); and the File Transfer Service (FTS). We also describe the validation and commissioning tests that have been undertaken and challenges overcome: the ATLAS stress tests to push their DAQ system to its limits; the CMS migration from PhEDEx to Rucio, followed by large-scale tests between EOS and CTA with the new FTS “archive monitoring” feature; the LHCb Tier-0 to Tier-1 staging tests and XRootD Third Party Copy (TPC) validation; and the erasure coding performance in ALICE.


2014 ◽  
Vol 13 (7) ◽  
pp. 4625-4632
Author(s):  
Jyh-Shyan Lin ◽  
Kuo-Hsiung Liao ◽  
Chao-Hsing Hsu

Cloud computing and cloud data storage have become important applications on the Internet. An important trend in both is group collaboration, since it is a great inducement for an entity, especially an international enterprise, to use a cloud service. In this paper we propose a cloud data storage scheme with protocols to support group collaboration. A group of users can operate on a set of data collaboratively, with dynamic data updates supported. Every member of the group can access, update and verify the data independently; the verification can also be delegated to a third-party auditor for convenience.
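As a toy illustration of independent verification by group members (not the protocol proposed in the paper), the sketch below tags each data block with an HMAC under a shared group key, so that any member, or a third-party auditor given the key, can verify a block after a dynamic update.

```python
# Toy group-verification sketch (illustrative only; the key and block
# layout are hypothetical).
import hmac
import hashlib

GROUP_KEY = b"shared-group-verification-key"   # hypothetical shared key

def tag(block_id: int, block: bytes) -> bytes:
    """MAC binding the block to its position, computed at upload time."""
    return hmac.new(GROUP_KEY, str(block_id).encode() + block,
                    hashlib.sha256).digest()

def verify(block_id: int, block: bytes, stored_tag: bytes) -> bool:
    return hmac.compare_digest(tag(block_id, block), stored_tag)

blocks = {0: b"shared document, v1", 1: b"appendix"}
tags = {i: tag(i, b) for i, b in blocks.items()}

# After a dynamic update by one member, the tag is recomputed ...
blocks[0] = b"shared document, v2"
tags[0] = tag(0, blocks[0])
# ... and any other member can verify the new state independently.
assert all(verify(i, blocks[i], tags[i]) for i in blocks)
```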


2013 ◽  
Vol 765-767 ◽  
pp. 1087-1091
Author(s):  
Hong Lin ◽  
Shou Gang Chen ◽  
Bao Hui Wang

Recently, with the development of the Internet and the arrival of new application modes, data storage has acquired new characteristics and new requirements. In this paper, a distributed-computing-framework mass small-file storage system (Dnet FS for short), based on Windows Communication Foundation on the .NET platform, is presented. It is lightweight and highly extensible, runs on cheap hardware, supports large-scale concurrent access, and has a degree of fault tolerance. The framework of the system is analysed, and its performance is tested and compared; the results show that the system meets its requirements.


Author(s):  
Olexander Melnikov ◽  
Konstantin Petrov ◽  
Igor Kobzev ◽  
Viktor Kosenko ◽  
...  

The article considers the development and implementation of cloud services in the work of government agencies. A classification for choosing among cloud service providers is offered, which can serve as a basis for decision making, and the basics of cloud computing technology are analysed. The COVID-19 pandemic has highlighted the benefits of cloud services for remote work, and government agencies at all levels need to move to cloud infrastructure. The article analyses the prospects of cloud computing in Ukraine as the basis of developing e-governance: this is necessary for the rapid provision of quality services on a flexible, large-scale and economical technological base. Moving electronic information interaction to the cloud makes it possible to reach a wide range of users at relatively low material cost. Automating processes and transferring them to the cloud environment speeds up the provision of services and lets citizens obtain certain information with minimal delay. The article also lists the risks involved in the transition to cloud services and the shortcomings that may arise in the process of using them.


2021 ◽  
Author(s):  
◽  
Kyle Chard

The computational landscape is littered with islands of disjoint resource providers: commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers. These providers are independent and isolated owing to a lack of communication and coordination, and they are often proprietary, without standardised interfaces, protocols, or execution environments. This lack of standardisation and global transparency binds consumers to individual providers. With the increasing ubiquity of computation providers there is an opportunity to create federated architectures that span both Grid and Cloud computing providers, effectively creating a global computing infrastructure. To realise this vision, secure and scalable mechanisms to coordinate resource access are required. This thesis proposes a generic meta-scheduling architecture to facilitate federated resource allocation, in which users can provision resources from a range of heterogeneous (service) providers.

Efficient resource allocation is difficult in large-scale distributed environments due to the inherent lack of centralised control. In a Grid model, local resource managers govern access to a pool of resources within a single administrative domain, but they have only a local view of the Grid and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level and can submit jobs to multiple resource managers; however, they are most often deployed on a per-client basis, are concerned only with their own allocations, and essentially compete against one another. In a federated environment, the widespread adoption of the utility computing models seen in commercial Cloud providers has re-motivated the need for economically aware meta-schedulers. Economies provide a way to represent the different goals and strategies that exist in a competitive distributed environment, and the use of economic allocation principles effectively creates an open service market that provides efficient allocation and incentives for participation.

The major contributions of this thesis are the architecture and prototype implementation of the DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO) based distributed economic meta-scheduler in which members of the VO collaboratively allocate services or resources. Providers joining the VO contribute obligation services to it; these contributed services are in effect membership “dues” and are used in the running of the VO's operations, for example allocation, advertising, and general management. DRIVE is independent of any particular class of provider (Service, Grid, or Cloud) and of any specific economic protocol. This independence enables allocation in federated environments composed of heterogeneous providers in vastly different scenarios, and protocol independence facilitates the use of arbitrary protocols based on specific requirements and infrastructural availability. For instance, within a single organisation where internal trust exists, users can achieve maximum allocation performance by choosing a simple economic protocol; in a global utility Grid no such trust exists, and the same meta-scheduler architecture can be used with a secure protocol which ensures the allocation is carried out fairly in the absence of trust. DRIVE establishes contracts between participants as the result of allocation; a contract describes the individual requirements and obligations of each party. A unique two-stage contract negotiation protocol is used to minimise the effect of allocation latency, and thanks to the cooperative nature of the architecture and the use of secure, privacy-preserving protocols, DRIVE can be deployed in a distributed environment without requiring large-scale dedicated resources.

This thesis presents several other contributions related to meta-scheduling and open service markets. To overcome the perceived performance limitations of economic systems, four high-utilisation strategies have been developed and evaluated; each strategy is shown to improve occupancy, utilisation and profit using synthetic workloads based on a production Grid trace. The gRAVI service-wrapping toolkit is presented to address the difficulty of web-enabling existing applications; for this thesis gRAVI has been extended to create economically aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring developer input. The final contribution is the definition and architecture of a Social Cloud: a dynamic Cloud computing infrastructure composed of virtualised resources contributed by members of a social network. The Social Cloud prototype is based on DRIVE and highlights the ease with which dynamic DRIVE markets can be created and used in different domains.
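As a loose illustration of economically aware meta-scheduling (DRIVE's actual VO-based, two-stage contract negotiation protocols are considerably richer), the sketch below allocates a job through a sealed-bid reverse auction over hypothetical provider bids and records the result as a contract.

```python
# Toy economic meta-scheduling sketch (illustrative only; provider names
# and prices are hypothetical).
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price: float          # provider's asking price for the job
    capacity: int         # free slots the provider can commit

@dataclass
class Contract:
    provider: str
    job_id: str
    price: float

def allocate(job_id: str, bids: list[Bid]) -> Contract | None:
    """Reverse auction: the cheapest provider with free capacity wins."""
    eligible = [b for b in bids if b.capacity > 0]
    if not eligible:
        return None
    winner = min(eligible, key=lambda b: b.price)
    return Contract(winner.provider, job_id, winner.price)

bids = [Bid("grid-a", 4.0, 2), Bid("cloud-b", 2.5, 0), Bid("cloud-c", 3.1, 5)]
print(allocate("job-42", bids))   # Contract(provider='cloud-c', ...)
```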

