Identity-Based Public Auditing Scheme for Cloud Storage with Strong Key-Exposure Resilience

2020 · Vol 2020 · pp. 1-13
Author(s): S. Mary Virgil Nithya, V. Rhymend Uthariaraj

A secure storage system is a critical component of cloud computing. Cloud clients use auditing schemes to verify the integrity of data stored in the cloud. But once the auditing secret key is exposed to the Cloud Service Provider, cloud auditing fails, however strong the auditing scheme may be. It is therefore essential to prevent the exposure of auditing secret keys and, if exposure does occur, to minimize the damage it causes. Existing cloud auditing schemes that are strongly resilient to key exposure are based on Public Key Infrastructure and therefore face the challenges of certificate management and verification; they also incur high computation time during integrity verification of the data blocks. Identity-based schemes eliminate certificates, but they limit the damage due to key exposure only in time periods earlier than that of the exposed key, and some key-exposure-resilient schemes do not support batch auditing. In this paper, an Identity-based Provable Data Possession scheme is proposed. It protects the security of Identity-based cloud storage auditing in time periods both before and after that of the exposed key, and it supports batch auditing. Analysis shows that the proposed scheme resists the replace attack of the Cloud Service Provider, preserves data privacy against the Third Party Auditor, and can efficiently verify the correctness of data.
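The challenge-response pattern underlying provable data possession can be sketched as follows. This is a deliberately simplified stand-in, not the paper's scheme: the actual identity-based construction uses pairing-based homomorphic tags, whereas here an HMAC keyed by the client plays the role of the authentication tag, and the tag is bound to the block index to illustrate resistance to the replace attack.

```python
import hashlib
import hmac
import secrets

def tag_blocks(key: bytes, blocks: list[bytes]) -> list[bytes]:
    """One MAC tag per block, bound to its index to resist replace attacks."""
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def challenge(n_blocks: int, c: int) -> list[int]:
    """Auditor picks c random block indices to spot-check."""
    return [secrets.randbelow(n_blocks) for _ in range(c)]

def prove(blocks: list[bytes], tags: list[bytes], chal: list[int]):
    """Server returns the challenged blocks with their stored tags."""
    return [(i, blocks[i], tags[i]) for i in chal]

def verify(key: bytes, proof) -> bool:
    """Auditor recomputes each tag and compares in constant time."""
    return all(
        hmac.compare_digest(
            hmac.new(key, str(i).encode() + b, hashlib.sha256).digest(), t)
        for i, b, t in proof)

key = secrets.token_bytes(32)
blocks = [b"block-%d" % i for i in range(8)]
tags = tag_blocks(key, blocks)
proof = prove(blocks, tags, challenge(len(blocks), 3))
assert verify(key, proof)
```

Because each tag commits to its block index, the server cannot answer a challenge on block i with the (block, tag) pair of some other intact block j, which is the essence of the replace attack mentioned above.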

2013 · Vol 765-767 · pp. 1630-1635
Author(s): Wen Qi Ma, Qing Bo Wu, Yu Song Tan

One difference between cloud storage and earlier storage models is that a financial contract exists between the user and the cloud service provider (CSP): users pay for the service in exchange for certain guarantees, and the cloud is a liable entity. Mechanisms are therefore needed to enforce the liability of the CSP, and some work realizes this through non-repudiation. Compared with those non-repudiation schemes, we let a third party auditor, rather than the client, manage the proofs and metadata, which are security-critical data in cloud security; this provides a more secure environment for them. To address the large overhead in the update process of the current non-repudiation scheme, we propose three schemes to improve it.


2021 · Vol 2021 · pp. 1-12
Author(s): Haibin Yang, Zhengge Yi, Ruifeng Li, Zheng Tu, Xu An Wang, ...

With the advent of data outsourcing, how to efficiently verify the integrity of data stored at an untrusted cloud service provider (CSP) has become a significant problem in cloud storage. In 2019, Guo et al. proposed an outsourced dynamic provable data possession scheme with batch update for secure cloud storage. Although their scheme is novel, in this paper we show that their proposal is not secure: a malicious cloud server can forge the authentication labels, and thus it can forge or delete the user's data while still producing a correct data possession proof. Based on the original protocol, we propose an improved auditing protocol that is effective yet resistant to these attacks.
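The failure mode described here, forgeable authentication labels, can be illustrated with a toy contrast. This is not Guo et al.'s algebraic construction or the paper's fix; it only shows why a label anyone can recompute lets the server substitute data and still pass an audit, while a label bound to a client-held secret (HMAC here as a stand-in) does not.

```python
import hashlib
import hmac

def weak_tag(block: bytes) -> bytes:
    # Unkeyed hash: the server can recompute this for any data it likes.
    return hashlib.sha256(block).digest()

def strong_tag(key: bytes, block: bytes) -> bytes:
    # Keyed tag: without the client's key, the server cannot forge it.
    return hmac.new(key, block, hashlib.sha256).digest()

def verify_weak(block: bytes, tag: bytes) -> bool:
    return weak_tag(block) == tag

def verify_strong(key: bytes, block: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(strong_tag(key, block), tag)

key = b"client-secret-key"
forged = b"garbage the server substituted for the user's block"

# The malicious server forges a matching unkeyed label -> audit passes:
assert verify_weak(forged, weak_tag(forged))

# With a keyed label, any tag the keyless server invents fails:
assert not verify_strong(key, forged, weak_tag(forged))
```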


2020 · Vol 8 (5) · pp. 3135-3141

Public Key Infrastructure (PKI) is a repository and management system for digital certificates. A PKI system, whether centralized or decentralized, issues, manages, stores, verifies, and distributes key pairs (public and private keys) and public key certificates. In a public cloud, Data Owners and Data Users upload and download their encrypted data, along with services, resources, and infrastructure, all held by the Cloud Service Provider. This raises major concerns about data security and data privacy, and the Cloud Service Provider bears sole responsibility for the Access Control Policy that restricts cloud services centrally. With the emergence of cloud computing, PKI technology enables secure communication between systems. X.509 certificates are based on centralized PKI and suffer from many issues in the public cloud, whereas Gnu Privacy Guard (GnuPG) certificates are based on a decentralized PKI system. Imagine a decentralized PKI in which each Kerberos server is also a Central Authority that issues certificates to systems and users. The proposed collaborative PKI framework describes the use of PKI in the public cloud, proposes an algorithm for the Kerberos SSO token, and provides acquisition of public key certificates from the client via Kerberized Central Authorities.


2014 · Vol 513-517 · pp. 999-1004
Author(s): Hong Wei Liu, Shu Lan Wang, Peng Zhang

Cloud storage provides a flexible, on-demand data storage service to users anywhere and at any time. However, users' data is physically held by the cloud service provider, and the physical boundary between two users' data is fuzzy. Moreover, the cloud storage provider stores multiple replicas of the data to increase its robustness, and the user is charged by the number of replicas, yet evidence that the provider actually devotes the corresponding storage space is scarce. In this environment, a method to ensure multi-replica integrity must be provided. To avoid retrieving the entire stored data or requiring users to perform the check themselves, a multi-replica public auditing protocol is proposed based on the BLS short signature scheme and a homomorphic hash function. Under the computational Diffie-Hellman assumption, the protocol is secure against the lost attack and the tamper attack from the cloud service provider. Because blocks and block signatures are independent of one another, the protocol supports block-level dynamic update, including insertion, modification, and deletion. The protocol is thus secure and efficient, and it supports multiple replicas, public verification, dynamic update, and privacy preservation.
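The role the homomorphic hash plays in such a protocol can be sketched with a toy exponential hash H(m) = g^m mod p. Its homomorphy, H(a)·H(b) = H(a+b) mod p, lets the auditor check one aggregated value instead of downloading every challenged block. This is only an illustration: the paper's protocol uses BLS signatures in pairing-friendly groups with far larger parameters, and the prime and generator below are demo values.

```python
import secrets

p = 2**127 - 1   # a Mersenne prime; fine for a demonstration, not for security
g = 3

def H(m: int) -> int:
    """Toy homomorphic hash: H(a) * H(b) == H(a + b) mod p."""
    return pow(g, m, p)

# Setup: the client encodes blocks as integers and publishes their hashes.
blocks = [secrets.randbelow(p - 1) for _ in range(8)]
hashes = [H(m) for m in blocks]

# Challenge: the auditor picks block indices and random coefficients.
chal = [(i, secrets.randbelow(2**32)) for i in (1, 4, 6)]

# Proof: the server aggregates the challenged blocks into one value
# (exponents live mod p - 1, the order of the multiplicative group).
mu = sum(nu * blocks[i] for i, nu in chal) % (p - 1)

# Verification: g^mu must equal the product of the published hashes
# raised to the challenge coefficients; no block is transmitted.
lhs = pow(g, mu, p)
rhs = 1
for i, nu in chal:
    rhs = rhs * pow(hashes[i], nu, p) % p
assert lhs == rhs
```

The check succeeds precisely because exponentiation turns the server's sum into a product of the stored hashes, so any tampering with a challenged block changes mu and breaks the equality.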


Cloud computing is well known today for its enormous data storage capacity and fast access to information over the network. It gives an individual user virtually unlimited storage space and makes data available and accessible anytime, anywhere. A cloud service provider can improve storage efficiency by incorporating data deduplication into cloud storage, since deduplication removes the redundant and replicated data that arises in cloud environments. This paper presents a literature survey of the various deduplication techniques that have been developed for cloud data storage. To better ensure secure deduplication in the cloud, the paper examines both file-level and block-level data deduplication.
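The two granularities the survey compares can be sketched in a few lines: file-level deduplication keeps one copy per identical file, while block-level deduplication splits files into chunks so that files differing only slightly still share most of their storage. The fixed 4-byte chunk size and the content-addressed dict are illustrative simplifications; real systems use kilobyte-scale or content-defined chunking.

```python
import hashlib

def file_level_dedup(files: list[bytes]) -> dict[str, bytes]:
    """Store each distinct file once, keyed by its content hash."""
    return {hashlib.sha256(f).hexdigest(): f for f in files}

def block_level_dedup(files: list[bytes], block_size: int = 4) -> dict[str, bytes]:
    """Split files into fixed-size chunks and store each distinct chunk once."""
    store: dict[str, bytes] = {}
    for f in files:
        for i in range(0, len(f), block_size):
            chunk = f[i:i + block_size]
            store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

f1 = b"AAAABBBBCCCC"
f2 = b"AAAABBBBDDDD"   # differs from f1 only in the final chunk

# File-level sees two distinct files, so it stores both in full.
assert len(file_level_dedup([f1, f2])) == 2

# Block-level shares the two identical leading chunks: 4 chunks total
# (AAAA, BBBB, CCCC, DDDD) instead of 6.
assert len(block_level_dedup([f1, f2])) == 4
```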


2018 · Vol 6 (5) · pp. 340-345
Author(s): Rajat Pugaliya, Madhu B R

Cloud computing is an emerging field in the IT industry that provides computing services over the Internet. Demand for cloud computing is increasing drastically, forcing cloud service providers to ensure proper resource utilization at lower cost and with lower energy consumption. In recent times, various consolidation problems have arisen in cloud computing, such as task, VM, and server consolidation, and these have become challenging for resource utilization. Our literature review found a high degree of coupling among resource utilization, cost, and energy consumption. The main challenge for a cloud service provider is to maximize resource utilization while reducing cost and minimizing energy consumption. Dynamic task consolidation of virtual machines is one way to address this problem. This paper presents a comparative study of various task consolidation algorithms.
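One classic heuristic that comparative studies of task consolidation typically cover is first-fit-decreasing bin packing: pack task loads onto as few hosts as possible so idle hosts can be powered down. The sketch below is a generic illustration of that heuristic, not an algorithm from this paper, and the capacities and loads are made-up numbers.

```python
def consolidate(loads: list[float], capacity: float) -> list[list[float]]:
    """First-fit-decreasing: place each task on the first host with room."""
    hosts: list[list[float]] = []
    for load in sorted(loads, reverse=True):   # largest tasks first
        for host in hosts:
            if sum(host) + load <= capacity:   # first host that fits
                host.append(load)
                break
        else:
            hosts.append([load])               # power on a new host
    return hosts

tasks = [0.6, 0.5, 0.4, 0.3, 0.2]              # normalized CPU loads
placement = consolidate(tasks, capacity=1.0)

# 2.0 units of total load pack into 2 fully utilized hosts here,
# so 3 of 5 naively one-task-per-host machines could be switched off.
assert len(placement) == 2
```

Fewer active hosts directly translates into lower energy consumption, which is the coupling between utilization and energy that the study highlights.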


A cloud service provider (CSP) in a cloud environment provisions resources on demand: it supplies resources as and when users require them to execute jobs, in both a static and a dynamic manner. The CSP must also consider various other factors when providing resources to the user, chief among them the Service Level Agreement (SLA), which is normally signed by the user and the CSP during the inception phase of the service. Many algorithms exist for allocating resources to users in a cloud environment.

The proposed algorithm reduces the amount of energy used in executing jobs in the cloud environment. It accounts for the energy consumed by job execution by increasing the number of virtual machines run on a single physical host. There is no rule of thumb for how many virtual machines a single host can execute; the number can be derived by calculating the storage space and processing speed required, along with the time to execute the job on a virtual machine. On this basis the number of virtual machines per physical host can be determined: a single system might run 10 or even 20 virtual machines, but when the count is calculated by the equation, the result matches exactly the threshold capacity of the physical system [1].

If many physical systems each run only a few virtual machines, the total energy consumed is very high. To reduce energy consumption, the proposed algorithm not only calculates the number of virtual machines per physical system but also reduces energy use, since fewer physical systems are needed [2].
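The capacity calculation described above can be sketched as follows. The paper's exact equation is not reproduced here; this sketch simply takes the minimum over per-resource quotients, capping the VM count at the host's threshold capacity, and all the resource figures are hypothetical.

```python
def vms_per_host(host_cpu: int, host_ram_gb: int, host_disk_gb: int,
                 vm_cpu: int, vm_ram_gb: int, vm_disk_gb: int) -> int:
    """The scarcest resource bounds how many VMs one physical host can run."""
    return min(host_cpu // vm_cpu,
               host_ram_gb // vm_ram_gb,
               host_disk_gb // vm_disk_gb)

def hosts_needed(total_vms: int, per_host: int) -> int:
    """Ceiling division: how many powered-on hosts the workload requires."""
    return -(-total_vms // per_host)

# Hypothetical host and VM sizes:
per_host = vms_per_host(host_cpu=32, host_ram_gb=128, host_disk_gb=2000,
                        vm_cpu=2, vm_ram_gb=8, vm_disk_gb=100)
assert per_host == 16          # CPU and RAM both bound the count at 16

# Packing 40 VMs at the computed threshold needs only 3 hosts,
# instead of many lightly loaded machines consuming idle power.
assert hosts_needed(40, per_host) == 3
```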

