Security of Auditing Protocols Against Subversion Attacks

2020 · Vol. 31(02) · pp. 193-206
Author(s): Jiaxian Lv, Yi Wang, Jinshu Su, Rongmao Chen, Wenjun Wu

In 2013, Edward Snowden's revelations rekindled cryptographic researchers' interest in subversion attacks. Since then, many works have explored the power of subversion attacks as well as feasible and effective countermeasures. In this work, we investigate subversion attacks against cloud auditing protocols, a well-known primitive for secure cloud storage. We demonstrate that a subverted auditing protocol enables the cloud server to recover secret information stored on the data owner's side. In particular, we first define an asymmetric subversion attack model for auditing protocols. This model serves as the principle for analyzing the undetectability and key recovery of subversion attacks against auditing protocols. We then show a general framework of asymmetric subversion attacks against auditing protocols with index-coefficient challenges. To illustrate the feasibility of our paradigm, several concrete auditing protocols are provided. As a feasible countermeasure, we propose a subversion-resilient auditing protocol with index-coefficient challenge.
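
As background for the index-coefficient challenge this abstract refers to, the following is a minimal, generic PDP-style sketch: the verifier samples (index, coefficient) pairs and the prover answers with the matching linear combination of blocks and tags. It is an illustration of the general mechanism, not the authors' protocol; all parameter choices (prime field, HMAC-based PRF, tag form) are assumptions.

```python
# Generic privately verifiable PDP sketch with an index-coefficient challenge.
import hashlib
import hmac
import secrets

P = 2**127 - 1  # a Mersenne prime; blocks, coefficients and tags live in Z_P

def prf(key: bytes, i: int) -> int:
    """Pseudorandom field element for block index i (HMAC-SHA256 based)."""
    mac = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % P

def setup(blocks: list[int]):
    """Data owner: tag each block as t_i = alpha * m_i + PRF(k, i) (mod P)."""
    key, alpha = secrets.token_bytes(32), secrets.randbelow(P)
    tags = [(alpha * m + prf(key, i)) % P for i, m in enumerate(blocks)]
    return (key, alpha), tags

def challenge(n: int, c: int):
    """Verifier: c distinct indices, each paired with a random coefficient."""
    idx = secrets.SystemRandom().sample(range(n), c)
    return [(i, secrets.randbelow(P)) for i in idx]

def prove(blocks, tags, chal):
    """Cloud server: aggregate the challenged blocks and tags."""
    mu = sum(v * blocks[i] for i, v in chal) % P
    sigma = sum(v * tags[i] for i, v in chal) % P
    return mu, sigma

def verify(secret, chal, mu, sigma) -> bool:
    key, alpha = secret
    return sigma == (alpha * mu + sum(v * prf(key, i) for i, v in chal)) % P

blocks = [secrets.randbelow(P) for _ in range(64)]   # stand-in for file blocks
secret, tags = setup(blocks)
chal = challenge(len(blocks), 10)
assert verify(secret, chal, *prove(blocks, tags, chal))      # honest server passes
blocks[chal[0][0]] ^= 1                                      # tamper one block
assert not verify(secret, chal, *prove(blocks, tags, chal))  # tampering is caught
```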

2021 · Vol. 2021 · pp. 1-12
Author(s): Haibin Yang, Zhengge Yi, Ruifeng Li, Zheng Tu, Xu An Wang, et al.

With the advent of data outsourcing, how to efficiently verify the integrity of data stored at an untrusted cloud service provider (CSP) has become a significant problem in cloud storage. In 2019, Guo et al. proposed an outsourced dynamic provable data possession scheme with batch update for secure cloud storage. Although their scheme is novel, we show in this paper that it is not secure: a malicious cloud server can forge the authentication labels, and can thus forge or delete the user's data while still providing a correct data possession proof. Based on the original protocol, we propose an improved auditing scheme that is efficient yet resistant to these attacks.
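
The abstract does not detail how the labels are forged, so the sketch below is a hypothetical illustration of this general attack class rather than Guo et al.'s actual flaw: if an authentication label covers only the block content, the server can answer a challenge for one block with a different block and its label; binding the file identifier and block index into the label defeats this substitution.

```python
import hashlib
import hmac

KEY = b"data-owner-secret-key"   # illustrative key held by the data owner
FID = b"file-42"                 # illustrative file identifier

def weak_tag(block: bytes) -> bytes:
    """Label over the content only: nothing ties it to a file or position."""
    return hmac.new(KEY, block, hashlib.sha256).digest()

def bound_tag(fid: bytes, i: int, block: bytes) -> bytes:
    """Label bound to (file ID, block index, content)."""
    return hmac.new(KEY, fid + i.to_bytes(8, "big") + block, hashlib.sha256).digest()

blocks = [b"block-0", b"block-1"]
weak_store = {i: weak_tag(b) for i, b in enumerate(blocks)}

# Attack: the server deletes block 0 and answers a challenge for index 0 with
# block 1 and block 1's label. A verifier that only recomputes the content MAC
# accepts the forgery:
served_block, served_tag = blocks[1], weak_store[1]
assert hmac.compare_digest(weak_tag(served_block), served_tag)

# With index-bound labels the verifier recomputes the tag *for index 0*, which
# matches none of the labels the server holds, so the substitution is caught:
bound_store = {i: bound_tag(FID, i, b) for i, b in enumerate(blocks)}
assert bound_tag(FID, 0, served_block) not in bound_store.values()
```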


2020 · Vol. 17(4) · pp. 1937-1942
Author(s): S. Sivasankari, V. Lavanya, G. Saranya, S. Lavanya

These days, cloud storage is gaining importance among individual and institutional users. Individuals and institutions look to the cloud server as a storage medium to reduce the storage load on their local devices. In such storage services, it is necessary to avoid storing duplicate copies of the same data, since reducing duplicate content in cloud storage lowers the storage cost. De-duplication must also account for security and ownership issues when multiple data owners outsource the same data. Because the cloud server is maintained by a third party, it is considered untrusted, so data is always encrypted before being uploaded; the randomization property of conventional encryption, however, defeats de-duplication. It is therefore necessary to propose a server-side de-duplication scheme that handles encrypted data. The proposed scheme allows the cloud server to control access to outsourced data even when ownership changes dynamically.
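
A standard way to reconcile encryption with de-duplication (a known technique, not necessarily the scheme proposed above) is convergent encryption: the key is derived from the plaintext itself, so identical files produce identical ciphertexts that the server can de-duplicate without learning the plaintext. Below is a minimal sketch using the `cryptography` package; the trade-off is that determinism makes low-entropy files brute-forceable.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = hashlib.sha256(plaintext).digest()          # key = H(plaintext)
    nonce = hashlib.sha256(key).digest()[:12]         # deterministic nonce is safe
    ct = AESGCM(key).encrypt(nonce, plaintext, None)  # here: one message per key
    return key, ct

def storage_id(ciphertext: bytes) -> str:
    """Server-side de-duplication handle: identical uploads collide here."""
    return hashlib.sha256(ciphertext).hexdigest()

# Two independent owners upload the same file:
key_a, ct_a = convergent_encrypt(b"quarterly-report.pdf contents")
key_b, ct_b = convergent_encrypt(b"quarterly-report.pdf contents")
assert ct_a == ct_b and storage_id(ct_a) == storage_id(ct_b)  # stored once

# Each owner keeps only the convergent key and can decrypt the shared copy:
nonce = hashlib.sha256(key_a).digest()[:12]
assert AESGCM(key_a).decrypt(nonce, ct_a, None) == b"quarterly-report.pdf contents"
```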


2017 · Vol. 129 · pp. 37-50
Author(s): Jian Zhang, Yang Yang, Yanjiao Chen, Jing Chen, Qian Zhang

2020 · Vol. 16(9) · pp. 155014772095829
Author(s): Changsong Yang, Yueling Liu, Xiaoling Tao

With the rapid development of cloud computing, an increasing number of data owners are willing to employ cloud storage services. In cloud storage, resource-constrained data owners can outsource their large-scale data to the remote cloud server, thereby greatly reducing local storage overhead and computation cost. Despite its many attractive advantages, cloud storage inevitably suffers from new security challenges due to the separation of outsourced data ownership from its management, such as secure data insertion and deletion. The cloud server may maliciously retain some data copies and return a wrong deletion result to cheat the data owner. Moreover, it is very difficult for the data owner to securely insert new data blocks into the outsourced data set. To solve these two problems, we adopt the primitive of the Merkle sum hash tree to design a novel publicly verifiable cloud data deletion scheme, which simultaneously achieves provable data storage and dynamic data insertion. Moreover, an interesting property of our proposed scheme is that it satisfies private and public verifiability without requiring any trusted third party. Furthermore, we formally prove that our proposed scheme not only achieves the desired security properties but also attains high efficiency and practicality.
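
The Merkle sum hash tree used above is a variant of the plain Merkle hash tree, whose core mechanism already conveys the idea: the data owner keeps only the root, and the server proves that a challenged block is intact with a logarithmic-size authentication path. A minimal sketch of the plain primitive follows (illustrative, not the paper's construction):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks: list[bytes]) -> list[list[bytes]]:
    """All tree levels, leaves first; duplicates the last node on odd levels."""
    levels = [[_h(b"\x00" + blk) for blk in blocks]]   # 0x00 = leaf domain tag
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([_h(b"\x01" + cur[i] + cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels

def prove(levels, index: int) -> list[tuple[bool, bytes]]:
    """Authentication path: (sibling_is_right, sibling_hash) per level."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        path.append((sib > index, level[sib]))
        index //= 2
    return path

def verify(root: bytes, block: bytes, path) -> bool:
    node = _h(b"\x00" + block)
    for sibling_is_right, sib in path:
        node = _h(b"\x01" + node + sib) if sibling_is_right else _h(b"\x01" + sib + node)
    return node == root

blocks = [f"block-{i}".encode() for i in range(7)]
levels = build_tree(blocks)
root = levels[-1][0]                       # the only state the data owner keeps
assert verify(root, blocks[5], prove(levels, 5))        # intact block verifies
assert not verify(root, b"tampered", prove(levels, 5))  # modified block is caught
```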


2018 · Vol. 7(2.21) · pp. 331
Author(s): R Anandan, S Phani Kumar, K Kalaivani, P Swaminathan

Cloud-based data storage has become a common practice these days because cloud storage offers advantages over conventional storage methods, namely dynamic access and unlimited pay-per-use storage capacity. But the security of data outsourced to the cloud remains challenging. The data owner should be capable of performing integrity verification as well as data dynamics on the data stored in the cloud server. Various approaches such as cryptographic techniques, proxy-based solutions, code-based analysis, homomorphic approaches and challenge-response algorithms have been proposed. This survey describes the limitations of the existing approaches and the requirements for a novel, enhanced approach that ensures the integrity of data stored in the cloud while enabling better performance with reduced complexity.


2017 · Vol. 3(11) · pp. 6
Author(s): Arshi Jabbar, Prof. Umesh Lilhore

Cloud storage is one of the services provided by cloud computing, in which information is maintained, managed and secured remotely and made available to users over a network. Users are concerned about the integrity of data held in the cloud because their data can be attacked or modified by an outside attacker. Therefore, a new concept called data auditing is introduced, which checks the integrity of data with the help of an entity called a Third Party Auditor (TPA). The aim of this work is to develop an auditing scheme that is secure and economical to use and that provides privacy preservation, public auditing, and data integrity together with confidentiality. It comprises three entities: the data owner, the TPA and the cloud server. The data owner performs operations such as splitting the file into blocks, encrypting them, generating a hash value for each, concatenating the hashes and generating a signature on the result. The TPA performs the main role of the data integrity check: it generates hash values for the encrypted blocks received from the cloud server, concatenates them and generates a signature on the result. It then compares the two signatures to verify whether the data stored on the cloud has been tampered with, and it verifies the integrity of data on demand of the users. To ensure the security of data at the cloud end, a security architecture is designed that protects the data with a hybrid encryption algorithm combining the concepts of EC-RSA, AES and Blowfish, along with SHA-256 for auditing purposes. The presented experimental results show that the proposed approach is practical, improving efficiency by about 40% in execution time (both encryption and decryption) while providing security and confidentiality of cloud data at the cloud end.
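
The hybrid EC-RSA/AES/Blowfish construction is specific to this paper; the sketch below only illustrates the generic owner-side flow the abstract describes (split into blocks, encrypt, hash each encrypted block, concatenate, sign), with AES-GCM and RSA-PSS as assumed stand-ins (requires the `cryptography` package). Because RSA-PSS is randomized, an auditor would verify the owner's signature over a recomputed digest rather than compare two signatures byte for byte.

```python
import hashlib
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def split_blocks(data: bytes, size: int = 1024) -> list[bytes]:
    return [data[i:i + size] for i in range(0, len(data), size)]

def encrypt_blocks(key: bytes, blocks: list[bytes]) -> list[bytes]:
    out = []
    for blk in blocks:
        nonce = os.urandom(12)                          # fresh nonce per block
        out.append(nonce + AESGCM(key).encrypt(nonce, blk, None))
    return out

def concat_hashes(enc_blocks: list[bytes]) -> bytes:
    # Hash every encrypted block, then concatenate the per-block hashes.
    return b"".join(hashlib.sha256(b).digest() for b in enc_blocks)

# Owner-side preparation:
sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data_key = os.urandom(32)
enc_blocks = encrypt_blocks(data_key, split_blocks(os.urandom(5000)))
signature = sk.sign(
    concat_hashes(enc_blocks),
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
# enc_blocks go to the cloud server; (signature, public key) go to the TPA.
```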


2019 · Vol. 15(10) · pp. 155014771987899
Author(s): Changsong Yang, Xiaoling Tao, Feng Zhao

With the rapid development of cloud storage, more and more resource-constrained data owners employ cloud storage services to reduce their heavy local storage overhead. However, the data owners thereby lose direct control over their data, and all operations on the outsourced data, such as data transfer and deletion, are executed by the remote cloud server. As a result, data transfer and deletion become security issues, because a selfish remote cloud server might not execute these operations honestly, for economic reasons. In this article, we design a scheme that makes the data transfer and the deletion of transferred data transparent and publicly verifiable. Our proposed scheme is based on vector commitment (VC), which we use to address the problem of public verification during data transfer and deletion. More specifically, our new scheme gives the data owner the ability to verify the results of data transfer and deletion. In addition, by exploiting the properties of VC, our proposed scheme does not require any trusted third party. Finally, we prove that the proposed scheme not only reaches the expected security goals but also satisfies efficiency and practicality.
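
A vector commitment binds each position i to its value, so the server cannot later open position i to a different value (position binding); this is what lets the data owner check transfer and deletion results. Below is a toy, hash-based, non-succinct sketch of the commit/open/verify interface, purely illustrative and far simpler than the VC schemes such a paper would use.

```python
import hashlib

def _leaf(i: int, value: bytes) -> bytes:
    return hashlib.sha256(i.to_bytes(8, "big") + value).digest()

def commit(values: list[bytes]) -> bytes:
    """Commitment = hash over the position-bound leaf hashes of all slots."""
    return hashlib.sha256(b"".join(_leaf(i, v) for i, v in enumerate(values))).digest()

def open_at(values: list[bytes], i: int) -> list[bytes]:
    # Proof = the leaf hashes of all slots; the verifier overwrites slot i with
    # the claimed value. (Linear size here; real VCs achieve constant size.)
    return [_leaf(j, v) for j, v in enumerate(values)]

def verify(com: bytes, i: int, value: bytes, proof: list[bytes]) -> bool:
    leaves = list(proof)
    leaves[i] = _leaf(i, value)
    return hashlib.sha256(b"".join(leaves)).digest() == com

values = [b"block-A", b"block-B", b"block-C"]
com = commit(values)
assert verify(com, 1, b"block-B", open_at(values, 1))     # honest opening passes
assert not verify(com, 1, b"forged", open_at(values, 1))  # position binding holds

# Verifiable deletion: the server overwrites slot 1 with a tombstone and
# re-commits; the owner checks that the new commitment opens to the tombstone.
values[1] = b"DELETED"
com2 = commit(values)
assert verify(com2, 1, b"DELETED", open_at(values, 1))
```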


2021 · Vol. 2021 · pp. 1-13
Author(s): Haibin Yang, Zhengge Yi, Xu An Wang, Yunxuan Su, Zheng Tu, et al.

Nowadays, it is common for patients and medical institutions to outsource their data to cloud storage. This can greatly reduce the burden of medical information management and storage and improve the efficiency of the entire medical industry. Group-based cloud storage systems are also very common: in a medical enterprise, for example, employees outsource their working documents to cloud storage and share them with colleagues. However, once the working documents are outsourced to the cloud servers, ensuring their security is a challenging problem, because they are no longer physically controlled by the data owners. In particular, the integrity of the outsourced data should be guaranteed, and secure cloud auditing protocols are designed to address this issue. Recently, a lightweight secure auditing scheme for shared data in cloud storage was proposed. Unfortunately, in this paper we find that the proposal is not secure: it is easy for the cloud server to forge the authentication labels, so it can delete all the outsourced data while still providing a correct data possession proof, which invalidates the security of the cloud auditing protocol. On the basis of the original auditing protocol, we provide an improved one for shared data and analyze its security; the results show that our new protocol is secure.


Author(s): Mr. Vaishnav P. Surwase

Abstract: A new auditing scheme has been developed by considering all these requirements. It consists of three entities: the data owner, the TPA and the cloud server. The data owner performs operations such as splitting the file into blocks, encrypting them, generating a hash value for each, concatenating the hashes and generating a signature on the result. The TPA performs the main role of the data integrity check: it generates hash values for the encrypted blocks received from the cloud server, concatenates them and generates a signature on the result. It then compares the two signatures to verify whether the data stored on the cloud has been tampered with, and it verifies the integrity of the data on demand of the users. The cloud server is used only to store the encrypted blocks of data. The proposed auditing scheme makes use of the AES algorithm for encryption, SHA-2 for the integrity check and an RSA signature for the digital signature calculation. In this setting, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using the cloud. Existing research already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, every small update causes re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis of possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates.
Keywords: cloud computing, big data, data security, authorized auditing, fine-grained dynamic data update
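
As a counterpart to the owner-side flow sketched earlier, the following illustrates the TPA check this abstract describes, with one adjustment: RSA-PSS signatures are randomized, so instead of generating a second signature and comparing the two byte for byte, the TPA verifies the owner's signature over the digest recomputed from the blocks fetched from the cloud server. Names and parameters are illustrative (requires the `cryptography` package).

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def concat_hashes(enc_blocks: list[bytes]) -> bytes:
    """SHA-256 each encrypted block and concatenate the hashes."""
    return b"".join(hashlib.sha256(b).digest() for b in enc_blocks)

def tpa_audit(pub, signature: bytes, blocks_from_server: list[bytes]) -> bool:
    """Recompute the concatenated block hashes and check the owner's signature."""
    try:
        pub.verify(signature, concat_hashes(blocks_from_server), PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Demo: the owner signs once; the TPA audits whatever the server returns.
sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
enc_blocks = [b"enc-block-%d" % i for i in range(4)]  # stand-ins for ciphertexts
signature = sk.sign(concat_hashes(enc_blocks), PSS, hashes.SHA256())
assert tpa_audit(sk.public_key(), signature, enc_blocks)      # intact data passes
enc_blocks[2] = b"tampered"
assert not tpa_audit(sk.public_key(), signature, enc_blocks)  # tampering detected
```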

