Assured deletion supporting dynamic insertion for outsourced data in cloud computing

2020 ◽  
Vol 16 (9) ◽  
pp. 155014772095829
Author(s):  
Changsong Yang ◽  
Yueling Liu ◽  
Xiaoling Tao

With the rapid development of cloud computing, an increasing number of data owners are willing to employ cloud storage services. In cloud storage, resource-constrained data owners can outsource their large-scale data to a remote cloud server, greatly reducing local storage overhead and computation cost. Despite these attractive advantages, cloud storage inevitably faces new security challenges arising from the separation of outsourced data ownership from its management, such as secure data insertion and deletion. The cloud server may maliciously retain some data copies and return a false deletion result to cheat the data owner. Moreover, it is very difficult for the data owner to securely insert new data blocks into the outsourced data set. To solve these two problems, we adopt the primitive of the Merkle sum hash tree to design a novel publicly verifiable cloud data deletion scheme, which simultaneously achieves provable data storage and dynamic data insertion. An interesting property of our proposed scheme is that it satisfies both private and public verifiability without requiring any trusted third party. Furthermore, we formally prove that the proposed scheme not only achieves the desired security properties but also delivers high efficiency and practicality.
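The core primitive behind this kind of publicly verifiable storage is the Merkle hash tree: the owner keeps only the root, and anyone can check that a block belongs to the committed data set from a logarithmic-size proof. A minimal sketch of the plain (non-sum) variant, using SHA-256 and duplicating the last node on odd levels, which is one common convention rather than the paper's exact construction:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of data blocks."""
    level = [h(block) for block in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to verify the leaf at `index`."""
    level = [h(block) for block in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))   # (sibling, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, block, proof):
    """Recompute the path from a claimed block up to the root."""
    node = h(block)
    for sibling, is_right_child in proof:
        node = h(sibling + node) if is_right_child else h(node + sibling)
    return node == root
```

After a verified deletion, the owner would accept a freshly recomputed root that no longer contains the deleted leaf; the Merkle *sum* tree in the paper additionally aggregates block counts or sizes at each node, which this sketch omits.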

Author(s):  
Fangfang Shan ◽  
Hui Li ◽  
Fenghua Li ◽  
Yunchuan Guo ◽  
Jinbo Xiong

With the rapid development of cloud computing, it has become increasingly attractive for individuals and groups to store and share data via cloud storage. Once data are stored with third-party cloud storage service providers, protecting the privacy and integrity of the outsourced data becomes a challenging task that deserves greater attention. This article presents the attribute-based assured deletion scheme (AADS), which aims to protect and assuredly delete outsourced data in cloud computing. It encrypts outsourced data files with standard cryptographic techniques to guarantee privacy and integrity, and assuredly deletes data upon revocation of attributes. AADS can be applied to important security problems by supporting fine-grained attribute-based policies and their combinations. According to the comparison and analysis, AADS provides efficient data encryption and flexible attribute-based assured deletion for cloud-stored data with an acceptable concession in performance cost.


The rapid development of information technology has driven a surprising increase in data volume. Cloud computing and the Internet of Things (IoT) have recently become two of the hottest topics in the information technology industry. Cloud computing offers many advantages, such as scalability, low price, and large scale, and core IoT techniques such as Radio-Frequency Identification (RFID) have been deployed at scale. The number of cloud storage users has grown substantially, because cloud storage systems reduce maintenance burdens and offer lower storage costs than alternative methods. These systems provide a high degree of reliability and availability by introducing redundancy: in replicated systems, objects are copied many times, with each copy residing at a different location in the distributed infrastructure. Data replication therefore poses a challenge for both cloud storage users and providers, since providing efficient data storage under replication is a major problem. This work analyses different data replication strategies and points out several issues they raise. It presents a data replication approach employing Cuckoo Search (CS) and Greedy Search, with the goal of reducing the number of replicas without any adverse effect on the reliability and availability of the data.
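The tension the abstract describes, fewer replicas versus availability, has a simple lower bound that any search strategy (greedy or cuckoo) must respect: if each node is independently available with probability p, the data is reachable only if at least one replica's node is up. A small sketch of that arithmetic, under the independence assumption:

```python
import math

def min_replicas(node_availability: float, target: float) -> int:
    """Smallest replica count r with 1 - (1 - p)^r >= target,
    i.e. at least one replica reachable with the required probability.
    Solves (1 - p)^r <= 1 - target  =>  r >= log(1-target) / log(1-p)."""
    return math.ceil(math.log(1 - target) / math.log(1 - node_availability))
```

For example, with nodes that are up half the time, seven replicas are needed for 99% availability; a replica-reduction search can safely discard copies only down to this floor.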


2020 ◽  
Vol 17 (4) ◽  
pp. 1937-1942
Author(s):  
S. Sivasankari ◽  
V. Lavanya ◽  
G. Saranya ◽  
S. Lavanya

These days, cloud storage is gaining importance among individual and institutional users. Individuals and institutions turn to cloud servers as a storage medium to reduce the storage load on their local devices. In such storage services, duplicate content, that is, repetitive storage of the same data, should be avoided; reducing duplicate content in cloud storage reduces storage cost. De-duplication becomes necessary when multiple data owners outsource the same data, and the related security and ownership issues must be considered. Since the cloud server is maintained by a third party, it is always considered untrusted, so data are encrypted before being uploaded; the randomization property of encryption, however, interferes with de-duplication. It is therefore necessary to propose a server-side de-duplication scheme that handles encrypted data. The proposed scheme allows the cloud server to control access to outsourced data even when ownership changes dynamically.
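A standard way to reconcile encryption with de-duplication, and possibly what the scheme builds on, is convergent encryption: the key is derived from the content itself, so identical plaintexts from different owners yield identical ciphertexts that the server can de-duplicate. A toy sketch; the XOR stream cipher here is for illustration only and is not secure for real use:

```python
import hashlib

def convergent_key(plaintext: bytes) -> bytes:
    # Key derived from the content: identical files yield identical keys.
    return hashlib.sha256(plaintext).digest()

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy XOR stream cipher (self-inverse) -- NOT secure, illustration only.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

class DedupServer:
    """Server stores each distinct ciphertext once, keyed by its digest."""
    def __init__(self):
        self.store = {}

    def upload(self, ciphertext: bytes) -> bool:
        """True if the data was new, False if de-duplicated."""
        tag = hashlib.sha256(ciphertext).hexdigest()
        if tag in self.store:
            return False
        self.store[tag] = ciphertext
        return True
```

Because the encryption is deterministic in the content, the server never sees plaintext yet still detects duplicates; handling dynamic ownership changes on top of this requires the additional access-control machinery the abstract proposes.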


2019 ◽  
Vol 15 (10) ◽  
pp. 155014771987899 ◽  
Author(s):  
Changsong Yang ◽  
Xiaoling Tao ◽  
Feng Zhao

With the rapid development of cloud storage, more and more resource-constrained data owners employ cloud storage services to reduce their heavy local storage overhead. However, these data owners lose direct control over their data, and all operations over the outsourced data, such as data transfer and deletion, are executed by the remote cloud server. As a result, data transfer and deletion become security issues, because a selfish remote cloud server might not execute these operations honestly, for economic benefit. In this article, we design a scheme that makes the data transfer and transferred-data deletion operations more transparent and publicly verifiable. Our proposed scheme is based on vector commitment (VC), which is used to solve the problem of public verification during data transfer and deletion. More specifically, the scheme gives the data owner the ability to verify the results of data transfer and deletion. In addition, by exploiting the properties of VC, the proposed scheme does not require any trusted third party. Finally, we prove that the proposed scheme not only reaches the expected security goals but also satisfies efficiency and practicality requirements.
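The essential property a vector commitment provides is position binding: the server cannot open the commitment at position i to two different blocks. A hash-based toy version of that interface (commit / open / verify); note the openings here are O(n), whereas real vector commitment constructions achieve constant-size openings, so this only illustrates the interface, not the paper's scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def vc_commit(blocks):
    """Commitment binds each block to its value AND its position."""
    leaf_hashes = [h(i.to_bytes(8, "big") + b) for i, b in enumerate(blocks)]
    return h(b"".join(leaf_hashes)), leaf_hashes

def vc_open(leaf_hashes, index):
    """Opening for `index`: the other leaf hashes (O(n) in this toy version)."""
    proof = list(leaf_hashes)
    proof[index] = None          # verifier recomputes this slot itself
    return proof

def vc_verify(commitment, index, block, proof):
    """Recompute the claimed slot and check the whole commitment."""
    proof = list(proof)
    proof[index] = h(index.to_bytes(8, "big") + block)
    return h(b"".join(proof)) == commitment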


Author(s):  
Deepika. N ◽  
Durga. P ◽  
Gayathri. N ◽  
Murugesan. M

Cloud security plays an essential role when data are preserved in cloud storage. With the rapid development of cloud computing, more and more clients choose to keep their data on public cloud servers (PCS). Cloud storage services allow users to outsource their data to cloud servers to save local storage costs. The auditor can efficiently perform multiple verification tasks from different users, and the cloud-stored data can be updated dynamically. Clients can check whether their outsourced data remain intact without downloading the entire data set. In our system, auditing is based on token generation: by comparing generated key values against the original keys, changes to a file can be detected. We present a novel public verification scheme for cloud storage using indistinguishability obfuscation, which requires only lightweight computation from the auditor and delegates most computation to the cloud. Beyond being stored, the content is also encrypted on the cloud server, and files are split into blocks held at three different locations, so an attacker at the cloud end cannot break the scheme: they would first need to decrypt the files and then recombine the split pieces from all three locations, which is infeasible. We analyse the security of our scheme under the strongest security model. Anyone may download a file from the server only with the file holder's permission: at download time a key is generated (code-based key generation) and sent to the file owner, and that key must be used for authentication before the download can proceed.
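The token-generation auditing step the abstract describes can be sketched with keyed tags: the owner precomputes one HMAC per block before outsourcing, then later recomputes and compares to locate any modified block. This is a generic keyed-tag sketch, not the paper's exact token construction:

```python
import hashlib
import hmac

def make_tokens(secret: bytes, blocks):
    """Owner precomputes one keyed tag per block before outsourcing."""
    return [hmac.new(secret, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def audit(secret: bytes, stored_blocks, tokens):
    """Recompute tags over what the server returns; mismatching
    indices reveal exactly which blocks were tampered with."""
    bad = []
    for i, b in enumerate(stored_blocks):
        tag = hmac.new(secret, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, tokens[i]):
            bad.append(i)
    return bad
```

Binding the index into each tag prevents the server from answering a challenge for block i with a copy of some other intact block.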


Author(s):  
Neha Thakur ◽  
Aman Kumar Sharma

Cloud computing has been envisioned as a definitive solution to the rising storage costs of IT enterprises, with cloud computing initiatives from IT giants such as Google, Amazon, Microsoft, and IBM. Integrity monitoring is essential in cloud storage for the same reasons that data integrity is critical for any data centre. Data integrity is defined as the accuracy and consistency of stored data, in the absence of any alteration to the data between two updates of a file or record. In order to ensure the integrity and availability of data in the cloud and enforce the quality of cloud storage service, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed. To overcome the data integrity problem, many techniques have been proposed under different system and security models. This paper focuses on some of these integrity-proving techniques in detail, along with their advantages and disadvantages.
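A common building block across the integrity-proving techniques surveyed is spot-check verification: the verifier challenges a few randomly chosen blocks with a fresh nonce so the server cannot replay old answers. A minimal sketch; real schemes replace the trusted local copy used here with homomorphic tags, precisely so that the owner need not keep the data:

```python
import hashlib
import random

def respond(blocks, indices, nonce: bytes) -> bytes:
    """Server side: prove possession of the challenged blocks."""
    digest = hashlib.sha256(nonce)
    for i in indices:
        digest.update(blocks[i])
    return digest.digest()

def challenge_and_check(reference, remote_blocks, num_samples, seed=None):
    """Verifier side: pick random indices and a fresh nonce, then compare
    the server's response against one recomputed from a trusted reference."""
    rng = random.Random(seed)
    nonce = rng.randbytes(16)
    indices = rng.sample(range(len(reference)), num_samples)
    return respond(remote_blocks, indices, nonce) == respond(reference, indices, nonce)
```

Sampling only a few blocks keeps each challenge cheap while still detecting large-scale corruption with high probability over repeated audits.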


Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

With the rapid development of hospital informatization and Internet medical services in recent years, most hospitals have launched online hospital appointment registration systems to remove patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and have no idea how to choose a department when registering. To instruct patients to seek medical care and register effectively, we propose CIDRS, an intelligent self-diagnosis and department recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in the cloud computing environment. We also established a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus. This model was used to optimize the self-diagnosis and department recommendation tasks. To overcome the limited computing power of terminals, we deployed the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results showed that the proposed model was superior to traditional deep learning models and other pre-trained language models in terms of performance.
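The department recommendation task is, at its core, text classification: map a free-text complaint to a ranked list of departments. As a stand-in for the fine-tuned BERT classifier, here is a toy keyword-scoring recommender with entirely hypothetical keyword lists; in CIDRS this ranking would come from CHMBERT's softmax scores instead:

```python
# Hypothetical symptom keywords per department (illustration only).
DEPARTMENT_KEYWORDS = {
    "cardiology":  {"chest pain", "palpitations", "shortness of breath"},
    "dermatology": {"rash", "itching", "skin"},
    "orthopedics": {"joint pain", "fracture", "back pain"},
}

def recommend_department(complaint: str, top_k: int = 1):
    """Score each department by matched symptom keywords and return
    the top_k departments; a BERT classifier would replace this scoring."""
    text = complaint.lower()
    scores = {dept: sum(kw in text for kw in kws)
              for dept, kws in DEPARTMENT_KEYWORDS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]
```

The surrounding cloud deployment (containerised micro-services) then only needs to expose this classify-and-rank call behind an API endpoint.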


2020 ◽  
Vol 4 (3) ◽  
pp. 23
Author(s):  
Ke Wang ◽  
Michael Zipperle ◽  
Marius Becherer ◽  
Florian Gottwalt ◽  
Yu Zhang

Compliance management for procurement internal auditing has been a major challenge for public sectors due to its tedious history of manual audits and large-scale paper-based repositories. Many practical issues and potential risks arise during the manual audit process, including low efficiency, poor accuracy and accountability, high expense, and its laborious, time-consuming nature. To alleviate these problems, this paper proposes a continuous compliance awareness framework (CoCAF), defined as an AI-based automated approach to procurement compliance auditing. CoCAF automatically and promptly audits an organisation's purchases by intelligently understanding compliance policies and extracting the required information from purchasing evidence, using text extraction technologies, automatic processing methods, and a report rating system. Based on the auditing results, CoCAF provides a continuously updated report demonstrating the compliance level of the procurement with statistics and diagrams. CoCAF is evaluated on a real-life procurement data set; results show that it can process 500 pieces of purchasing evidence within five minutes at 95.6% auditing accuracy, demonstrating its high efficiency, quality, and assurance level in procurement internal audit.
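The extract-then-rate pipeline can be pictured as a set of machine-checkable rules applied to the text pulled from each piece of evidence. A minimal sketch with hypothetical policy rules (purchase-order format, approval limit, named approver); CoCAF's actual rules and NLP extraction are far richer:

```python
import re

# Hypothetical policy rules; real policies would be parsed from documents.
RULES = [
    ("has purchase order number", lambda t: re.search(r"PO-\d{4,}", t) is not None),
    ("amount within approval limit", lambda t: all(
        float(m) <= 10000 for m in re.findall(r"\$([\d.]+)", t))),
    ("approver named", lambda t: "approved by" in t.lower()),
]

def audit_evidence(text: str):
    """Check one piece of purchasing evidence against every rule;
    return (compliance ratio, list of failed rule names)."""
    failed = [name for name, check in RULES if not check(text)]
    return 1 - len(failed) / len(RULES), failed
```

Aggregating these per-document ratios over time is what yields the continuously updated compliance report the abstract describes.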


2014 ◽  
Vol 610 ◽  
pp. 695-698
Author(s):  
Qian Tao ◽  
Bo Pan ◽  
Wen Quan Cui

In recent years, the rapid development of cloud computing has brought significant innovation to the whole IT industry. For local task scheduling on each computational node of the top-level model of a weapon network, an open task-scheduling framework was introduced, a task-acceptance control scheme based on load balancing and quality of service (QoS) was designed, and an improved constant bandwidth server algorithm was presented. Simulation results show that the scheduling policies improve scheduling speed as the number of tasks increases and better meet the real-time requirements of the complex, large-scale tactical training evaluation system.
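The task-acceptance control step typically reduces to a utilisation test: a new task is admitted only if the node's total bandwidth demand stays under a schedulability bound, after which a constant bandwidth server serves it at budget/period. A minimal sketch, with the bound value and task representation as assumptions rather than the paper's exact parameters:

```python
def admit(tasks, new_task, u_max=0.69):
    """Accept `new_task` = (cost, period) only if total utilisation stays
    under the bound u_max; a constant bandwidth server would then serve it
    with bandwidth cost/period, isolating it from the other tasks."""
    utilisation = sum(c / t for c, t in tasks) + new_task[0] / new_task[1]
    return utilisation <= u_max
```

Rejecting over-budget tasks at admission time is what lets the remaining tasks keep their real-time guarantees even as load grows.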

