Survey on Data Integrity, Recovery, and Proof of Retrievability Techniques in Cloud Storage

2018 ◽  
Vol 7 (3.6) ◽  
pp. 55
Author(s):  
Neha Narayan Kulkarni ◽  
Shital Kumar A. Jain

Technologies are evolving rapidly and have become both the source and the sink of data. Data is generated in large volumes, in structured and unstructured forms, giving rise to "Big Data" that requires substantial storage capacity. There are two possible solutions: increase local storage or use cloud storage. The cloud makes data available to the user anytime and anywhere, and allows users to store their data without a large upfront investment. However, keeping data in the cloud raises concerns about data security and recovery, since an untrusted or unauthorized user may remotely modify, delete, or replace the data. Therefore, different models have been proposed for data integrity checking and proof of retrievability. This paper presents a literature review of techniques for data integrity, data recovery, and proof of retrievability.
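
To make the idea of a proof-of-retrievability check concrete, the sketch below shows a minimal challenge-response spot check: the owner tags blocks with an HMAC before uploading, later challenges the server on randomly chosen blocks, and verifies the returned blocks against the stored tags. This is an illustrative simplification, not one of the surveyed schemes; the block size, sampling strategy, and key handling are assumptions made for the example.

```python
# Minimal, illustrative spot-check in the spirit of proof-of-retrievability schemes.
# Block size, key handling, and sampling strategy are assumptions for this example.
import hashlib
import hmac
import os
import random

BLOCK_SIZE = 4096  # assumed block size

def tag_blocks(data: bytes, key: bytes) -> list[bytes]:
    """Owner side: compute an HMAC tag per block before uploading."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def challenge(num_blocks: int, sample: int) -> list[int]:
    """Verifier side: pick random block indices to challenge."""
    return random.sample(range(num_blocks), min(sample, num_blocks))

def prove(data: bytes, indices: list[int]) -> list[bytes]:
    """Server side: return the challenged blocks."""
    return [data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE] for i in indices]

def verify(proof: list[bytes], indices: list[int], tags: list[bytes], key: bytes) -> bool:
    """Verifier side: recompute tags for the returned blocks and compare."""
    return all(
        hmac.compare_digest(hmac.new(key, blk, hashlib.sha256).digest(), tags[i])
        for blk, i in zip(proof, indices)
    )

if __name__ == "__main__":
    key = os.urandom(32)
    data = os.urandom(10 * BLOCK_SIZE)      # stand-in for outsourced data
    tags = tag_blocks(data, key)            # kept by the data owner
    idx = challenge(len(tags), sample=3)    # random spot-check
    print("integrity holds:", verify(prove(data, idx), idx, tags, key))
```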

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Lin Yang

In recent years, cloud data has attracted increasing attention. Because users do not have absolute control over data stored on a cloud server, the cloud storage provider must supply evidence that the data are kept intact if users are to retain control over their data. When users are given full management rights, they can independently install operating systems and applications and choose self-service platforms and remote management tools to manage and control the host according to their own preferences. This paper introduces a cloud data integrity verification algorithm for sustainable computing in accounting informatization, reviews the advantages and disadvantages of existing data integrity proof mechanisms, and examines the new requirements of the cloud storage environment. An LBT-based big data integrity proof mechanism is proposed, which adopts a multibranch path tree as the underlying data structure, adds rank information to it, and provides a corresponding data integrity detection algorithm. The proposed verification algorithm and two other integrity verification algorithms are compared in simulation experiments. The results show that, for 500 data blocks, the proposed scheme reduces computation time by about 10% relative to scheme 1 and about 5% relative to scheme 2; as the number of operated data blocks grows, the execution time of schemes 1 and 2 increases while that of the proposed scheme remains essentially unchanged, and its computational cost is lower than that of both schemes. The proposed scheme can verify the integrity of cloud storage data and offers clear verification advantages, which is of practical significance for big data integrity verification.
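
As an illustration of the kind of structure described, the sketch below builds a rank-annotated multibranch hash tree over data blocks and checks a block against the root using an authentication path. The branching factor, hash function, and proof layout are assumptions made for this example and are not taken from the paper's exact LBT design.

```python
# Illustrative rank-annotated multibranch hash tree; branching factor, hash choice,
# and proof layout are assumptions made for this sketch.
import hashlib

BRANCH = 4  # assumed branching factor of the multibranch path tree

def h(payload: bytes) -> bytes:
    return hashlib.sha256(payload).digest()

def leaf(block: bytes) -> tuple[bytes, int]:
    # A leaf stores the hash of its block and rank 1 (one block beneath it).
    return h(b"leaf" + block), 1

def parent(children: list[tuple[bytes, int]]) -> tuple[bytes, int]:
    # An internal node hashes its children's (hash, rank) pairs and sums their ranks.
    rank = sum(r for _, r in children)
    payload = b"node" + rank.to_bytes(8, "big") + b"".join(hsh for hsh, _ in children)
    return h(payload), rank

def build_root(blocks: list[bytes]) -> tuple[bytes, int]:
    level = [leaf(b) for b in blocks]
    while len(level) > 1:
        level = [parent(level[i:i + BRANCH]) for i in range(0, len(level), BRANCH)]
    return level[0]

def verify_block(block: bytes, path: list[tuple[int, list[tuple[bytes, int]]]],
                 root: tuple[bytes, int]) -> bool:
    """path: per level, the node's position among its siblings and the sibling (hash, rank) pairs."""
    node = leaf(block)
    for pos, siblings in path:
        children = siblings[:pos] + [node] + siblings[pos:]
        node = parent(children)
    return node == root

if __name__ == "__main__":
    blocks = [bytes([i]) * 32 for i in range(BRANCH)]
    root = build_root(blocks)
    i = 2
    siblings = [leaf(b) for j, b in enumerate(blocks) if j != i]
    print("block verified:", verify_block(blocks[i], [(i, siblings)], root))
```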


In the cryptocurrency era, blockchain is one of the most rapidly growing information technologies for securing data. Data tampering and authentication problems commonly occur in centralized servers when data is shared and stored. Blockchain provides a platform for big data and cloud storage that enhances security by keeping malicious users out. In this paper, we give an exhaustive description of blockchain, its necessity, features, and applications. Blockchain is analyzed for different domains such as big data, cloud, the Internet of Things, and mobile cloud, and the different V's of big data are compared against blockchain. A SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis is performed to address the merits and limitations of blockchain technology. We survey data security, data storage, data sharing, and data authentication through blockchain technology and discuss the challenges that must be overcome in big data and cloud storage. The detailed comparative analysis shows that blockchain technology addresses the problems of big data storage and data security in the cloud.


2020 ◽  
Vol 5 (3) ◽  
pp. 172-177
Author(s):  
Mislav Radic ◽  
Tracy M Frech

Since it was first used in 1997, the term "big data" has been popularized; however, the concept of big data is relatively new to medicine. Big data refers to methods and techniques for systematically retrieving, collecting, managing, and analyzing very large and complex sets of structured and unstructured data that cannot be adequately processed with traditional data-processing methods. Integrating big data in rare diseases with low prevalence and incidence, such as systemic sclerosis, is of particular importance. We conducted a literature review of the use of big data in systemic sclerosis. The volume of data on systemic sclerosis has grown steadily in recent years; however, big data methods have not been widely used. This largely untapped source of data needs to be used more fully to unleash its potential.


2020 ◽  
Author(s):  
Anil Kumar G

Cloud computing offers different kinds of services to the end user, so that the end user can store data and access it on demand wherever it is needed. Nowadays, IT industries are outsourcing their data by storing it remotely in the cloud to reduce the load on local storage, thereby reducing hardware, software, and maintenance costs. In spite of these benefits, the major problem with data storage is that there is no guarantee of data consistency and integrity, which has become a major hurdle to using the services offered by the cloud. This paper therefore surveys existing schemes that prove data integrity and retrievability.


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1990
Author(s):  
Khalil Ahmad Alsulbi ◽  
Maher Ali Khemakhem ◽  
Abdullah Ahamd Basuhail ◽  
Fathy Eassa Eassa ◽  
Kamal Mansur Jambi ◽  
...  

The amount of Big Data generated from different sources is increasing significantly with each passing day, to the extent that it is becoming challenging for traditional storage methods to hold this massive amount of data. For this reason, most organizations have turned to third-party cloud storage. Cloud storage has advanced in recent times, but it still faces numerous challenges with regard to security and privacy. This paper discusses Big Data security and privacy challenges and the minimum requirements that future solutions must provide. The main objective of this paper is to propose a new technical framework to control and manage Big Data security and privacy risks. A design science research methodology is used to carry out this project. The proposed framework takes advantage of Blockchain technology to provide secure storage of Big Data by managing its metadata and policies and eliminating external parties, thereby maintaining data security and privacy. Additionally, it uses mobile agent technology to gain the associated benefits in overall system performance. We present a prototype implementation of our proposed framework using the Ethereum Blockchain in a real data storage scenario. The empirical results and framework evaluation show that our proposed framework provides an effective solution for secure data storage in a Big Data environment.
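
The sketch below illustrates the general pattern such a framework relies on: the data itself stays off-chain in cloud storage, while only its metadata (here a SHA-256 digest plus owner and timestamp) is recorded on a ledger and checked on retrieval. An in-memory dictionary stands in for the Ethereum contract storage, and the metadata layout and names are assumptions made for the example, not the paper's actual framework.

```python
# Illustrative sketch: keep data off-chain, record only its metadata (digest, owner,
# timestamp) on a ledger, and verify the digest on retrieval. The dict below is a
# stand-in for on-chain contract storage; names and layout are assumptions.
import hashlib
import time

ledger: dict[str, dict] = {}  # stand-in for Ethereum contract storage

def register(file_id: str, data: bytes, owner: str) -> None:
    """Record the data's digest and ownership metadata on the ledger."""
    ledger[file_id] = {
        "sha256": hashlib.sha256(data).hexdigest(),
        "owner": owner,
        "registered_at": time.time(),
    }

def verify(file_id: str, data: bytes) -> bool:
    """On retrieval from cloud storage, recompute the digest and compare with the ledger."""
    entry = ledger.get(file_id)
    return entry is not None and entry["sha256"] == hashlib.sha256(data).hexdigest()

if __name__ == "__main__":
    blob = b"sensor readings ..."        # stand-in for a Big Data object kept in cloud storage
    register("dataset-001", blob, owner="0xOwnerAddress")
    print(verify("dataset-001", blob))          # True
    print(verify("dataset-001", blob + b"x"))   # False: tampering detected
```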


2020 ◽  
Vol 12 (2) ◽  
pp. 634 ◽  
Author(s):  
Diana Martinez-Mosquera ◽  
Rosa Navarrete ◽  
Sergio Lujan-Mora

The work presented in this paper is motivated by the observation that a complete and up-to-date systematic literature review (SLR) consolidating the research on Big Data modeling and management is missing. This study answers three research questions. The first is how the number of published papers on Big Data modeling and management has evolved over time. The second is whether the research focuses on semi-structured and/or unstructured data and which techniques are applied. The third identifies the trends and gaps with respect to three key concepts: the data source, the modeling, and the database. As a result, 36 studies, collected from the most important scientific digital libraries and covering the period between 2010 and 2019, were deemed relevant. Moreover, we present a complete bibliometric analysis in order to provide detailed information about the authors and the publication data in a single document. This SLR reveals very interesting facts. For instance, Entity Relationship and document-oriented models are the most researched at the conceptual and logical abstraction levels, respectively, and MongoDB is the most frequent implementation at the physical level. Furthermore, 2.78% of the studies propose approaches oriented to hybrid databases with a real case for structured, semi-structured, and unstructured data.
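
As a small illustration of the document-oriented physical model the review identifies as most common, the sketch below stores a semi-structured bibliographic record as a single nested MongoDB document via pymongo. The connection string, database, collection, and field layout are assumptions made for the example, not taken from the reviewed studies.

```python
# Illustrative document-oriented physical model: one nested MongoDB document holds
# semi-structured data that an Entity Relationship model would split across tables.
# Connection string, database, collection, and field names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
articles = client["slr_demo"]["articles"]

articles.insert_one({
    "title": "An example Big Data modeling study",
    "venue": {"year": 2020, "volume": 12, "issue": 2},
    "authors": ["Author One", "Author Two"],
    "keywords": ["Big Data", "NoSQL", "modeling"],  # semi-structured: lists and nesting
})

print(articles.find_one({"venue.year": 2020}, {"title": 1, "_id": 0}))
```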

