A Three-Layer Privacy Preserving Cloud Storage Scheme Based on Computational Intelligence in Fog Computing

Recent years have witnessed the development of cloud computing technology. With the explosive growth of unstructured data, cloud storage technology has advanced rapidly. However, in the current storage model, users' data is stored entirely in cloud servers; users therefore lose direct control over their data and face the risk of privacy leakage. Traditional privacy protection schemes are usually based on encryption, but such methods cannot effectively resist attacks launched from inside the cloud server. To address this problem, we propose a three-layer storage framework based on fog computing. The proposed framework can exploit the advantages of cloud storage while protecting data privacy. A Hash-Solomon code algorithm is designed to divide data into different parts, so that a small portion of the data can be kept on the local machine and the fog server to preserve privacy. Furthermore, based on computational intelligence, the algorithm can compute the proportion of data stored in the cloud, the fog, and the local machine, respectively. Theoretical security analysis and experimental evaluation confirm the feasibility of our scheme, which is a powerful supplement to existing cloud storage schemes.
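The exact Hash-Solomon construction is not reproduced here. Purely as an illustration of the storage-split idea, the sketch below partitions a file's blocks among the three layers according to assumed proportions; the 0.95/0.04 ratios, block size, and function name are hypothetical, and unlike Hash-Solomon the split adds no coding-based redundancy.

```python
# Minimal sketch (not the paper's Hash-Solomon algorithm): partition a file's
# blocks among cloud, fog, and local storage according to assumed proportions.
def split_for_three_layers(data: bytes, cloud_ratio=0.95, fog_ratio=0.04,
                           block_size=1024):
    """Keep the bulk of the blocks in the cloud and only small shares below it."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    n = len(blocks)
    n_cloud = int(n * cloud_ratio)
    n_fog = int(n * fog_ratio)
    return {
        "cloud": blocks[:n_cloud],                # bulk of the data
        "fog": blocks[n_cloud:n_cloud + n_fog],   # small share on the fog server
        "local": blocks[n_cloud + n_fog:],        # remainder on the local machine
    }

if __name__ == "__main__":
    layers = split_for_three_layers(b"x" * 100_000)
    print({name: len(part) for name, part in layers.items()})
```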



2020 · Vol 8 (6) · pp. 4129-4134

Cloud computing has become one of the most prominent fields in information technology. The cloud suits businesses of every scale as well as personal use, offering storage, computation, data and resource management, application hosting, and more. However, the ever-growing volume of data held on cloud servers has created specific issues such as data maintainability, network elasticity, and the management of Internet of Things (IoT) devices. Recent technological advances have given rise to fog computing, a decentralized extension of the cloud in which fog nodes help overcome these server-side issues. In this paper we briefly describe how cloud issues can be mitigated using fog nodes, with particular attention to load balancing, an area in which little substantial work on fog computing has yet been done. We propose a scheduler that admits devices into a job queue before they are connected to the cloud, and we discuss the application of scheduling algorithms such as FCFS, SJF, priority scheduling (PS), round robin (RR), and weighted round robin (WRR) to fog nodes, along with their merits and demerits (a simplified comparison is sketched below). We also examine how fog nodes provide moderate storage, low latency, heterogeneity, allocation of and interaction with resource-limited IoT devices, and security within a cloud-to-fog architecture. Allocating IoT devices to fog nodes raises the serious issue of load balancing on those nodes. Our study compares the above scheduling algorithms on load-balancing factors such as resource allocation and balancing among fog nodes, device identification, fog-node authentication, bandwidth consumption, location awareness, response time, maintenance cost, intrusion detection, fault tolerance, and maintainability.
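As a minimal illustration of the job-queue idea, the sketch below compares FCFS and SJF ordering for jobs already queued at a single fog node; the `Job` model and burst times are assumptions, and a real fog scheduler would also weigh bandwidth, location awareness, and node capacity.

```python
# Hypothetical job model: each IoT request has a device id and a burst time.
from collections import namedtuple

Job = namedtuple("Job", ["device_id", "burst"])

def fcfs(queue):
    """First-Come-First-Served: serve jobs in the order they entered the queue."""
    return list(queue)

def sjf(queue):
    """Shortest-Job-First: serve the job with the smallest burst time first."""
    return sorted(queue, key=lambda job: job.burst)

def average_waiting_time(ordered_jobs):
    """Average time a job waits before a single fog node starts serving it."""
    clock = total_wait = 0
    for job in ordered_jobs:
        total_wait += clock
        clock += job.burst
    return total_wait / len(ordered_jobs)

if __name__ == "__main__":
    queue = [Job("iot-1", 8), Job("iot-2", 2), Job("iot-3", 4)]
    print("FCFS average wait:", average_waiting_time(fcfs(queue)))  # 6.0
    print("SJF  average wait:", average_waiting_time(sjf(queue)))   # ~2.67
```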


Cloud storage services are growing quickly and becoming more prevalent. Cloud storage providers (CSPs) offer storage as a paid service that allows organizations to outsource their confidential data to remote servers. However, preserving identity privacy and protecting data from an untrusted cloud is difficult, particularly as group membership changes over time. The CSP must also be protected from illegitimate users who attempt to corrupt data on its servers. Information therefore needs to be safeguarded from unauthorized parties through cloud storage encryption schemes. Every such scheme assumes that the storage provider is secure and cannot be compromised; in practice, however, some authorities may compel providers to reveal user details and secrets stored in the cloud, thereby bypassing the storage encryption entirely. In this paper, a new scheme is introduced to protect user privacy through a deniable ciphertext-policy attribute-based encryption (CP-ABE) scheme that implements a cloud storage encryption plan. Since coercers cannot tell whether the extracted secrets are genuine, the CSP can preserve the privacy of the user.
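As a toy illustration only (CP-ABE enforces the policy cryptographically inside the ciphertext, which is not modelled here), the sketch below shows the attribute-policy satisfaction logic that such a ciphertext policy encodes; the attribute names and the policy format are hypothetical.

```python
# Toy sketch of AND/OR access-policy evaluation over user attributes.
def satisfies(policy, attributes):
    """Evaluate a nested policy such as ("AND", "doctor", ("OR", "cardiology", "icu"))."""
    if isinstance(policy, str):           # leaf: a single required attribute
        return policy in attributes
    op, *operands = policy
    results = (satisfies(p, attributes) for p in operands)
    return all(results) if op == "AND" else any(results)

if __name__ == "__main__":
    policy = ("AND", "doctor", ("OR", "cardiology", "icu"))
    print(satisfies(policy, {"doctor", "icu"}))         # True: policy met
    print(satisfies(policy, {"nurse", "cardiology"}))   # False: decryption denied
```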


Fog computing is one of the latest technologies used by cloud providers to safeguard user data and the providers' own data servers, acting as a mediator between end-user hardware and remote cloud servers. Cloud computing still has many vulnerabilities, and the privacy of user data is the main issue: once users upload data to a cloud server, they lose control over it, since they cannot know what the provider does with it, and the provider could even sell it for profit without the users' knowledge. Fog computing provides computation, storage, and networking services between users and cloud data centers, but those networking services can themselves leak user data without the user noticing, because public clouds are not sufficiently secure and users do not know where their data is stored. Splitting data into small parts can also lead to data loss and opens a path for attackers to steal data or substitute one piece of data for another. Intelligence can be applied in the fog layer to make better use of computing resources and to improve security. Applying multiple layers of security features using Kubernetes can improve the service offered to users and keep their data safe from attackers; whenever a user loses the connection to the server, Kubernetes re-establishes it. RSA256 encryption is applied to user data to provide better security between the cloud server and its users.
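As a hedged sketch of the encryption step (assuming "RSA256" refers to RSA with SHA-256-based OAEP; the 2048-bit key size and the use of the Python `cryptography` package are assumptions), the example below wraps a short secret with an RSA public key. In practice RSA would typically wrap a symmetric key that protects the bulk user data rather than encrypt the data itself.

```python
# Hedged sketch of RSA-OAEP (SHA-256) encryption using the `cryptography` package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Hypothetical key pair; in the described system the user would hold the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# RSA-OAEP can only encrypt short payloads, so we encrypt a small secret
# (e.g. a per-user data-encryption key), not the bulk data.
secret = b"per-user data-encryption key"
ciphertext = public_key.encrypt(secret, oaep)
assert private_key.decrypt(ciphertext, oaep) == secret
```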


2020 · Vol 2020 · pp. 1-16
Author(s): Jing Liu, Changbo Yuan, Yingxu Lai, Hua Qin

Industrial Internet technology has developed rapidly, and the security of industrial data has received much attention. At present, industrial enterprises lack a safe and professional data security system. Thus, industries urgently need a complete and effective data protection scheme. This study develops a three-layer framework with local/fog/cloud storage for protecting sensitive industrial data and defines a threat model. For real-time sensitive industrial data, we use the improved local differential privacy algorithm M-RAPPOR to perturb sensitive information. We encode the desensitized data using Reed–Solomon (RS) encoding and then store them in local equipment to realize low cost, high efficiency, and intelligent data protection. For non-real-time sensitive industrial data, we adopt a cloud-fog collaborative storage scheme based on AES-RS encoding to invisibly provide multilayer protection. We adopt the optimal solution of distributed storage in local equipment and the cloud-fog collaborative storage scheme in fog nodes and cloud nodes to alleviate the storage pressure on local equipment and to improve security and recoverability. According to the defined threat model, we conduct a security analysis and prove that the proposed scheme can provide stronger data protection for sensitive data. Compared with traditional methods, this approach strengthens the protection of sensitive information and ensures real-time continuity of open data sharing. Finally, the feasibility of our scheme is validated through experimental evaluation.
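The paper's M-RAPPOR variant is not reproduced here; as a minimal sketch of the randomized-response idea underlying RAPPOR-style local differential privacy, the example below perturbs a single bit with the standard flip probability derived from the privacy parameter epsilon.

```python
# Minimal sketch of binary randomized response (the core of RAPPOR-style
# local differential privacy), not the paper's M-RAPPOR algorithm.
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), otherwise flip it."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_keep else 1 - bit

if __name__ == "__main__":
    random.seed(0)
    true_bits = [1] * 1000
    reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
    # Roughly e/(e+1) ≈ 0.73 of the reports keep the true value; an aggregator
    # can correct for this known noise to estimate the population statistic.
    print(sum(reports) / len(reports))
```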


2021 · Vol 336 · pp. 08003
Author(s): Zhijian Qin, Lin Huo, Shicong Zhang

Data integrity validation is an important tool for addressing the problem that cloud subscribers cannot accurately know whether non-subjective changes have been made to the data they upload to cloud servers. In this paper, a data integrity verification model based on a dynamic successor tree index structure, a Bloom filter, and a Merkle tree is proposed. Block labels generated from the features of the dynamic successor tree index structure can sense whether the user's data has been changed, while the Merkle tree can track the changed data blocks, enabling the user to effectively verify the integrity of the data stored on the cloud server and providing more effective protection for the data.
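As an illustrative sketch of the Merkle-tree part of such an integrity check (the dynamic successor tree and Bloom filter are not modelled), the client keeps only the root hash; any later change to a stored block changes the recomputed root and is therefore detected.

```python
# Sketch of Merkle-root computation for integrity checking of stored blocks.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Hash each block, then repeatedly hash adjacent pairs up to a single root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
    root = merkle_root(blocks)              # kept by the cloud subscriber
    blocks[2] = b"tampered"                 # a non-subjective change on the server
    print(merkle_root(blocks) == root)      # False: integrity violation detected
```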


2021 · Vol 2021 · pp. 1-13
Author(s): Haibin Yang, Zhengge Yi, Xu An Wang, Yunxuan Su, Zheng Tu, ...

It is now common for patients and medical institutions to outsource their data to cloud storage, which can greatly reduce the burden of medical information management and storage and improve the efficiency of the entire medical industry. Group-based cloud storage systems are also widely used; for example, in a medical enterprise, employees outsource working documents to cloud storage and share them with colleagues. However, once the working documents are outsourced to cloud servers, ensuring their security is challenging because they are no longer under the physical control of the data owners. In particular, the integrity of the outsourced data must be guaranteed, and secure cloud auditing protocols are designed to address this issue. Recently, a lightweight secure auditing scheme for shared data in cloud storage was proposed. Unfortunately, we show in this paper that the proposal is not secure: the cloud server can easily forge the authentication labels, and can therefore delete all the outsourced data while still producing a correct data-possession proof, which invalidates the security of the auditing protocol. Building on the original protocol, we present an improved auditing scheme for shared data and briefly analyze its security; the results show that our new protocol is secure.
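As a naive, hedged sketch of what an auditing challenge verifies (real lightweight schemes, including the one discussed here, use homomorphic authenticators so that full blocks never need to be returned), the example below checks challenged blocks against per-block HMAC tags held by the auditor; the key handling and tag format are assumptions.

```python
# Naive challenge-response possession check with per-block HMAC tags.
import hashlib
import hmac
import secrets

def make_tags(key: bytes, blocks):
    """Bind each block to its index with an HMAC tag kept by the owner/auditor."""
    return [hmac.new(key, bytes([i]) + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def audit(key: bytes, challenge, returned_blocks, tags):
    """Verify that every challenged block still matches its original tag."""
    return all(
        hmac.compare_digest(tags[i],
                            hmac.new(key, bytes([i]) + blk, hashlib.sha256).digest())
        for i, blk in zip(challenge, returned_blocks)
    )

if __name__ == "__main__":
    key = secrets.token_bytes(32)
    blocks = [b"record-%d" % i for i in range(8)]        # outsourced data
    tags = make_tags(key, blocks)                        # kept by the auditor
    challenge = [1, 4, 6]                                # random indices in practice
    print(audit(key, challenge, [blocks[i] for i in challenge], tags))        # True
    print(audit(key, challenge, [blocks[1], b"forged", blocks[6]], tags))     # False
```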

