A Three-Layer Privacy-Preserving Cloud Storage Scheme Based on Computational Intelligence in Fog Computing

With the explosive growth of unstructured data, cloud storage technology has attracted a great deal of attention and developed rapidly. However, in the current storage model, a user's data is held entirely on cloud servers; in other words, users lose direct control over their data and face the risk of privacy leakage. Traditional privacy-protection schemes are usually based on encryption, but such techniques cannot effectively resist attacks launched from inside the cloud server. To address this problem, we propose a three-layer storage framework based on fog computing. The proposed framework can both take full advantage of cloud storage and protect data privacy. A Hash-Solomon code algorithm is designed to divide the data into different parts; a small portion of the data is then kept on the local machine and the fog server to preserve privacy. Furthermore, based on computational intelligence, the algorithm computes the distribution proportions stored in the cloud, the fog, and the local machine, respectively. Theoretical security analysis and experimental evaluation show that the scheme is feasible and constitutes a substantial improvement over existing cloud storage schemes.
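The abstract does not specify the Hash-Solomon construction itself, but the core idea of splitting data into blocks and allocating them to the cloud, fog, and local layers by computed proportions can be sketched as follows. The proportions, block size, and sample data here are assumptions for illustration; the real scheme additionally adds coding redundancy so that no single layer can reconstruct the data alone.

```python
import hashlib

def split_and_distribute(data: bytes, proportions=(0.90, 0.07, 0.03), block_size=4):
    """Split data into fixed-size blocks and allocate them to the cloud,
    fog, and local layers by proportion (illustrative sketch only; the
    paper's Hash-Solomon code is an erasure code with redundancy)."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    n = len(blocks)
    n_cloud = round(n * proportions[0])
    n_fog = round(n * proportions[1])
    layers = {
        "cloud": blocks[:n_cloud],
        "fog": blocks[n_cloud:n_cloud + n_fog],
        "local": blocks[n_cloud + n_fog:],
    }
    # A digest of the whole file lets the owner check integrity later.
    digest = hashlib.sha256(data).hexdigest()
    return layers, digest

def reassemble(layers) -> bytes:
    return b"".join(layers["cloud"] + layers["fog"] + layers["local"])

layers, digest = split_and_distribute(b"patient-record-0001:heart-rate=72")
restored = reassemble(layers)
assert hashlib.sha256(restored).hexdigest() == digest
```

The key property the framework relies on is that the cloud layer alone never holds enough information to recover the plaintext; the sketch above only captures the distribution step, not that security guarantee.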



2020, Vol. 8 (6), pp. 4129-4134

Cloud computing has become one of the most prominent fields in information technology. The cloud suits businesses of all sizes as well as personal use, covering storage, computation, management of data and resources, running applications, and more. The growing volume of data on cloud servers has created specific issues such as data maintainability, network elasticity, and management of Internet of Things (IoT) devices. Recent technological progress has given rise to fog computing, a decentralized extension of the cloud whose fog nodes help overcome cloud-server issues. In this paper we present a brief account of how cloud issues can be mitigated using the benefits of fog nodes, with particular attention to load balancing; little substantial work has so far addressed load balancing of fog nodes in fog computing. This paper proposes a scheduler that places devices to be connected to the cloud into a job queue, and discusses applying scheduling algorithms such as FCFS, SJF, priority scheduling, round robin, and weighted round robin over fog nodes, along with their merits and demerits. Finally, we compare the load-balancing parameters of these scheduling algorithms. We also examine how fog nodes provide storage, low latency, heterogeneity, allocation of and interaction with IoT devices, and security, within a cloud-to-fog architecture. During allocation of IoT devices to fog nodes a serious issue arises, namely load balancing on the fog nodes. Our detailed study compares the above scheduling algorithms on load-balancing factors such as resource allocation and balancing among fog nodes, identification of devices, authentication of fog nodes, bandwidth consumption, location awareness, response time, cost of maintenance, intrusion detection, fault tolerance, and maintainability.
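Two of the schedulers named above can be illustrated concretely. The sketch below (not code from the paper; the burst times are made-up service demands) computes per-job waiting times under FCFS and non-preemptive SJF for jobs that all arrive at time zero, showing why SJF yields a lower average wait:

```python
def fcfs_waiting(burst_times):
    """First-Come-First-Served: each job waits for every job queued before it."""
    waits, elapsed = [], 0
    for b in burst_times:
        waits.append(elapsed)
        elapsed += b
    return waits

def sjf_waiting(burst_times):
    """Non-preemptive Shortest-Job-First (all jobs arrive at t=0):
    running shorter jobs first minimises the average waiting time."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits, elapsed = [0] * len(burst_times), 0
    for i in order:
        waits[i] = elapsed
        elapsed += burst_times[i]
    return waits

bursts = [6, 8, 7, 3]            # assumed service demands of four IoT requests
avg = lambda xs: sum(xs) / len(xs)
print(avg(fcfs_waiting(bursts)))   # 10.25
print(avg(sjf_waiting(bursts)))    # 7.0
```

Priority scheduling, round robin, and weighted round robin follow the same pattern with different ordering rules; round robin additionally interleaves jobs by a time quantum, trading average wait for fairness and response time.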


Nowadays, cloud computing is an emerging paradigm in computer science. Cloud computing is a set of resources and services offered over the network or the Internet, and it extends various computing techniques such as grid computing and distributed computing. Today cloud computing is used in industrial, research, and academic fields alike, serving its users by providing virtual resources via the Internet. As the field spreads, new techniques are being developed for cloud security; this growth of the cloud environment also increases the security challenges faced by cloud designers. Since clients store their data in the cloud, a lack of cloud security can cost the provider the users' trust. In this paper we discuss cloud database and data mining security issues from perspectives such as multi-tenancy, elasticity, reliability, and availability, across sectors such as industry and research, and we also examine existing security techniques and approaches for a secure cloud environment through big data concepts. In addition, this paper surveys various aspects of the industrial, educational, and research domains. It should enable researchers and practitioners to understand the various security threats, models, and tools proposed for existing cloud storage.


Cloud storage services are growing rapidly and becoming more prevalent. Cloud storage providers (CSPs) offer storage as a service to users: a paid facility that allows an organization to outsource its confidential data for storage on remote servers. However, identity privacy and protecting data from an untrusted cloud are difficult concerns because of the frequent change of membership. The CSP must also be secured against illegitimate parties who would corrupt data on cloud servers. There is thus a need to safeguard information from unauthorized access by establishing cloud storage encryption schemes. Every such scheme assumes that cloud storage providers are safe and cannot be hacked; in practice, however, some authorities may compel providers to disclose user details and secret information held in the cloud, thereby bypassing the storage encryption entirely. In this paper, a new scheme is introduced to protect user privacy through a deniable CP-ABE (ciphertext-policy attribute-based encryption) scheme that implements a cloud storage encryption plan. Since coercers cannot tell whether the extracted secrets are valid or not, the CSP preserves the privacy of the user.


Fog computing is one of the latest technologies used by cloud providers to safeguard user data and the providers' data servers, acting as a mediator between hardware and remote or cloud servers. Cloud computing still has many vulnerabilities, and privacy of user data is the main issue: once users upload data to a cloud server, they lose their rights over their own data, because they do not know what cloud providers do with it; providers could sell user data for their own profit without the users' knowledge. Fog computing provides services such as computation, storage, and networking between users and cloud data centers. Even with these networking services, users can suffer privacy loss or leakage unknowingly, because public clouds are not secure enough and users do not know where their data is stored on the cloud servers. Breaking data into small parts can lead to data loss and can open a way for attackers to steal data, or even to substitute one piece of data for another. Computational intelligence can be applied in fog computing for efficient use of computing resources and for security. Applying multiple layers of security features using Kubernetes can improve the service: users' data is better protected from attackers, and whenever a user loses the connection to the server, Kubernetes re-establishes it. RSA256 encryption is applied to users' data, providing better security between the cloud server and the users.


2020, Vol. 10 (1), pp. 431-443
Author(s): Rachna Jain, Anand Nayyar

Abstract. In the cloud computing paradigm, user and organizational data is stored remotely on cloud servers. Users and organizations can access applications, services, and infrastructure on demand from a cloud server through the Internet; notwithstanding the various advantages, numerous challenges and issues persist in securing cloud data access and storage. These challenges have highlighted additional security and privacy concerns, since cloud service providers are solely in charge of storing and processing the organization's data outside its physical boundaries. Hence, a robust security scheme is required to keep the organization's sensitive data protected and out of the reach of attackers. Researchers around the globe have proposed varied security frameworks with different sets of security parameters and varying computational cost; practical implementation of these frameworks at low computational cost remains a hard challenge, as the security parameters have not been standardized.
Methodology. To secure the cloud and address all security parameters, we propose a Registration, Authentication, Storage, and Data Access (RASD) framework for protecting organizational data stored in cloud directories using a novel security scheme, HEETPS. The RASD framework involves a stage-by-stage process: registration of users, authentication of the user's secret key, storage of data, and data access on the cloud directory. Once the framework is applied to cloud servers, all sensitive data stored on the cloud becomes accessible only to authenticated users. The main advantages of the proposed RASD framework are its simple implementation, high security, and comparatively low computational cost. Moreover, we propose a homomorphic, privacy-preserving, practical equality-testing scheme structured as a schematic algorithm, Che Aet DPs. This algorithm performs homomorphic encryption with subtractive equality testing at low computational complexity. To evaluate the security capability, we tested the proposed RASD framework against other existing protocols, including a privacy-preserving auditing protocol, a batch auditing protocol, a secure network coding protocol, and encrypted data processing with homomorphic re-encryption.
Findings. Experimental results demonstrated that the RASD framework not only provides a strong security layer for sensitive data but also reduces computational cost and performs better than existing protocols for cloud computing.
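The Che Aet DPs algorithm is not specified in the abstract, but "homomorphic encryption with subtractive equality testing" can be sketched generically with the well-known Paillier cryptosystem, which is additively homomorphic: multiplying ciphertexts adds plaintexts, so E(a) * E(b)^(-1) decrypts to a - b, and the tester learns only whether the difference is zero. The toy key sizes below are for illustration only.

```python
import random, math

# Toy Paillier cryptosystem (tiny demo primes; real keys use >=1024-bit primes).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # Carmichael lambda(n)
g = n + 1                             # standard generator choice
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def eq_test(ca: int, cb: int) -> bool:
    """Subtractive equality test: E(a) * E(b)^(-1) decrypts to a - b (mod n),
    so only equality/inequality is revealed, not the values themselves."""
    diff = (ca * pow(cb, -1, n2)) % n2
    return dec(diff) == 0

assert dec(enc(123)) == 123
assert eq_test(enc(42), enc(42))
assert not eq_test(enc(42), enc(41))
```

This is a sketch of the subtractive-equality-testing idea only; the actual Che Aet DPs construction and its key management within RASD are not described in the abstract.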


2021, Vol. 336, pp. 08003
Author(s): Zhijian Qin, Lin Huo, Shicong Zhang

Data integrity verification is considered an important tool for addressing the problem that cloud subscribers cannot accurately know whether non-subjective changes have occurred in the data they upload to cloud servers. In this paper, a data integrity verification model based on a dynamic successor tree index structure, a Bloom filter, and a Merkle tree is proposed. Block labels generated from the features of the dynamic successor tree index structure can sense whether changes have been made to the user's data, while the Merkle tree can track the changed data blocks, enabling the user to effectively verify the integrity of the data stored on the cloud server and providing more effective protection for the data.
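The Merkle-tree component of the model can be illustrated with a short sketch (the dynamic successor tree and Bloom filter components are omitted, and the block contents are made up): any change to a block changes the root hash, and keeping the leaf hashes lets the verifier pinpoint which block changed.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build a Merkle tree bottom-up and return its root hash."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
leaf_hashes = [h(b) for b in blocks]        # retained by the verifier

# Simulate a non-subjective change on the server side.
tampered = list(blocks)
tampered[2] = b"block-2-modified"
assert merkle_root(tampered) != root        # the root hash exposes the change

# Track exactly which blocks changed by recomparing leaf hashes.
changed = [i for i, b in enumerate(tampered) if h(b) != leaf_hashes[i]]
print(changed)  # [2]
```

In the full protocol the server would return a logarithmic-size authentication path instead of all blocks; this sketch only shows the detection and tracking property.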


2021, Vol. 2021, pp. 1-13
Author(s): Haibin Yang, Zhengge Yi, Xu An Wang, Yunxuan Su, Zheng Tu, ...

It is now common for patients and medical institutions to outsource their data to cloud storage. This can greatly reduce the burden of medical information management and storage and improve the efficiency of the entire medical industry. In some cases a group-based cloud storage system is used: in a medical enterprise, for example, employees outsource working documents to cloud storage and share them with colleagues. However, once working documents are outsourced to cloud servers, ensuring their security is a challenging problem, since they are no longer physically controlled by the data owners. In particular, the integrity of the outsourced data should be guaranteed, and secure cloud auditing protocols are designed to solve this issue. Recently, a lightweight secure auditing scheme for shared data in cloud storage was proposed. Unfortunately, in this paper we find that this proposal is not secure: it is easy for the cloud server to forge the authentication label, so the server can delete all the outsourced data while still providing a correct data possession proof, which invalidates the security of the cloud auditing protocol. Building on the original security auditing protocol, we provide an improved protocol for shared data and analyze its security; the results show that our new protocol is secure.
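The flaw described above is that the server can forge authentication labels. As a generic illustration of why keyed labels prevent this (this is a textbook sketch, not the paper's protocol; the block contents are made up), the snippet below binds each block to its index with an HMAC tag the server cannot forge without the key, then runs a simple challenge-response audit:

```python
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)          # held by the data owner / auditor

def tag(index: int, block: bytes) -> bytes:
    """Keyed authentication label. Binding the index prevents block
    swapping; without KEY the server cannot forge a valid tag."""
    return hmac.new(KEY, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

blocks = [b"rec-a", b"rec-b", b"rec-c"]
tags = [tag(i, b) for i, b in enumerate(blocks)]   # outsourced alongside the data

def audit(server_blocks) -> bool:
    """Auditor challenges random indices; the server must answer with data
    whose recomputed tags match. (Real schemes return a compact aggregated
    proof instead of the raw blocks.)"""
    challenge = [secrets.randbelow(len(blocks)) for _ in range(2)]
    return all(hmac.compare_digest(tag(i, server_blocks[i]), tags[i])
               for i in challenge)

assert audit(blocks)                    # an honest server passes
assert not audit([b"x", b"y", b"z"])    # deleted/replaced data fails the audit
```

The attack on the original scheme succeeds precisely because its labels lack this unforgeability: a server that can mint labels itself can answer challenges without holding the data.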


Author(s): Deepika N, Durga P, Gayathri N, Murugesan M

Cloud security plays an essential role in the cloud, where we preserve our data in cloud storage. With the rapid development of cloud computing, more and more clients want to keep their data on public cloud servers (PCS). Cloud storage services allow users to outsource their data to cloud servers to save on local storage costs. Multiple verification tasks from different users can be performed efficiently by the auditor, and the cloud-stored data can be updated dynamically; clients can check whether their outsourced data is kept intact without downloading the whole data. In our system we use our own auditing based on token generation: using this key-generation technique, comparing the generated key values against the original keys lets us detect changes to a file. We employ a novel public verification scheme for cloud storage using indistinguishability obfuscation, which requires only lightweight computation by the auditor and delegates most computation to the cloud. The content is not only stored but also encrypted on the cloud server; anyone trying to attack at the cloud end cannot break the two different blocks. We argue the security of our scheme under the strongest security model: an adversary would first need to decrypt the files and then recombine the split files from three different locations, which is infeasible. A file can be downloaded from the server only with the file owner's permission: at download time a key is generated (code-based key generation) and sent to the file owner, and this key is required for authentication; when other users want to download, the file owner's permission is necessary.
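Two mechanisms mentioned above, the token comparison that reveals file changes and the splitting of a file across three locations, can be sketched together (an illustrative sketch only; the actual scheme uses indistinguishability obfuscation and encryption, which are omitted here, and the file contents are made up):

```python
import hashlib, hmac, secrets

OWNER_KEY = secrets.token_bytes(32)

def file_token(data: bytes) -> bytes:
    """Owner-side token: recomputing it later and comparing against the
    original reveals any change to the file."""
    return hmac.new(OWNER_KEY, data, hashlib.sha256).digest()

def split_three(data: bytes):
    """Store the file as three fragments in three locations, so no single
    location holds the whole file."""
    k = (len(data) + 2) // 3
    return data[:k], data[k:2 * k], data[2 * k:]

data = b"confidential-report-v1"
token = file_token(data)
parts = split_three(data)

# Later: recombine the fragments and verify with the token before trusting them.
restored = b"".join(parts)
assert hmac.compare_digest(file_token(restored), token)
assert not hmac.compare_digest(file_token(b"confidential-report-v2"), token)
```

In the described system the fragments would additionally be encrypted, so an attacker who obtains one or two locations learns nothing; the token check above only demonstrates the change-detection step.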


Author(s): K. Makanyadevi et al.

In recent years much research has been conducted on the benefits and efficiency of cloud storage, but questions remain about effectiveness and privacy. The effectiveness of a cloud storage system depends purely on the storage capacity and responsiveness of the server: as the size of the data grows, the responsiveness of the storage server automatically goes down. To avoid this issue researchers have found many solutions, such as placing multiple cloud servers or fixing a local cloud farm, but these run into privacy issues, and the security threats in such settings are greater. These issues need to be resolved by one powerful mechanism that provides good storage without security threats. In this paper a healthcare application is considered and a new fog-based cloud storage system, Intelligent Fog based Cloud Strategy using Edge Devices (IFCSED), is designed, in which the fog computing process provides an efficient health-data storage structure: the cloud server maintains the high-priority records without having to handle regular, non-prioritized records. The proposed strategy follows edge-based fog assistance to identify healthcare data priority by analyzing the records, identifying the priority level, classifying the priority records out of the data, and passing them to the remote cloud server, while keeping the remaining non-prioritized records on the local fog server. The fog server data is backed up at every interval using data-backup logic; these backups assure data protection and integrity on the storage medium, and the proposed IFCSED approach eliminates processing delay, as shown by time-complexity estimates.

Health records from real-world Internet of Things (IoT) devices are acquired using controllers and other related devices and delivered to the edge devices for processing. The edge processing device accumulates the incoming health data and classifies it based on the prioritization logic: sensor data arriving at regular intervals with normal values is considered regular, non-prioritized data, while data with abnormal contents, such as a mismatched heart rate or an increased pressure level, is considered prioritized. Low-priority sensor-assisted health data is moved to the fog server, which is maintained locally in the environment itself, while high-priority data is moved to the remote cloud server. The proposed approach assures time efficiency, reduction in data loss, data integrity, and storage efficiency, and these claims are supported with graphical results in the results section.
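The edge-side prioritization described above can be sketched as a simple routing rule. The thresholds, field names, and readings below are assumptions for illustration, not values from the paper: abnormal vitals are routed to the remote cloud, normal readings stay on the local fog server.

```python
def classify(reading: dict) -> str:
    """Edge-device prioritisation sketch: abnormal vitals -> high priority
    -> remote cloud; normal vitals -> low priority -> local fog server.
    (Thresholds are illustrative assumptions.)"""
    abnormal = (reading["heart_rate"] < 50 or reading["heart_rate"] > 120
                or reading["systolic_bp"] > 160)
    return "cloud" if abnormal else "fog"

readings = [
    {"patient": "p1", "heart_rate": 72,  "systolic_bp": 118},  # normal
    {"patient": "p2", "heart_rate": 138, "systolic_bp": 125},  # abnormal HR
    {"patient": "p3", "heart_rate": 68,  "systolic_bp": 171},  # abnormal BP
]

routed = {"cloud": [], "fog": []}
for r in readings:
    routed[classify(r)].append(r["patient"])

print(routed)  # {'cloud': ['p2', 'p3'], 'fog': ['p1']}
```

In the full IFCSED design the fog-resident records would additionally be backed up at fixed intervals; this sketch covers only the classify-and-route step.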

