Cryptographic Hashing Method for Security and Similarity Detection in Distributed Cloud Data

Author(s):  
A. Mohamed Divan Masood ◽  
S. K. Muthusundar

The explosive growth of data brings new challenges to data storage and management in cloud settings. These data typically have to be processed in a timely fashion in the cloud, so any added latency may cause an immense loss to the enterprises. Duplicate detection plays a central role in data management. Data deduplication computes a unique fingerprint for each data chunk using hash algorithms such as MD5 and SHA-1. The computed fingerprint is then compared against the fingerprints of existing chunks in a database dedicated to storing the chunks. As a result, a deduplication system improves storage utilization while reducing reliability. Moreover, privacy concerns for sensitive data also arise when they are outsourced by users to the cloud. Aiming to address these security challenges, this paper makes the first effort to formalize the notion of a distributed reliable deduplication system. We offer new distributed deduplication systems with higher reliability in which the data chunks are distributed across a variety of cloud servers. The security requirements differ from those of previous deduplication systems, which rely on convergent encryption.
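As a rough illustration of the fingerprint-and-lookup step described above, the sketch below splits data into fixed-size chunks, hashes each with SHA-1 (one of the algorithms the abstract names), and stores only unseen chunks in an index. The chunk size, the in-memory dictionary standing in for the chunk database, and all function names are assumptions for illustration, not details from the paper.

```python
import hashlib

CHUNK_SIZE = 4 * 1024  # assumed fixed chunk size (4 KiB); the paper does not fix one

def fingerprint(chunk: bytes) -> str:
    """Compute a unique fingerprint for a chunk (SHA-1, as named in the abstract)."""
    return hashlib.sha1(chunk).hexdigest()

def deduplicate(data: bytes, index: dict[str, bytes]) -> list[str]:
    """Split data into chunks, store only unseen chunks, and return the chunk fingerprints."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fp = fingerprint(chunk)
        if fp not in index:          # duplicate check against the chunk database
            index[fp] = chunk        # store only the first occurrence
        recipe.append(fp)            # the file is reconstructed from this recipe
    return recipe

# Example: a second upload of identical data adds no new chunks to the store.
store: dict[str, bytes] = {}
first = deduplicate(b"example payload" * 1000, store)
second = deduplicate(b"example payload" * 1000, store)
assert first == second and len(store) == len(set(first))
```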

2018 ◽  
Vol 7 (3.12) ◽  
pp. 437
Author(s):  
R Aditya Balaji ◽  
R Pragadeeeshwaran ◽  
G K. Sandhia

The most common cloud service is data storage. In order to reduce the storage space, deduplication is used. Data deduplication is the process of removing redundant copies of the same data. If a file that is already present in the cloud is uploaded again by the same user or a different user, it will not be stored a second time. Therefore the storage required is decreased, but reliability is also reduced. Data are encrypted and stored in the cloud to protect the privacy of users, and this introduces new challenges. The proposed system uses the M3 algorithm for encryption and a chunking technique for deduplication. The results of the evaluation show that security and reliability are increased in the proposed scheme.
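The abstract does not say which chunking technique is used, so the sketch below shows one common option, content-defined chunking with a sliding-window byte sum as the boundary fingerprint; the window size, boundary mask, and size limits are assumed values, and the M3 encryption step is omitted entirely.

```python
def content_defined_chunks(data: bytes, window: int = 48, mask: int = 0x1FFF,
                           min_size: int = 2048, max_size: int = 65536):
    """Yield content-defined chunks; boundaries depend only on local content,
    so inserting bytes early in a file does not shift later chunk boundaries."""
    start = 0
    rolling = 0
    for i in range(len(data)):
        rolling += data[i]
        if i >= window:
            rolling -= data[i - window]          # keep the sum over the last `window` bytes
        size = i - start + 1
        if (size >= min_size and (rolling & mask) == mask) or size >= max_size:
            yield data[start:i + 1]              # declare a chunk boundary here
            start = i + 1
    if start < len(data):
        yield data[start:]                       # final, possibly short, chunk

# Duplicate detection then works per chunk: identical regions of two files
# produce the same fingerprints even when their offsets differ.
chunks = list(content_defined_chunks(bytes(range(256)) * 64))
```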


Author(s):  
Ping Lin ◽  
K. Selçuk Candan

The cost of creating and maintaining software and hardware infrastructures for delivering web services led to a notable trend toward the use of application service providers (ASPs) and, more generally, distributed application hosting services (DAHSs). The emergence of enabling technologies, such as J2EE and .NET, has contributed to the acceleration of this trend. DAHSs rent out Internet presence, computation power, and data storage space to clients with infrastructural needs. Consequently, they are cheap and effective outsourcing solutions for achieving increased service availability and scalability in the face of surges in demand. However, ASPs and DAHSs operate within the complex, multi-tiered, and open Internet environment and, hence, they introduce many security challenges that have to be addressed effectively to convince customers that outsourcing their IT needs is a viable alternative to deploying complex infrastructures locally. In this chapter, we provide an overview of typical security challenges faced by DAHSs, introduce dominant security mechanisms available at the different tiers in the information management hierarchy, and discuss open challenges.


Cloud computing is an emerging paradigm that provides reliable and scalable infrastructure, allowing clients (data owners) to store their data while data consumers (users) access the data from cloud servers. This paradigm decreases the storage and maintenance cost of the data owner. However, cloud data storage still gives rise to security-related problems. In the case of shared data, the data face both cloud-specific and insider threats. In this work, we propose FOA (fruit fly optimization algorithm) optimized centrality-measure-based fragmentation and replication of data in the cloud, which considers both security and performance issues. FOA is a global optimization technique based on the foraging behaviour of the fruit fly, whose senses of smell and vision are superior to those of other species. In our methodology, we divide a data file into fragments and replicate the fragmented data over the cloud nodes using FOA-driven centrality measures. Each cloud node stores only a single data fragment, which ensures that, even in the event of a successful attack, no meaningful information is exposed to the attacker. We also compare the performance of our methodology with other standard replication schemes. The observed results show a higher level of security together with performance improvements.
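For readers unfamiliar with FOA, the sketch below shows the basic smell-and-vision search loop applied to an arbitrary cost function; the swarm size, iteration count, and the toy placement cost are assumptions, and the paper's actual centrality-based placement objective is not reproduced here.

```python
import math
import random

def fruit_fly_optimize(fitness, iterations=200, swarm_size=20, seed=0):
    """Minimal FOA sketch: minimise fitness(s) for a scalar s derived from the swarm position."""
    rng = random.Random(seed)
    x_axis, y_axis = rng.uniform(0, 10), rng.uniform(0, 10)   # initial swarm location
    best_smell, best_s = math.inf, None

    for _ in range(iterations):
        for _ in range(swarm_size):
            # Smell-based search: each fly moves a random direction and distance.
            x = x_axis + rng.uniform(-1, 1)
            y = y_axis + rng.uniform(-1, 1)
            dist = math.hypot(x, y) or 1e-9
            s = 1.0 / dist                      # smell concentration judgment value
            smell = fitness(s)
            if smell < best_smell:              # vision-based search: the swarm flocks to the best fly
                best_smell, best_s = smell, s
                x_axis, y_axis = x, y
    return best_s, best_smell

# Example: a toy cost with a single minimum, standing in for a hypothetical
# security/performance cost of a fragment placement.
best_s, best_cost = fruit_fly_optimize(lambda s: (s - 0.25) ** 2)
```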


Cloud computing is an emerging paradigm that provides reliable and resilient infrastructure, allowing clients (data owners) to store their data while data consumers (users) access the data from cloud servers. This paradigm decreases the storage and maintenance cost of the data owner. Nevertheless, cloud data storage still gives rise to security-related issues. In the case of shared data, the data face both cloud-specific and insider threats. In this work, we propose fuzzy-centrality-measure-based division and replication of data in the cloud for optimal performance and security, considering both security and performance issues. In our framework, we divide a data record into fragments and replicate the fragments over the cloud nodes using fuzzy centrality measures. Each node stores only a single fragment of a particular data file, which guarantees that, even in the event of a successful attack, no significant information is revealed to the attacker. In addition, the cloud nodes storing the data fragments are separated by a certain distance by means of modified fuzzy T-coloring, to prevent an attacker from predicting the locations of the fragments. We also compare the performance of our methodology with other standard replication schemes, and observe greater security with improved performance.
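A minimal sketch of the placement idea, assuming a graph of cloud nodes with precomputed centrality scores: fragments are assigned to the highest-centrality nodes that are at least a forbidden hop distance away from nodes already holding a fragment, which plays the role of the T-coloring constraint. The fuzzy centrality computation itself is not shown, and all names and the separation value are illustrative assumptions.

```python
from collections import deque

def place_fragments(fragments, graph, centrality, min_separation=2):
    """Greedy sketch of centrality-ranked placement with a T-coloring-style separation rule.

    graph: adjacency dict {node: [neighbours]}, centrality: {node: score};
    nodes within `min_separation` hops of an already-used node are skipped.
    """
    def hops(src, dst):
        # breadth-first search hop count between two nodes
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nbr in graph[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, d + 1))
        return float("inf")

    used, placement = [], {}
    for frag in fragments:
        for node in sorted(centrality, key=centrality.get, reverse=True):
            if node not in used and all(hops(node, u) >= min_separation for u in used):
                placement[frag] = node           # one fragment per node
                used.append(node)
                break
    return placement
```

Because no two fragments of the same file land on nodes closer than the forbidden distance, compromising one node (or its neighbourhood) reveals at most a single fragment.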


Author(s):  
D. A. Perepelkin ◽  
◽  
A. N. Saprykin ◽  
M. A. Ivanchikova ◽  
S. S. Kosorukov ◽  
...  

2018 ◽  
Vol 6 (3) ◽  
pp. 113-117 ◽  
Author(s):  
Mustafa I. Khaleel

Power consumption in datacenters has become an emerging concern for cloud providers. This poses enormous challenges for programmers, motivating new paradigms that enhance the efficiency of cloud resources through innovative energy-aware algorithms. Balancing the load over geographically dispersed datacenters has been shown to be essential in decreasing the power consumption per datacenter. In this paper, we formulate a load balancing paradigm that schedules scientific workflows over distributed cloud resources to make the system outcome more efficient. The proposed heuristic works on three constraints. First, establishing cloud resource locality for tenants and calculating the shortest distance, in order to direct application modules to the closest resources and reduce bandwidth cost. Second, selecting the most temperature-aware datacenters, based on geographical climate, to lower electricity cost for the providers. Third, running multiple datacenters within the same geographical location instead of housing the entire workload in a single datacenter; this allows providers to protect the system from degradation or unpredictable failure, which would otherwise frustrate the tenants. Furthermore, applications are formulated as Directed Acyclic Graph (DAG)-structured workflows. For the underlying cloud hardware, our model groups the cloud servers to communicate as if they were in the same physical location. Both on-demand and reservation modes are supported in our algorithm. Finally, the simulation showed that our method was able to enhance utilization rates by about 67% compared to the baseline model.
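As a simplified view of the first two constraints (locality and temperature awareness) plus load spreading, the sketch below scores candidate datacenters with a weighted sum of normalized distance, a climate-driven cooling index, and current utilization; the weights, fields, and scoring formula are assumptions, not the paper's heuristic.

```python
from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    distance_km: float      # network distance from the tenant
    cooling_index: float    # climate-driven cooling cost, 0 (cold) .. 1 (hot)
    utilization: float      # current load, 0 .. 1

def pick_datacenter(candidates, w_dist=0.4, w_cool=0.4, w_load=0.2):
    """Rank datacenters by a weighted score of locality, climate, and load (lower is better)."""
    max_dist = max(dc.distance_km for dc in candidates) or 1.0
    def score(dc):
        return (w_dist * dc.distance_km / max_dist
                + w_cool * dc.cooling_index
                + w_load * dc.utilization)
    return min(candidates, key=score)

# Example: a nearby but hot, busy site loses to a farther, cooler, idler one.
sites = [Datacenter("dc-east", 120, 0.8, 0.7), Datacenter("dc-north", 300, 0.2, 0.3)]
chosen = pick_datacenter(sites)   # -> dc-north
```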


Author(s):  
Pradeep Nayak ◽  
Poornachandra S ◽  
Pawan J Acharya ◽  
Shravya ◽  
Shravani

Deduplication methods were designed to eliminate duplicate data so that only a single copy of each piece of information is stored. Data deduplication decreases the disk space needed to store back-ups, and tracks and eliminates second copies of data inside a storage unit. It allows only one instance of the data to be stored initially; subsequent occurrences are given a reference pointer to the data stored first. In a big data storage environment, a huge amount of data must be secured; proper management, workload handling, fraud detection, and analysis of data privacy are therefore important topics to consider. This paper examines and evaluates the common deduplication techniques, which are presented in plain form. In this review, it was observed that the confidentiality and security of data have been compromised at many levels in common deduplication strategies. Although much research is being done in various areas of cloud computing, work pertaining to this topic is still scarce.
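The reference-pointer behaviour described above can be sketched as a small content-addressed store: the first upload of a payload stores it, and later uploads only increase a reference count and return the same pointer. The class and method names below are illustrative assumptions, not a technique from the paper.

```python
import hashlib

class DedupStore:
    """Sketch of reference-pointer deduplication: one stored copy per unique payload."""

    def __init__(self):
        self._blobs = {}      # fingerprint -> payload (the single stored copy)
        self._refs = {}       # fingerprint -> number of files pointing at it

    def put(self, payload: bytes) -> str:
        fp = hashlib.sha256(payload).hexdigest()
        if fp not in self._blobs:
            self._blobs[fp] = payload            # first occurrence: store the data
        self._refs[fp] = self._refs.get(fp, 0) + 1
        return fp                                # the caller keeps only this pointer

    def get(self, fp: str) -> bytes:
        return self._blobs[fp]

    def delete(self, fp: str) -> None:
        self._refs[fp] -= 1
        if self._refs[fp] == 0:                  # last reference gone: reclaim the space
            del self._refs[fp], self._blobs[fp]

store = DedupStore()
a = store.put(b"backup block")
b = store.put(b"backup block")   # no second copy is stored
assert a == b and len(store._blobs) == 1
```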


2008 ◽  
pp. 2187-2220
Author(s):  
Ping Lin ◽  
Selcuk Candan

The cost of creating and maintaining software and hardware infrastructures for delivering web services led to a notable trend toward the use of application service providers (ASPs) and, more generally, distributed application hosting services (DAHSs). The emergence of enabling technologies, such as J2EE and .NET, has contributed to the acceleration of this trend. DAHSs rent out Internet presence, computation power, and data storage space to clients with infrastructural needs. Consequently, they are cheap and effective outsourcing solutions for achieving increased service availability and scalability in the face of surges in demand. However, ASPs and DAHSs operate within the complex, multi-tiered, and open Internet environment and, hence, they introduce many security challenges that have to be addressed effectively to convince customers that outsourcing their IT needs is a viable alternative to deploying complex infrastructures locally. In this chapter, we provide an overview of typical security challenges faced by DAHSs, introduce dominant security mechanisms available at the different tiers in the information management hierarchy, and discuss open challenges.

