remote storage
Recently Published Documents

TOTAL DOCUMENTS: 87 (FIVE YEARS: 18)
H-INDEX: 6 (FIVE YEARS: 1)

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Taek-Young Youn ◽  
Hyun Sook Rhee

As Internet services are widely used on various mobile devices, the amount of data produced by users steadily increases, while the storage capacity of those devices remains too limited to hold it. Therefore, Internet-connected storage that can be accessed anytime and anywhere is increasingly important for storing and utilizing huge amounts of data. To use remote storage, the data to be stored must be encrypted for privacy, yet the storage manager should still be able to search the data in response to a query without decrypting it. Contrary to the traditional environment, a query to Internet-connected storage is conveyed over an open channel, and hence its secrecy must be guaranteed. We propose a secure symmetric keyword search scheme that provides query privacy and is tailored to the equality test on encrypted data. The proposed scheme is efficient since it is based on prime-order bilinear groups. We formally prove that our construction satisfies ciphertext confidentiality and keyword privacy under the hardness of the bilinear Diffie–Hellman (DH) assumption and the decisional 3-party DH assumption.
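As a rough illustration of the equality test on encrypted keywords (not the paper's pairing-based construction, which additionally hides query patterns), a minimal symmetric sketch can use deterministic HMAC tags; all names here are illustrative:

```python
import hmac
import hashlib
import os

def keyword_tag(key: bytes, keyword: str) -> bytes:
    """Deterministic tag stored alongside a ciphertext for keyword w."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Query trapdoor the client sends to the storage manager."""
    return keyword_tag(key, keyword)

def test(tag: bytes, td: bytes) -> bool:
    """Server-side equality test: matches without decrypting the payload."""
    return hmac.compare_digest(tag, td)

key = os.urandom(32)                              # shared symmetric key
stored = keyword_tag(key, "invoice")              # indexed with the ciphertext
print(test(stored, trapdoor(key, "invoice")))     # True
print(test(stored, trapdoor(key, "report")))      # False
```

Note that deterministic tags leak keyword-equality patterns to the server; avoiding such leakage is precisely what the bilinear-group construction in the paper targets.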


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2673
Author(s):  
Saba Rehman ◽  
Nida Talat Bajwa ◽  
Munam Ali Shah ◽  
Ahmad O. Aseeri ◽  
Adeel Anjum

A cloud computing environment provides a cost-effective way for the end user to store and access private data on remote storage over an Internet connection, with access to the data anywhere and at any time. However, data in the cloud do not remain secure at all times. Since the data are accessible to the end user only through a third party, they are prone to breaches of authentication and integrity. Moreover, cloud computing allows many simultaneous users to access and retrieve their data online over different Internet connections, which can lead to the exposure, leakage, and loss of a user's sensitive data in different locations. Many algorithms and protocols have been developed to maintain the security and integrity of the data using cryptographic primitives such as Elliptic Curve Cryptography (ECC). This paper proposes a secure and optimized scheme for sharing data while maintaining data security and integrity over the cloud. The proposed system combines ECC and the Advanced Encryption Standard (AES) to ensure authentication and data integrity. The experimental results show that the proposed approach is efficient and yields better results than existing approaches.
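A common way to combine ECC with AES is an ECIES-style hybrid: an ephemeral ECDH exchange derives an AES key, and AES-GCM provides authenticated encryption. The paper's exact composition may differ; this is a minimal sketch using the pyca/cryptography package, with the curve and KDF parameters chosen for the demo:

```python
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt(receiver_public, plaintext: bytes):
    eph = ec.generate_private_key(ec.SECP256R1())        # ephemeral ECC key
    shared = eph.exchange(ec.ECDH(), receiver_public)    # ECDH shared secret
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"ecies-demo").derive(shared)
    nonce = os.urandom(12)
    ct = AESGCM(aes_key).encrypt(nonce, plaintext, None) # GCM tag gives integrity
    return eph.public_key(), nonce, ct

def decrypt(receiver_private, eph_public, nonce, ct) -> bytes:
    shared = receiver_private.exchange(ec.ECDH(), eph_public)
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"ecies-demo").derive(shared)
    return AESGCM(aes_key).decrypt(nonce, ct, None)      # raises if tampered

sk = ec.generate_private_key(ec.SECP256R1())
eph_pub, nonce, ct = encrypt(sk.public_key(), b"sensitive cloud record")
assert decrypt(sk, eph_pub, nonce, ct) == b"sensitive cloud record"
```

The GCM authentication tag is what supplies the integrity guarantee; decryption fails loudly if the stored ciphertext was modified.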


2021 ◽  
Vol 14 (11) ◽  
pp. 2432-2444
Author(s):  
Dominik Durner ◽  
Badrish Chandramouli ◽  
Yinan Li

Cloud analytical databases employ a disaggregated storage model, where the elastic compute layer accesses data persisted on remote cloud storage in block-oriented columnar formats. Given the high latency and low bandwidth to remote storage and the limited size of fast local storage, caching data at the compute node is important and has resulted in a renewed interest in caching for analytics. Today, each DBMS builds its own caching solution, usually based on file- or block-level LRU. In this paper, we advocate a new architecture of a smart cache storage system called Crystal, which is co-located with compute. Crystal's clients are DBMS-specific "data sources" with push-down predicates. Similar in spirit to a DBMS, Crystal incorporates query processing and optimization components focused on efficiently caching and serving single-table hyper-rectangles called regions. Results show that Crystal, with a small DBMS-specific data source connector, can significantly improve query latencies on unmodified Spark and Greenplum while also saving bandwidth to remote storage.
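To make the region idea concrete: a region is a per-column range box, and a pushed-down predicate can be served from cache when its ranges are contained in a cached region's ranges. The following toy sketch shows only that containment check, not Crystal's actual design; all names and paths are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Interval = Tuple[float, float]   # inclusive [lo, hi] bounds per column

@dataclass
class Region:
    """A cached single-table hyper-rectangle plus its local data file."""
    bounds: Dict[str, Interval]
    local_path: str

    def covers(self, query: Dict[str, Interval]) -> bool:
        # The region answers the query if every queried column range
        # lies inside the region's cached range for that column.
        return all(col in self.bounds
                   and self.bounds[col][0] <= lo and hi <= self.bounds[col][1]
                   for col, (lo, hi) in query.items())

@dataclass
class RegionCache:
    regions: list = field(default_factory=list)

    def lookup(self, query: Dict[str, Interval]):
        """Return a covering cached region, or None to fetch from remote storage."""
        return next((r for r in self.regions if r.covers(query)), None)

cache = RegionCache([Region({"year": (2018, 2021), "qty": (0, 100)},
                            "/cache/r0.parquet")])
print(cache.lookup({"year": (2019, 2020)}))   # hit: served locally
print(cache.lookup({"year": (2010, 2020)}))   # None: range exceeds cached region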


2021 ◽  
Vol 1 (2) ◽  
pp. 340-364
Author(s):  
Rui Araújo ◽  
António Pinto

Along with the use of cloud-based services, infrastructure, and storage, the use of application logs in business-critical applications is standard practice. Application logs must be stored in an accessible manner so that they can be used whenever needed; debugging these applications is a common situation requiring such access. Frequently, part of the information contained in log records is sensitive. In this paper, we evaluate the possibility of storing critical logs in remote storage while maintaining their confidentiality and server-side search capabilities. To the best of our knowledge, the designed search algorithm is the first to support full Boolean searches combined with field searching and nested queries. We demonstrate its feasibility and timely operation with a prototype implementation that never requires the storage provider to access plaintext information. Our solution was able to perform search and decryption operations at a rate of approximately 0.05 ms per line. A comparison with related work demonstrates its feasibility and shows that our solution is also the fastest at indexing, the most frequent operation performed.
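A minimal sketch of how server-side Boolean search over encrypted log records can work: the client derives opaque HMAC tokens from field:value pairs, and the server evaluates AND/OR over posting sets of those tokens without ever seeing plaintext. This illustrates only flat field searches, not the paper's nested queries; names and the demo key are assumptions:

```python
import hmac
import hashlib

def token(key: bytes, field: str, value: str) -> bytes:
    """Searchable token for one field:value pair; opaque to the server."""
    return hmac.new(key, f"{field}:{value}".encode(), hashlib.sha256).digest()

class EncryptedLogIndex:
    def __init__(self):
        self.postings = {}                    # token -> set of record ids

    def index(self, rec_id: int, tokens):
        for t in tokens:
            self.postings.setdefault(t, set()).add(rec_id)

    def search_and(self, tokens):             # Boolean AND: intersect postings
        sets = [self.postings.get(t, set()) for t in tokens]
        return set.intersection(*sets) if sets else set()

    def search_or(self, tokens):              # Boolean OR: union postings
        return set().union(*(self.postings.get(t, set()) for t in tokens))

key = b"\x00" * 32                             # demo key; derive per deployment
idx = EncryptedLogIndex()
idx.index(1, [token(key, "level", "ERROR"), token(key, "svc", "auth")])
idx.index(2, [token(key, "level", "ERROR"), token(key, "svc", "billing")])
print(idx.search_and([token(key, "level", "ERROR"),
                      token(key, "svc", "auth")]))   # {1}
```

The matched record ids would then point to encrypted log lines that only the client can decrypt.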


Author(s):  
Benjamin F. Walker

The George A. Smathers Libraries has a more than 60-year history of using remote storage. Throughout that time, there have been drastic changes in how storage has been managed. Historically, it was a somewhat opportunistic endeavor, utilizing facilities designed for other purposes and acquiring and renovating spaces as funding allowed. This article highlights efforts by the George A. Smathers Libraries to store materials remotely, with a particular focus on the 1990s onward. Details of the Florida Academic Repository (FLARE), including funding proposals, shared print retention developments, and its current status, are discussed. Looking back over the history of storage at UF, the George A. Smathers Libraries has invested substantial effort and money in trying to resolve its space issues, most notably with the FLARE legislative proposals. The inability to secure that funding has certainly impeded progress, but FLARE has found other ways to be an important partner in the storage and shared print movement.


2021 ◽  
pp. 20-32

Recently, the security of heterogeneous multimedia data has become a very critical issue, particularly with the proliferation of multimedia data and applications. Cloud computing is the hidden back end for storing heterogeneous multimedia data. Although using cloud storage is indispensable, the remote storage servers are untrusted. Therefore, one of the most critical challenges is securing multimedia data storage and retrieval against untrusted cloud servers. This paper applies a Shamir secret-sharing scheme, integrated with cloud computing, to guarantee efficient and secure storage and retrieval of sensitive multimedia data. The proposed scheme fully supports the comprehensive, multilevel security control requirements of cloud-hosted multimedia data and applications. In addition, our scheme is based on a source transformation that provides powerful mutual interdependence in its encrypted representation: the Share Generator slices and encrypts the multimedia data before sending it to cloud storage. An extensive experimental evaluation on various configurations confirmed the effectiveness and efficiency of our scheme, which showed excellent performance and compatibility with several implementation strategies.
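For reference, classic Shamir secret sharing splits a secret into n shares such that any k reconstruct it and fewer reveal nothing; the shares could then be spread across untrusted cloud servers. A textbook sketch over a prime field follows (the prime and parameters are demo choices, and the paper's Share Generator is more elaborate):

```python
import random

P = 2**127 - 1   # Mersenne prime; field for the demo

def split(secret: int, n: int, k: int):
    """Split secret into n points on a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = split(123456789, n=5, k=3)
print(reconstruct(shares[:3]) == 123456789)   # True with any 3 of the 5 shares
```

In production, share randomness should come from the secrets module rather than random, and binary multimedia data would be chunked into field elements first.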


2021 ◽  
Vol 251 ◽  
pp. 02027
Author(s):  
Vincenzo Eduardo Padulano ◽  
Enric Tejedor Saavedra ◽  
Pedro Alonso-Jordá

Thanks to its RDataFrame interface, ROOT now supports the execution of the same physics analysis code both on a single machine and on a cluster of distributed resources. In the latter scenario, it is common to read the input ROOT datasets over the network from remote storage systems, which often increases the time it takes for physicists to obtain their results. Storing the remote files much closer to where the computations will run can bring latency and execution time down. Such a solution can be improved further by caching only the actual portion of the dataset that will be processed on each machine in the cluster, reusing it in subsequent executions on the same input data. This paper shows the benefits of applying different means of caching input data in a distributed ROOT RDataFrame analysis. Two such mechanisms will be applied to this kind of workflow with different configurations, namely caching on the same nodes that process data or caching on a separate server.
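As a simplified illustration of the node-local caching idea (the paper caches only the processed portion of a dataset; this sketch caches whole files), an RDataFrame can be built on a locally cached copy of a remote input. It assumes a PyROOT installation, an HTTP-reachable file, and a hypothetical scratch directory:

```python
import os
import urllib.request
import ROOT   # PyROOT; assumes a ROOT installation with RDataFrame

CACHE_DIR = "/tmp/rdf-cache"          # hypothetical local scratch area

def cached_rdataframe(tree: str, remote_url: str):
    """Build an RDataFrame on a locally cached copy of a remote file.
    The first run pays the download; later runs on the same input reuse it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    local = os.path.join(CACHE_DIR, os.path.basename(remote_url))
    if not os.path.exists(local):      # naive whole-file cache check
        urllib.request.urlretrieve(remote_url, local)
    return ROOT.RDataFrame(tree, local)

df = cached_rdataframe("Events", "https://example.org/dataset.root")
print(df.Count().GetValue())           # analysis now reads from local storage
```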


Author(s):  
Yenewondim Biadgie Sinshahw

In medical and scientific imaging, lossless image compression is recommended because the loss of minor details relevant to medical diagnosis can lead to a wrong diagnosis. On the other hand, lossy compression of medical images is required in the long run because a huge quantity of medical data needs remote storage, which in turn takes a long time to search and transfer. Instead of choosing between lossless and lossy image compression, near-lossless image compression can be used to reconcile the two conflicting requirements. In previous work, an edge-adaptive hierarchical interpolation (EAHINT) was proposed for resolution-scalable lossless compression of images. In this paper, it is enhanced for scalable near-lossless image compression. The interpolator of this algorithm switches among one-directional, multi-directional, and non-directional linear interpolators adaptively, based on the strength of the edge in a 3x3 local causal context of the current pixel being predicted. The edge strength in the local window is estimated using the variance of the pixels in that window. Although the actual predictors are still linear functions, the switching mechanism handles non-linear structures such as edges. Simulation results demonstrate that the improved interpolation algorithm achieves a better compression ratio than both the original EAHINT algorithm and the JPEG-LS image compression standard.
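A conceptual sketch of the two ingredients described above: variance-based switching between directional interpolators, and the standard near-lossless residual quantizer that bounds per-pixel error by a tolerance delta. This is not the EAHINT algorithm itself; the threshold and window layout are illustrative assumptions:

```python
import numpy as np

T_LOW = 10.0   # hypothetical variance threshold for the smooth/edge switch

def predict_pixel(ctx: np.ndarray) -> float:
    """Predict the center pixel of a 3x3 window, switching interpolators
    on local edge strength (variance of the context)."""
    if ctx.var() < T_LOW:                 # smooth area: non-directional mean
        return float(ctx.mean())
    # edgy area: interpolate along the direction of least intensity change
    horiz = abs(float(ctx[1, 0]) - float(ctx[1, 2]))
    vert = abs(float(ctx[0, 1]) - float(ctx[2, 1]))
    if horiz < vert:
        return (float(ctx[1, 0]) + float(ctx[1, 2])) / 2
    return (float(ctx[0, 1]) + float(ctx[2, 1])) / 2

def near_lossless_residual(actual: float, predicted: float, delta: int) -> int:
    """Quantize the prediction error so reconstruction errs by at most delta."""
    return int(round((actual - predicted) / (2 * delta + 1)))

window = np.array([[10, 12, 11], [50, 52, 51], [90, 91, 92]], dtype=float)
pred = predict_pixel(window)              # horizontal edge wins here: 50.5
print(pred, near_lossless_residual(53.0, pred, delta=2))
```

Setting delta to 0 reduces the quantizer to the lossless case, which is how near-lossless coding subsumes lossless coding as a special case.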


Author(s):  
Neda Maleki ◽  
Hamid Reza Faragardi ◽  
Amir Masoud Rahmani ◽  
Mauro Conti ◽  
Jay Lofstead

In the context of MapReduce task scheduling, many algorithms focus mainly on scheduling Reduce tasks, assuming that the scheduling of Map tasks is already done. However, in cloud deployments of MapReduce, the input data is located on remote storage, which makes the scheduling of Map tasks just as important. In this paper, we propose a two-stage Map and Reduce task scheduler for heterogeneous environments, called TMaR. TMaR schedules Map and Reduce tasks on the servers that minimize the task finish time in each stage, respectively. We employ a dynamic partition binder for Reduce tasks in the Reduce stage to lighten the shuffle traffic. Overall, TMaR minimizes the makespan of a batch of tasks in heterogeneous environments while taking network traffic into account. The simulation results demonstrate that TMaR outperforms Hadoop-stock and Hadoop-A in terms of makespan and network traffic, improving performance by an average of 29%, 36%, and 14% on the Wordcount, Sort, and Grep benchmarks, respectively. Moreover, TMaR reduces power consumption by up to 12%.
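The core per-stage rule, assigning each task to the server that minimizes its finish time on heterogeneous machines, can be sketched greedily as below. This omits TMaR's network-traffic modeling and dynamic partition binding; task costs and server speeds are invented demo values:

```python
def schedule_stage(tasks, server_speeds):
    """Assign each task to the server minimizing its finish time
    (server ready time + task cost / server speed)."""
    ready = [0.0] * len(server_speeds)       # per-server ready times
    assignment = {}
    for tid, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):  # largest first
        finish = [ready[s] + cost / server_speeds[s]
                  for s in range(len(server_speeds))]
        best = min(range(len(server_speeds)), key=finish.__getitem__)
        ready[best] = finish[best]
        assignment[tid] = best
    return assignment, max(ready)            # (task -> server), stage makespan

# Run the Map stage, then the Reduce stage, as in a two-stage scheduler.
map_tasks = {"m0": 8.0, "m1": 4.0, "m2": 6.0}
speeds = [1.0, 2.0]                           # heterogeneous server speeds
plan, makespan = schedule_stage(map_tasks, speeds)
print(plan, makespan)                         # e.g. {'m0': 1, 'm2': 0, 'm1': 1} 6.0
```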


Author(s):  
Gokulakrishnan V ◽  
Illakiya B

With the rapidly increasing amounts of data produced worldwide, networked multi-user storage systems are becoming very popular. However, concerns over data security still prevent many users from migrating data to remote storage. The conventional solution is to encrypt the data before it leaves the owner's premises. While sound from a security perspective, this approach prevents the storage provider from effectively applying storage-efficiency functions, such as compression and deduplication, which would allow optimal usage of resources and consequently lower service cost. Client-side data deduplication in particular ensures that multiple uploads of the same content consume the network bandwidth and storage space of only a single upload. Deduplication is actively used by a number of backup providers as well as various data services. In this project, we present a scheme that permits deduplicated storage of multiple types of files. The intuition is that outsourced data may require different levels of protection. Based on this idea, we design an encryption scheme that guarantees semantic security for unpopular data and provides weaker security but better storage and bandwidth benefits for popular data. This way, data deduplication can be effective for popular data, while semantically secure encryption protects unpopular content. The system can also use backup recovery when access is blocked, and frequent login access can be analyzed.
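The standard building block behind the popular-data path is convergent encryption, where the key is derived from the content itself so identical files encrypt identically and can be deduplicated; unpopular data keeps a randomized, semantically secure encryption. A sketch of that switch, with the threshold and names as illustrative assumptions:

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

POPULARITY_THRESHOLD = 3   # hypothetical upload count marking data "popular"

def convergent_encrypt(data: bytes) -> bytes:
    """Key derived from content: identical files yield identical ciphertexts,
    so the provider can deduplicate them (weaker, deterministic security)."""
    key = hashlib.sha256(data).digest()
    # A fixed nonce is acceptable here only because the key is unique
    # per content, so a (key, nonce) pair never encrypts two plaintexts.
    return AESGCM(key).encrypt(b"\x00" * 12, data, None)

def semantic_encrypt(data: bytes, user_key: bytes) -> bytes:
    """Random nonce + per-user key: semantically secure, not deduplicable."""
    nonce = os.urandom(12)
    return nonce + AESGCM(user_key).encrypt(nonce, data, None)

def store(data: bytes, upload_count: int, user_key: bytes) -> bytes:
    if upload_count >= POPULARITY_THRESHOLD:
        return convergent_encrypt(data)        # deduplicable ciphertext
    return semantic_encrypt(data, user_key)    # stronger protection

blob = b"widely shared installer image"
assert store(blob, 5, os.urandom(32)) == store(blob, 5, os.urandom(32))  # dedup
```

Deciding when content crosses the popularity threshold without leaking it early is the hard part, and is where the paper's scheme goes beyond this sketch.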

