file access
Recently Published Documents

TOTAL DOCUMENTS: 217 (FIVE YEARS: 36)
H-INDEX: 16 (FIVE YEARS: 2)

Author(s): Nagesh Rajendra Salunke

Abstract: The concept of cloud computing has become increasingly popular in recent years, and data storage is an important and active research field within it. Cloud-based file sharing must be secured: encryption and decryption protect shared files from unauthorized access, and the admin can grant file access to authorized users while limiting both how many times and for how long an authorized user may access a shared file. Cloud data storage technology is a core area of cloud computing and determines how data is stored in a cloud environment. This project first introduces the concepts of cloud computing and cloud storage as well as the architecture of cloud storage, and then analyzes cloud data storage technologies such as Amazon Web Services, Wasabi, and DigitalOcean. We improve on the traditional file storage method and build a platform that supports richer access privileges. Keywords: Cloud, Storage, AWS, Wasabi, File Management, File Storage, File Sharing, DMS, CMS, Drive Store, Private Cloud.
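As a minimal illustration of the encrypt-before-upload idea in this abstract, the sketch below uses symmetric encryption so the storage provider only ever holds ciphertext. It assumes the Python cryptography package, and upload_to_bucket is a hypothetical placeholder, not part of the described system.

    # Minimal sketch, assuming the "cryptography" package is installed.
    from cryptography.fernet import Fernet

    def encrypt_file(path, key):
        # The provider stores only ciphertext; unauthorized reads are useless.
        with open(path, "rb") as f:
            return Fernet(key).encrypt(f.read())

    def decrypt_file(ciphertext, key):
        # Only users the admin has shared the key with can recover the file.
        return Fernet(key).decrypt(ciphertext)

    key = Fernet.generate_key()                  # held by the admin
    ciphertext = encrypt_file("report.pdf", key)
    # upload_to_bucket("reports/report.pdf", ciphertext)  # e.g. S3 or Wasabi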


2021
Author(s): Timothy A. Pitman, Xiaomeng Huang, Gabor T. Marth, Yi Qiao

In precision medicine, genomic data needs to be processed as quickly as possible so that treatment decisions can be reached in a timely fashion. We developed mmbam, a library that allows sequence-analysis software to access raw sequencing data stored in BAM files extremely quickly. Taking advantage of memory-mapped file access and parallel data processing, we demonstrate that analysis software ported to mmbam consistently outperforms its stock version. We envision that mmbam, which is open source and freely available, will enable a new generation of high-performance informatics tools for precision medicine.
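The access pattern mmbam exploits can be pictured in a few lines. The sketch below is not mmbam's API, just a generic memory-mapped read of a BAM file (whose BGZF blocks begin with the gzip magic bytes); the file name is assumed.

    # Generic illustration of memory-mapped file access, not mmbam itself.
    import mmap

    with open("sample.bam", "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        # The OS pages data in lazily: no explicit read() calls, and several
        # worker processes can map the same file and scan regions in parallel.
        assert mm[0:2] == b"\x1f\x8b", "not a BGZF/BAM file"
        header_block = mm[:65536]   # slice bytes without copying the file
        mm.close()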


2021
Author(s): Jianguo Jiang, Xu Wang, Yan Wang, Qiujian Lv, MeiChen Liu, ...

Author(s): Anisha P. Rodrigues, Roshan Fernandes, P. Vijaya, Satish Chander

Hadoop Distributed File System (HDFS) is designed to efficiently store and handle vast quantities of files in a distributed environment over a cluster of computers. The Hadoop cluster is built from commodity hardware, which is inexpensive and easily available. However, storing a large number of small files in HDFS consumes more memory and degrades performance, because the small files place a heavy load on the NameNode. The efficiency of indexing and accessing small files on HDFS is therefore improved by several techniques, such as archive files, New Hadoop Archive (New HAR), CombineFileInputFormat (CFIF), and sequence file generation. The archive file combines the small files into single blocks. The New HAR file combines the smaller files into a single large file. The CFIF module merges multiple files into a single split using the NameNode, and the sequence file combines all the small files into a single sequence. Indexing and accessing a small file in HDFS are evaluated using performance metrics such as processing time and memory usage. The experiments show that the sequence file generation approach is efficient compared to the other approaches, with a file access time of 1.5 s, memory usage of 20 KB in multi-node mode, and a processing time of 0.1 s.
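The idea shared by HAR and sequence files is packing many small files into one large container plus an index, so the NameNode tracks one file instead of thousands. A toy sketch of that packing follows; the function names and on-disk layout are assumptions for illustration, not the HDFS SequenceFile format itself.

    # Toy pack/lookup sketch of "many small files -> one container + index".
    import os

    def pack(small_files, container_path):
        index = {}                          # name -> (offset, length)
        with open(container_path, "wb") as out:
            for path in small_files:
                with open(path, "rb") as f:
                    data = f.read()
                index[os.path.basename(path)] = (out.tell(), len(data))
                out.write(data)
        return index

    def read_packed(container_path, index, name):
        offset, length = index[name]
        with open(container_path, "rb") as f:
            f.seek(offset)                  # one seek, one contiguous read
            return f.read(length)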


2021, Vol 11 (3), pp. 250-255
Author(s): Yinyin Wang, Yuwang Yang, Qingguang Wang

An efficient intelligent cache replacement policy suitable for picture archiving and communication systems (PACS) is proposed in this work. By combining a support vector machine (SVM) with the classic least recently used (LRU) cache replacement policy, we create a new intelligent policy called SVM-LRU. Unlike conventional cache replacement policies, which depend solely on the intrinsic properties of the cached items, our PACS-oriented SVM-LRU algorithm identifies the variables that affect file access probabilities by mining medical data. The SVM is then used to model the future access probabilities of the cached items, improving cache performance. Finally, a simulation experiment was performed using the trace-driven simulation method, showing that SVM-LRU significantly improves PACS cache performance compared to conventional policies such as LRU, LFU, SIZE and GDS.
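One hedged reading of how such a hybrid could be wired together, assuming a scikit-learn SVM pre-trained on historical access logs: evict the cached item with the lowest predicted re-access probability, falling back to recency to break ties. The feature layout and class structure are illustrative guesses, not the paper's implementation.

    # Sketch of an SVM-scored LRU cache.
    from sklearn.svm import SVC

    class SVMLRUCache:
        def __init__(self, capacity, model):
            self.capacity = capacity
            self.model = model          # SVC(probability=True), already fitted
            self.items = {}             # key -> (features, last_access_tick)
            self.tick = 0

        def access(self, key, features):
            self.tick += 1
            if key not in self.items and len(self.items) >= self.capacity:
                self._evict()
            self.items[key] = (features, self.tick)

        def _evict(self):
            def score(entry):
                features, last_tick = entry[1]
                p = self.model.predict_proba([features])[0][1]
                return (p, last_tick)   # least likely to be re-accessed, then oldest
            victim = min(self.items.items(), key=score)[0]
            del self.items[victim]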


2021, Vol 1864 (1), pp. 012095
Author(s): N. V. Ermakov, S. A. Molodyakov

Author(s): Jaichandran R. et al.

Cloud technology provides the advantage of storage services for individuals and organizations, making file access easy and simple irrespective of location. The major concern is security while the file is outsourced: maintaining integrity (keeping the file unchanged) and confidentiality during outsourcing plays an important role. In this paper, we propose an identity-based data outsourcing technique to provide data security during authorization and storage. For authorization we propose fingerprint-based authentication, performed using the Minutiae Map (MM) algorithm. For data security we convert the data owner's files to hash values using the SHA algorithm. Finally, in the cloud storage stage, data security and data availability are addressed using a multiple-cloud storage system.
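As one hedged reading of the hashing step, the sketch below computes a file digest before outsourcing so the owner can re-verify integrity after retrieval. The abstract says only "SHA", so SHA-256 is an assumption here, and the file name is illustrative.

    # Owner-side integrity digest; SHA-256 is assumed (the paper says "SHA").
    import hashlib

    def file_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    stored = file_digest("owner_file.dat")   # kept by the owner/verifier
    # ... later, after fetching the file back from the cloud ...
    assert file_digest("owner_file.dat") == stored, "file was modified"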


Sensors, 2021, Vol 21 (6), pp. 2060
Author(s): Keon-Ho Park, Seong-Jin Kim, Joobeom Yun, Seung-Ho Lim, Ki-Woong Park

In an Internet of Things (IoT) platform with a large number of IoT devices whose operational purposes change actively, IoT devices should be able to dynamically change their system images to play various roles. However, the employment of such features in an IoT platform is hindered by several factors. Firstly, the Trivial File Transfer Protocol (TFTP), which is generally used for network boot, has major security vulnerabilities. Secondly, there is an excessive demand on the server during network boot, since numerous IoT devices request system images as their roles change, which exerts a heavy network overhead on the server. To tackle these challenges, we propose a system termed FLEX-IoT. The proposed system maintains a FLEX-IoT orchestrator, which uses an IoT platform operation schedule to flexibly operate the IoT devices on the platform. The operation schedule contains the schedules of all the IoT devices on the platform, and the orchestrator employs it to flexibly change the mode of system-image transfer at each moment. FLEX-IoT consists of a secure TFTP service, which is fully compatible with conventional TFTP, and a resource-efficient file transfer method (adaptive transfer) to streamline the system performance of the server. The proposed secure TFTP service comprises a file access control and an attacker deception technique. The file access control verifies the identity of legitimate IoT devices based on a hash chain shared between the IoT device and the server. FLEX-IoT provides security to TFTP for a flexible IoT platform and minimizes the response time for network boot requests based on adaptive transfer. The proposed system was found to significantly increase the attack resistance of TFTP with little additional overhead. In addition, the simulation results show that the volume of system images transferred from the server decreased by 27% on average when using the proposed system.
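A minimal sketch of the hash-chain style of file access control described above, assuming a one-way SHA-256 chain whose tip (anchor) is shared with the server; the chain length and anchor-update rule are illustrative, not FLEX-IoT's actual protocol.

    # Toy hash chain: the device reveals preimages one by one; the server
    # verifies each token with a single hash against its stored anchor.
    import hashlib

    def sha256(data):
        return hashlib.sha256(data).digest()

    def make_chain(seed, n):
        chain = [seed]
        for _ in range(n):
            chain.append(sha256(chain[-1]))
        return chain                    # the device keeps the whole chain

    chain = make_chain(b"device-secret", 100)
    server_anchor = chain[-1]           # only the tip is given to the server

    token = chain[-2]                   # revealed alongside a boot request
    assert sha256(token) == server_anchor, "unauthorized device"
    server_anchor = token               # server rolls the anchor forward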


2021, pp. 8-17
Author(s): Amer Ramadan

This paper reports on an in-depth examination of the impact of the backing filesystem on Docker performance in the context of Linux container-based virtualization. The experimental design was a 3x3x4 arrangement: we considered three different numbers of Docker containers, three filesystems (Ext4, XFS and Btrfs), and four application workloads corresponding to web server I/O activity, e-mail server I/O activity, file server I/O activity and random file access I/O activity, respectively. The experimental results indicate that Ext4 is the optimal filesystem, among those considered, for the considered experimental settings. In addition, the XFS filesystem is not suitable for workloads dominated by synchronous random write components (characteristic of, e.g., the mail workload), while the Btrfs filesystem is not suitable for workloads dominated by random write and sequential write components (e.g., the file server workload).


Cloud storage is one of the key features of cloud computing, helping cloud users outsource large amounts of data without upgrading their devices. However, Cloud Service Provider (CSP) data storage faces problems with data redundancy. The data deduplication technique aims at eliminating redundant data segments and keeps a single instance of a data set, even if any number of users own that same data set. Since the blocks of a file are spread across many servers, every block has to be downloaded before the file can be restored, which decreases system throughput. We suggest a data recovery module for cloud storage servers to improve file access efficiency and reduce the time spent on network bandwidth. In the suggested method, a coding scheme is used to store blocks in distributed cloud storage, and MD5 (Message Digest 5) is used for data integrity. Running the recovery algorithm lets the user retrieve a file directly from the cloud servers without downloading every block. The proposed scheme improves time efficiency and the ability to access stored data quickly, reducing bandwidth consumption and user-side processing overhead while downloading the data file.
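The MD5 integrity role mentioned above is easy to picture: store one digest per block and check it before a block participates in reconstruction. A sketch follows, with the block size and helper names assumed for illustration.

    # Per-block MD5 digests: a corrupted replica is detected before the
    # file is reassembled from distributed storage.
    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024        # assumed 4 MiB blocks

    def split_with_digests(path):
        blocks = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK_SIZE):
                blocks.append((chunk, hashlib.md5(chunk).hexdigest()))
        return blocks                   # ship (block, digest) pairs to servers

    def verify_block(chunk, digest):
        return hashlib.md5(chunk).hexdigest() == digest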

