A Survey on Different Storage Architectures

2021 ◽  
Vol 23 (05) ◽  
pp. 791-796
Author(s):  
Rahul Jyoti ◽  
Saumitra Kulkarni ◽  
Kirti Wanjale ◽  
...  

This paper surveys the different types of data storage architectures used today to store data of various types and presents an in-depth analysis of the use cases and drawbacks of these architectures in different scenarios. We examine the limitations of traditional storage architectures in today's world and discuss modern solutions to these problems. In this survey paper, we provide a detailed comparison of three storage architectures, namely file storage, block storage, and object storage. The paper gives the reader sufficient information to choose among these architectures for storing their data as per the use case.
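Not part of the survey itself, but the three architectures it compares differ chiefly in their addressing unit: hierarchical paths, raw block offsets, or flat object keys. A minimal sketch (a bytearray stands in for a block device and a dict for an S3-style bucket, both assumptions for illustration):

```python
import os
import tempfile

# File storage: hierarchical paths, POSIX-style open/read/write.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "report.txt")
    with open(path, "w") as f:
        f.write("hello")
    with open(path) as f:
        file_data = f.read()

# Block storage: fixed-size blocks addressed by numeric offset
# (a bytearray stands in for a raw device).
BLOCK_SIZE = 4
device = bytearray(16)
device[0:BLOCK_SIZE] = b"hell"            # write block 0
block_data = bytes(device[0:BLOCK_SIZE])  # read block 0

# Object storage: flat key/value namespace, whole-object PUT/GET
# (a dict stands in for an S3-style bucket).
bucket = {}
bucket["reports/2021/report.txt"] = b"hello"     # PUT
object_data = bucket["reports/2021/report.txt"]  # GET

print(file_data, block_data, object_data)
```

The sketch shows why the choice tracks the use case: files suit shared hierarchies, blocks suit databases and VMs needing offset-level control, and objects suit large flat namespaces accessed over HTTP.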

2020 ◽  
Vol 245 ◽  
pp. 04038 ◽  
Author(s):  
Luca Mascetti ◽  
Maria Arsuaga Rios ◽  
Enrico Bocchi ◽  
Joao Calado Vicente ◽  
Belinda Chan Kwok Cheong ◽  
...  

The CERN IT Storage group operates multiple distributed storage systems to support all CERN data storage requirements: the physics data generated by LHC and non-LHC experiments; object and file storage for infrastructure services; block storage for the CERN cloud system; filesystems for general use and specialized HPC clusters; a content distribution filesystem for software distribution and condition databases; and sync&share cloud storage for end-user files. The total integrated capacity of these systems exceeds 0.6 exabytes. Large-scale experiment data taking has been supported by EOS and CASTOR for the last 10+ years. Particular highlights for 2018 include the special heavy-ion run, which was the last part of the LHC Run2 programme: the IT storage systems sustained over 10 GB/s to flawlessly collect and archive more than 13 PB of data in a single month. While tape archival continues to be handled by CASTOR, the effort to migrate the current experiment workflows to the new CERN Tape Archive system (CTA) is underway. Ceph infrastructure has operated for more than 5 years to provide block storage to the CERN IT private OpenStack cloud, a shared filesystem (CephFS) to HPC clusters, and NFS storage to replace commercial filers. An S3 service was introduced in 2018, following increased user requirements for S3-compatible object storage from physics experiments and IT use cases. Since its introduction in 2014, CERNBox has become a ubiquitous cloud storage interface for all CERN user groups: physicists, engineers, and administration. CERNBox provides easy access to multi-petabyte data stores from a multitude of mobile and desktop devices and all mainstream, modern operating systems (Linux, Windows, macOS, Android, iOS). CERNBox provides synchronized storage for end-users' devices as well as easy sharing for individual users and e-groups.
CERNBox has also become a storage platform to host online applications to process the data, such as SWAN (Service for Web-based Analysis), as well as file editors such as Collabora Online, Only Office, Draw.IO, and more. An increasing number of online applications in the Windows infrastructure use CIFS/SMB access to CERNBox files. CVMFS provides software repositories for all experiments across the WLCG infrastructure and has recently been optimized to efficiently handle nightly builds. While AFS continues to provide a general-purpose filesystem for internal CERN users, especially as the $HOME login area on the central computing infrastructure, the migration of project and web spaces has significantly advanced. In this paper, we report on the experiences from the last year of LHC Run2 data taking and on the evolution of our services over the past year. We will highlight upcoming changes and future improvements and challenges.


2019 ◽  
pp. 25-30
Author(s):  
Vadim Shevtsov ◽  
Evgeny Abramov

Today, Storage Area Networks and Cloud Storage are the common storage systems. Storage Area Network technology includes NAS, SAN, and DAS systems. Cloud Storage includes object storage, file storage, and block storage. Storage Area Network is an important technology because it can provide large data volumes with a high chance of recovery, secure access, and central management of data. Cloud Storage has many advantages: data mobility, teamwork, stability, scalability, and quick start. The main threats include destruction, theft, corruption, unauthorized access, replacement, and blocking. Storage Area Network components (architecture elements, protocols, interfaces, hardware, system software, exploitation) have many vulnerabilities. Cloud Storage may be attacked through software, functional elements, clients, the hypervisor, and management systems. Many companies design storage solutions: DropBox, QNAP, WD, DELL, SEAGATE.


2013 ◽  
Vol 765-767 ◽  
pp. 1087-1091
Author(s):  
Hong Lin ◽  
Shou Gang Chen ◽  
Bao Hui Wang

Recently, with the development of the Internet and the arrival of new application modes, data storage has acquired new characteristics and new requirements. In this paper, a Distributed Computing Framework Mass Small File storage System (Dnet FS for short), based on Windows Communication Foundation on the .NET platform, is presented; it is lightweight, highly extensible, runs on cheap hardware, supports large-scale concurrent access, and provides a degree of fault tolerance. The framework of the system is analyzed, and its performance is tested and compared. The results show that the system meets these requirements.


2017 ◽  
Vol 2017 ◽  
pp. 1-18 ◽  
Author(s):  
Paula M. Vergara ◽  
Enrique de la Cal ◽  
José R. Villar ◽  
Víctor M. González ◽  
Javier Sedano

Epilepsy is a chronic neurological disorder with several different types of seizures, some of them characterized by involuntary recurrent convulsions, which have a great impact on the everyday life of patients. Several solutions have been proposed in the literature to detect this type of seizure and to monitor the patient; however, these approaches fall short in ergonomics and in suitable integration with the health system. This research makes an in-depth analysis of the main factors that an epilepsy detection and monitoring tool should fulfil. Furthermore, we introduce the architecture for a specific epilepsy detection and monitoring platform that satisfies these factors. Special attention has been given to the part of the system the patient should wear, providing details of this part of the platform. Finally, a partial implementation has been deployed, and several tests have been proposed and carried out in order to inform some design decisions.


2021 ◽  
pp. 1-13
Author(s):  
Fernando Rebollar ◽  
Rocío Aldeco-Perez ◽  
Marco A. Ramos

The general population increasingly uses digital services, meaning services delivered over the internet or an electronic network, and events such as pandemics have accelerated the need for new digital services. Governments have also increased their number of digital services; however, these services still lack sufficient information security, particularly integrity. Blockchain uses cryptographic techniques that allow decentralization and increase the integrity of the information it handles, but it still has disadvantages in terms of efficiency, making it incapable of implementing some digital services where a high rate of transactions is required. In order to increase its efficiency, a multi-layer proposal based on blockchain is presented. It has four layers, where each layer specializes in a different type of information and uses properties of public and private blockchains. A statistical analysis is performed and the proposal is modeled, showing that it maintains and even increases the integrity of the information while preserving the efficiency of transactions. Moreover, the proposal is flexible and can adapt to different types of digital services. It also considers that voluntary nodes participate in the decentralization of information, making it more secure, verifiable, transparent, and reliable.
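The integrity property the proposal relies on can be illustrated with a minimal hash-chain sketch (an assumption-level illustration of the general blockchain mechanism, not the authors' four-layer design): each block commits to its predecessor's hash, so altering any record invalidates every later link.

```python
import hashlib
import json

def make_block(data, prev_hash):
    # The block's hash covers both its payload and the previous hash.
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev": block["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False                      # payload was altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # link to predecessor broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("tx: service request #1", chain[-1]["hash"]))
chain.append(make_block("tx: service request #2", chain[-1]["hash"]))

assert chain_is_valid(chain)
chain[1]["data"] = "tx: forged request"   # tamper with a middle block
assert not chain_is_valid(chain)          # integrity check now fails
```

The trade-off the abstract describes follows from this: verifying and replicating such chains across nodes is what costs throughput, which motivates splitting information types across specialized layers.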


Author(s):  
Reymon M Santiañez ◽  
Benedict M Sollano

The goal of this study was to create the Local Area Network Based Archiving System, a cross-platform development system for electronic information storage, security, preservation, and retention. The system incorporates capabilities such as data storage for long-term preservation and retrieval, file searching and retrieval, security features such as a user account information system and account access privilege levels, and an email-like messaging system. The researchers developed the Local Area Network Based Archiving System using the Agile Software Development Methodology to keep up with the stakeholders' ever-changing needs. After each iteration of the work cycle, this methodology employs a process of frequent feedback. Features are added or refined in each iteration to ensure that the study meets its goals and expectations. The developed system received an overall average weighted mean of 4.53 in the evaluation summary, which is considered excellent. The strongest point of the system, according to the respondents, was its content, which received the highest average mean among the five major categories in the system evaluation. The system's mobile responsiveness was a considerable advantage, as it greatly aided accessibility. The respondents also recommended that the system be deployed, because it provides a powerful answer to the ongoing challenges of storing, managing, securing, and retrieving electronic files. As a result, the researchers concluded that a Local Area Network Based Archiving System is required for the efficient operation of an electronic file storage system. Having a centralized electronic file storage and retrieval system not only saves time and money in the long run but also allows for disaster recovery and business continuity.


Computers ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 142
Author(s):  
Obadah Hammoud ◽  
Ivan Tarkhanov ◽  
Artyom Kosmarski

This paper investigates the problem of distributed storage of electronic documents (both metadata and files) in decentralized blockchain-based b2b systems (DApps). The need to reduce the cost of implementing such systems and the insufficient elaboration of the issue of storing big data in DLT are considered. An approach for building such systems is proposed, which optimizes the size of the required storage (by using erasure coding) while providing secure data storage in geographically distributed systems of a company, or within a consortium of companies. The novelty of this solution is that we are the first to combine enterprise DLT with distributed file storage in which the availability of files is controlled. The results of our experiment demonstrate that the speed of the described DApp is comparable to known b2c torrent projects, and justify the choice of Hyperledger Fabric and Ethereum Enterprise for its use. The obtained test results show that public blockchain networks are not suitable for creating such a b2b system. The proposed system solves the main challenges of distributed data storage by grouping data into clusters and managing them with a load balancer, while preventing data tampering using a blockchain network. The considered DApp storage methodology scales easily in the horizontal direction in terms of distributed file storage and can be deployed on cloud computing technologies, while minimizing the required storage space. We compare this approach with known methods of file storage in distributed systems, including central storage, torrents, IPFS, and Storj. The reliability of this approach is calculated and the result is compared to traditional solutions based on full backup.
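The storage optimization the paper attributes to erasure coding can be sketched in its simplest form, single-parity XOR (a RAID-5-style illustration; the paper does not specify the code, and production systems typically use Reed-Solomon): k data shards plus one parity shard survive the loss of any single shard, at far less overhead than full replication.

```python
def xor_bytes(a, b):
    # Bytewise XOR of two equal-length shards.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    # Append one parity shard: the XOR of all data shards.
    parity = shards[0]
    for s in shards[1:]:
        parity = xor_bytes(parity, s)
    return shards + [parity]

def recover(shards, lost_index):
    # XOR of all surviving shards reconstructs the missing one,
    # because every byte appears an even number of times except
    # the lost shard's contribution.
    survivors = [s for i, s in enumerate(shards) if i != lost_index]
    rebuilt = survivors[0]
    for s in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, s)
    return rebuilt

data = [b"docA", b"docB", b"docC"]     # three equal-size data shards
stored = encode(data)                  # four shards, e.g. on four nodes
assert recover(stored, 1) == b"docB"   # node 1 lost, shard rebuilt
```

Storing n data shards plus m parity shards costs (n + m)/n of the raw size, versus a full factor per replica, which is the storage saving the paper's geographically distributed setting exploits.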


Author(s):  
David J. Harvey ◽  
Weston B. Struwe ◽  
Anna-Janina Behrens ◽  
Snezana Vasiljevic ◽  
Max Crispin

Structural determination of N-glycans by mass spectrometry is ideally performed by negative ion collision-induced dissociation because the spectra are dominated by cross-ring fragments leading to ions that reveal structural details not available by many other methods. Most glycans form [M − H]− or [M + adduct]− ions, but larger ones (above approx. m/z 2000) typically form doubly charged ions. Differences have been reported between the fragmentation of singly and doubly charged ions, but a detailed comparison does not appear to have been reported. In addition to [M + adduct]− ions (this paper uses phosphate as the adduct), other doubly, triply, and quadruply charged ions of composition [Mn + (H2PO4)n]n− have been observed in mixtures of N-glycans released from viral and other glycoproteins. This paper explores the formation and fragmentation of these different types of multiply charged ions, with particular reference to the presence of diagnostic fragments in the CID spectra, and comments on how these ions can be used to characterize these glycans.
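A small piece of arithmetic, not from the paper, shows why the [Mn + (H2PO4)n]n− cluster ions are easy to confuse with the singly charged adduct: with n glycans, n phosphate adducts, and n charges, the n cancels and the observed m/z is simply M + m(H2PO4) for every n. The glycan mass below is hypothetical, and the adduct mass is a monoisotopic approximation ignoring the electron mass.

```python
M_H2PO4 = 96.9691  # approx. monoisotopic mass of the H2PO4 adduct

def mz_phosphate_cluster(glycan_mass, n):
    """m/z of [Mn + (H2PO4)n]n-: n glycans, n adducts, n negative charges."""
    return (n * glycan_mass + n * M_H2PO4) / n

M = 2368.85  # hypothetical N-glycan monoisotopic mass, illustration only
print(mz_phosphate_cluster(M, 1))  # singly charged adduct
print(mz_phosphate_cluster(M, 2))  # dimer, doubly charged: identical m/z
```

Since m/z alone cannot separate these species, distinguishing them falls to the isotope spacing (1/n) and the diagnostic fragments discussed in the paper.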


2020 ◽  
Vol 245 ◽  
pp. 04027
Author(s):  
X. Espinal ◽  
S. Jezequel ◽  
M. Schulz ◽  
A. Sciabà ◽  
I. Vukotic ◽  
...  

HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These are as much technical as economical. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account the boundary conditions given by our community. Several of these scenarios have been evaluated quantitatively, such as the Data Lake model and incremental improvements of the current computing model with respect to resource needs, costs and operational complexity. To better understand these models in depth, analysis of traces of current data accesses and simulations of the impact of new concepts have been carried out. In parallel, evaluations of the required technologies took place. These were done in testbed and production environments at small and large scale. We will give an overview of the activities and results of the working group, describe the models and summarise the results of the technology evaluation, focusing on the impact of storage consolidation in the form of Data Lakes, where the use of streaming caches has emerged as a successful approach to reduce the impact of latency and bandwidth limitation. We will describe the experience and evaluation of these approaches in different environments and usage scenarios. In addition, we will present the results of the analysis and modelling efforts based on data access traces of the experiments.
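The latency-hiding role of the streaming caches mentioned above can be reduced to a toy read-through cache (an illustration of the general idea, not the WLCG implementation): repeated reads of hot data are served from a site-local copy instead of crossing the wide-area link each time.

```python
remote_reads = 0  # counts simulated wide-area fetches

def remote_fetch(key):
    # Stands in for a high-latency read from consolidated Data Lake storage.
    global remote_reads
    remote_reads += 1
    return f"payload:{key}"

cache = {}

def read(key):
    if key not in cache:           # cold: pay the WAN round trip once
        cache[key] = remote_fetch(key)
    return cache[key]              # warm: served from the local cache

for _ in range(100):
    read("dataset/event-001")      # a hot dataset read repeatedly

print(remote_reads)  # prints 1: only the first access went remote
```

This is why consolidation into Data Lakes stays viable economically: the cache absorbs repeated access, so sites trade local capacity for reduced wide-area bandwidth.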


Author(s):  
Eric H. Pool

D. 41,2,3,21 turns on the issue of how possessio is to be divided. Understanding its content presupposes making a distinction that was self-evident for the Roman jurist but has never been made by later scholars of Roman law. They do not distinguish the varying ‘causes’ of possession (pro emptore … pro suo), which mark different types of lawful possession, from the ‘causes’ of acquisition (causae adquirendi), which justify obtaining possession as by an owner. Taking a legally valid sale as an example, the distinctive features of (possessio) pro emptore in contrast to emptio are established, as well as their relevance for procedural practice. In particular, there are no fewer than six forms of action in the law of inheritance for which these features are relevant. Next, the many negative effects of failing to make this distinction are indicated. There follows an in-depth analysis and interpretation of the main phrases in Paul’s text: (i) quod nostrum non est; (ii) causae adquirendi, in particular iustae causae traditionis; (iii) unum genus possidendi; (iv) species infinitae.

