Distributed Digital Preservation in the Cloud

2013
Vol 8 (1)
pp. 107-119
Author(s):
David S. H. Rosenthal
Daniel L. Vargas

The LOCKSS system is a leading technology in the field of Distributed Digital Preservation. Libraries run LOCKSS boxes, PC servers with local disk storage, to collect and preserve content published on the Web. They form nodes in a network that continually audits their content and repairs any damage. Libraries wondered whether they could use cloud storage for their LOCKSS boxes instead of local disks. We review the possible configurations, evaluate their technical feasibility, assess their economic feasibility, report on an experiment in which we ran a production LOCKSS box in Amazon's cloud service, and describe some simulations of future costs of cloud and local storage. We conclude that current cloud storage services are not cost-competitive with local hardware for long-term storage, including for LOCKSS boxes.
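A minimal sketch of the kind of cloud-versus-local cost simulation the abstract describes follows. All prices, the drive replacement interval, the overhead factor, and the price-decline rates are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: cumulative cost of local disks vs. rented cloud storage.
# Every number below is an assumption chosen only to illustrate the shape
# of the comparison, not the paper's data.

def local_cost(tb, years, drive_price_per_tb=50.0, drive_life=4,
               kryder_rate=0.20, overhead=2.0):
    """Cumulative cost of owning local disks for `tb` terabytes.

    Drives are replaced every `drive_life` years; the $/TB price falls
    by `kryder_rate` per year (a Kryder's-law-style decline); `overhead`
    covers power, space, and administration."""
    total = 0.0
    price = drive_price_per_tb
    for year in range(years):
        if year % drive_life == 0:          # buy or replace the drives
            total += tb * price * overhead
        price *= (1.0 - kryder_rate)        # hardware prices keep falling
    return total

def cloud_cost(tb, years, price_per_tb_month=25.0, price_drop=0.05):
    """Cumulative cost of renting `tb` terabytes of cloud storage,
    assuming a slower annual price decline than local hardware."""
    total = 0.0
    monthly = price_per_tb_month
    for year in range(years):
        total += tb * monthly * 12
        monthly *= (1.0 - price_drop)
    return total

if __name__ == "__main__":
    for horizon in (3, 5, 10):
        print(horizon, "years:",
              "local", round(local_cost(10, horizon)),
              "cloud", round(cloud_cost(10, horizon)))
```

Under these assumed rates the gap widens with the planning horizon, which is the qualitative effect the abstract's conclusion rests on: rental charges recur forever, while owned hardware is repurchased at ever-lower prices.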

2014
Vol 2014
pp. 1-10
Author(s):
Ohmin Kwon
Dongyoung Koo
Yongjoo Shin
Hyunsoo Yoon

With the popularization of cloud services, multiple users can easily share and update their data through cloud storage. To ensure data integrity and consistency in cloud storage, audit mechanisms have been proposed. However, existing approaches have security vulnerabilities and incur substantial computational overhead. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove resistance against several attacks and show lower computation cost and shorter auditing time compared with conventional approaches. The results show that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data.
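The paper's exact index table construction is not given in the abstract; the following hedged sketch illustrates only the general idea behind index-table management in dynamic auditing schemes: each block keeps a stable identifier and a version counter, so inserting or modifying one block does not invalidate the authentication tags of the others.

```python
# Hedged illustration of a generic index table for dynamic shared data.
# This is NOT the paper's construction; the toy hash-based tag below
# stands in for the homomorphic authenticators a real scheme would use.

import hashlib

class IndexTable:
    def __init__(self):
        self.rows = []          # list of (block_id, version) per logical position
        self._next_id = 0

    def insert(self, pos):
        """Insert a new block at logical position `pos`; other rows keep
        their (id, version) pairs, so their tags stay valid."""
        self.rows.insert(pos, (self._next_id, 1))
        self._next_id += 1

    def modify(self, pos):
        """Record an update by bumping only that block's version."""
        block_id, version = self.rows[pos]
        self.rows[pos] = (block_id, version + 1)

    def delete(self, pos):
        self.rows.pop(pos)

def tag(block_data: bytes, block_id: int, version: int, key: bytes) -> bytes:
    """Toy tag binding the data to its (id, version) pair, so a replayed
    stale block fails verification."""
    return hashlib.sha256(key + block_id.to_bytes(8, "big")
                          + version.to_bytes(8, "big") + block_data).digest()
```

Binding tags to a stable identifier rather than to the block's position is what keeps update cost local: a single insert shifts positions but changes no other row's identifier or version.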


2020
Author(s):
Ward Fisher
Dennis Heimbigner

NetCDF has historically offered two different storage formats for the netCDF data model: files based on the original netCDF binary format, and files based on the HDF5 format. While this has proven effective in the past for traditional disk storage, it is less efficient for modern cloud-focused technologies such as those provided by Amazon S3, Microsoft Azure, IBM Cloud Object Storage, and other cloud service providers. As with the decision to base the netCDF Extended Data Model and File Format on the HDF5 technology, we do not want to reinvent the wheel when it comes to cloud storage. There are a number of existing technologies that the netCDF team can use to implement native object storage capabilities. Zarr enjoys broad popularity within the Unidata community, particularly among our Python users. By integrating support for the latest Zarr specification (while not locking ourselves into a specific version), we will be able to provide the broadest support for data written by other software packages which use the latest Zarr specification.
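A minimal sketch of why the Zarr layout suits object stores: an array is a small JSON metadata document plus one key per chunk, so each key maps naturally onto an S3 or Azure object. This follows the Zarr v2 specification in outline only; real code should use the zarr library rather than hand-rolled keys.

```python
# Hedged sketch of a Zarr-v2-style key/value layout for a 2-D array,
# uncompressed, C order. The dict stands in for an object-store bucket.

import json
import numpy as np

def write_zarr_like(store: dict, arr: np.ndarray, chunk: int) -> None:
    """Write `arr` into `store` as Zarr-v2-style key/value pairs."""
    # One small metadata object describes the whole array.
    store[".zarray"] = json.dumps({
        "zarr_format": 2,
        "shape": list(arr.shape),
        "chunks": [chunk, chunk],
        "dtype": arr.dtype.str,
        "compressor": None,
        "fill_value": 0,
        "order": "C",
        "filters": None,
    }).encode()
    # Each chunk becomes its own key, fetchable independently.
    for i in range(0, arr.shape[0], chunk):
        for j in range(0, arr.shape[1], chunk):
            key = f"{i // chunk}.{j // chunk}"
            store[key] = arr[i:i + chunk, j:j + chunk].tobytes()

store = {}                                   # stand-in for a bucket
write_zarr_like(store, np.arange(16, dtype="f4").reshape(4, 4), chunk=2)
print(sorted(store))   # ['.zarray', '0.0', '0.1', '1.0', '1.1']
```

Because each chunk is an independent object, a reader can fetch only the chunks it needs over HTTP, which is exactly the access pattern monolithic HDF5 files make difficult on object storage.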


2019
Vol 214
pp. 03039
Author(s):  
Artem Petrosyan

Migration of COMPASS data processing to the Grid environment started in 2015 with a small prototype deployed on a single virtual machine. Since the summer of 2017, the system has worked in production mode, distributing jobs to two traditional Grid sites: CERN and JINR. The infrastructure of the COMPASS Grid Production System now includes six virtual machines, each reserved for one production service: database, PanDA, Auto Pilot Factory, monitoring, the CRIC information system and, finally, the production system (ProdSys) management instance, which provides a user interface for the production manager and hosts the services for automatic processing. Support for the COMPASS virtual organization is provided by CERN IT. CRIC is also deployed on the CERN Cloud Service. The other ProdSys services are deployed on the JINR Cloud Service. There are two storage elements at CERN: EOS for short-term storage and Castor for long-term storage. During the last year, along with providing a 24/7 service, the system was instrumented with many features that automate data processing as much as possible. Recently, the Blue Waters HPC facility has become part of the computing infrastructure of the experiment. Details of the implementation, workflow management, and an infrastructure overview are presented in this article.
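A hedged sketch of the two-tier storage routing described above: outputs go to EOS for short-term access and to Castor for long-term archiving. The endpoint URLs and the retention threshold are illustrative assumptions, not the experiment's actual configuration.

```python
# Hedged sketch only: hypothetical endpoints and threshold, chosen to
# illustrate short-term vs. long-term routing, not COMPASS's real setup.

STORAGE = {
    "short_term": {"site": "CERN", "endpoint": "eos://eos.example.cern.ch"},
    "long_term":  {"site": "CERN", "endpoint": "castor://castor.example.cern.ch"},
}

def choose_storage(retention_days: int) -> str:
    """Route output by intended retention: EOS below the (assumed)
    90-day threshold, Castor otherwise."""
    tier = "short_term" if retention_days < 90 else "long_term"
    return STORAGE[tier]["endpoint"]

print(choose_storage(30))    # eos://...    working files
print(choose_storage(3650))  # castor://... archival copies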


T-Comm
2021
Vol 15 (2)
pp. 46-53
Author(s):
Veronika M. Antonova
Elena E. Malikova
Alexey E. Panov
Igor V. Spichek
...

An operating device has been designed for the long-term aggregation, storage, and visualization of climate records, with a view to their further publication in a cloud service. To address this problem, a number of technical issues related to the device concept and its operating algorithms were solved. The device uses the MQTT protocol and a microcontroller unit based on the ESP8266 chip, which is designed for use in Internet of Things (IoT) devices. The system is based on open-source software and provides access to the received data for all authorized users. The system is easily expanded, since the number of attached sensors and peripheral units can change and the program can be adapted to emerging tasks. The ability to connect to the Internet from any access point makes the device mobile and permits measurements anywhere within the range of a Wi-Fi network. In some instances, it is convenient to use smartphones or tablets with Internet access via cellular networks for research and scientific experiments; in this case, mobile devices can act as monitors to control the system's operation. The device is useful for research in which data collection over a long period and long-term storage of information, with the possibility of further processing, are essential. Examples include automatic monitoring of equipment, medical supervision of patients' health, and the gathering and processing of various climate parameters. Undergraduate students can also make use of the device when studying IoT technology.
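A minimal sketch of the publishing side of such a device, assuming the paho-mqtt 1.x client API; the broker address, topic name, and the simulated sensor readings are illustrative stand-ins, not the actual device's values.

```python
# Hedged sketch: periodic MQTT publication of climate readings, in the
# spirit of the device described above. Assumes paho-mqtt 1.x; broker,
# topic, and readings are hypothetical.

import json
import random
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.org"        # hypothetical MQTT broker
TOPIC = "climate/station1/readings"  # hypothetical topic

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()                  # handle network traffic in background

while True:
    reading = {
        "ts": time.time(),
        "temperature_c": round(20 + random.uniform(-5, 5), 2),   # stand-in
        "humidity_pct": round(50 + random.uniform(-10, 10), 2),  # for sensors
    }
    # QoS 1 gives at-least-once delivery, a common choice for telemetry
    # that must survive brief Wi-Fi dropouts.
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(60)                   # one sample per minute
```

On the ESP8266 itself the same pattern would typically be written against a lightweight MQTT client; any subscriber (including a phone or tablet, as the abstract notes) can then monitor the topic.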


2004
Vol 33 (2)
Author(s):
Susanne Dobratz
Heike Neuroth

Sponsored by the German Ministry of Education and Research with funding of 800,000 euros, the German Network of Expertise in long-term storage of digital resources (


Author(s):  
Allen Angel
Kathryn A. Jakes

Fabrics recovered from archaeological sites often are so badly degraded that fiber identification based on physical morphology is difficult. Although diagenetic changes may be viewed as destructive to the factors necessary for the discernment of fiber information, changes occurring during any stage of a fiber's lifetime leave a record within the fiber's chemical and physical structure. These alterations may offer valuable clues to understanding the conditions of the fiber's growth, fiber preparation and fabric processing technology, and the conditions of burial or long-term storage (1).

Energy dispersive spectrometry has been reported to be suitable for the determination of mordant treatment on historic fibers (2,3) and has been used to characterize the metal wrapping of combination yarns (4,5). In this study, a technique is developed which provides fractured cross sections of fibers for x-ray analysis and elemental mapping. In addition, backscattered electron imaging (BSI) and energy dispersive x-ray microanalysis (EDS) are utilized to correlate elements to their distribution in fibers.


2014
Vol 3
pp. 183-195
Author(s):  
Elena Macevičiūtė

The article deals with the requirements and needs for long-term digital preservation in different areas of scholarly work. The concept of long-term digital preservation is introduced by comparing it to the concepts of digitization and archiving, and is defined with an emphasis on dynamic activity within a certain time line. The structure of digital preservation is presented with regard to the elements of the activity as understood in Activity Theory. The life-cycle of digitization processes forms the basis of the main processing of preserved data in a preservation archival system.

The author draws on the differences between the humanities and social sciences on the one hand and the natural and technological sciences on the other. The empirical data characterizing the needs for digital preservation within different areas of scholarship are presented and show differences in approaches to long-term digital preservation, as well as differences in selecting items and implementing digital preservation projects. Institutions and organizations can also develop different understandings of the preservation requirements for digital documents and other objects.

The final part of the paper is devoted to some general problems pertaining to long-term digital preservation, with an emphasis on the responsibility for the whole process of safeguarding the cultural and scholarly heritage for re-use by later generations. It is suggested that the longevity of libraries, in comparison with the much shorter life-span of private companies, strengthens the claim of memory institutions to play the central role in long-term digital preservation.


2001
Vol 6 (2)
pp. 3-14
Author(s):
R. Baronas
F. Ivanauskas
I. Juodeikienė
A. Kajalavičius

A model of moisture movement in wood, in a two-dimensional-in-space formulation, is presented in this paper. The finite-difference technique was used to obtain the solution of the problem. The model was applied to predict the moisture content in sawn pine boards during long-term storage under outdoor climatic conditions. Satisfactory agreement between the numerical solution and the experimental data was obtained.
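A minimal sketch of the kind of two-dimensional finite-difference scheme described, for the diffusion equation du/dt = D (d²u/dx² + d²u/dy²) on a board cross-section. The diffusivity, grid, time step, and boundary moisture content are illustrative values, not the paper's parameters.

```python
# Hedged sketch: explicit finite-difference solution of 2-D moisture
# diffusion in a board cross-section. All parameters are assumptions
# chosen for illustration, not the paper's fitted values.

import numpy as np

D = 1e-9                     # moisture diffusivity, m^2/s (assumed constant)
nx, ny = 30, 10              # grid points across width and thickness
dx = dy = 1e-3               # 1 mm grid spacing
dt = 0.2 * dx**2 / (4 * D)   # well under the explicit stability limit dx^2/(4D)

u = np.full((nx, ny), 0.25)  # initial moisture content (fraction)
u_boundary = 0.12            # assumed equilibrium moisture at the surface

steps = 5000
for _ in range(steps):
    # Dirichlet condition: surfaces held at the outdoor equilibrium value.
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = u_boundary
    # Five-point Laplacian on the interior nodes.
    lap = ((u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
           + (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy**2)
    u[1:-1, 1:-1] += D * dt * lap   # forward-Euler update

print(f"mean moisture content after {steps} steps: {u.mean():.4f}")
```

The explicit scheme is the simplest choice; for the long time horizons of outdoor storage an implicit scheme would allow much larger time steps at the cost of solving a linear system per step.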

