persistent data
Recently Published Documents

TOTAL DOCUMENTS: 130 (five years: 28)
H-INDEX: 12 (five years: 1)

2021, Vol 12 (1), pp. 50
Author(s): Andrey Fedotov, Pavel Grishin, Dmitriy Ivonin, Mikhail Chernyavskiy, Eugene Grachev

Modern materials science relies on powerful 3D imaging techniques, such as X-ray computed tomography, that generate high-resolution images of different structures. These methods are widely used to reveal the internal structure of geological cores, so modern approaches are needed for the quantitative analysis, comparison, and classification of the resulting images. Topological persistence is a useful technique for characterizing the internal structure of 3D images. We show how persistent-data analysis provides a useful tool for classifying porous-media structure from 3D images of hydrocarbon reservoirs obtained using computed tomography, and we propose a methodology for 3D structure classification based on geometric-topological analysis via persistent homology.
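To make the topological-persistence idea concrete, here is a minimal sketch of 0-dimensional sublevel-set persistence on a 1D grayscale profile using union-find. It is only an illustration of the concept, not the authors' pipeline: a real analysis of 3D CT volumes would use a library such as GUDHI, but the principle is the same, components are "born" at local minima and "die" when they merge with an older component.

```python
def persistence_pairs(values):
    """Return (birth, death) pairs for connected components of the
    sublevel sets {x : f(x) <= t} as the threshold t grows."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth = {}            # value at which each component root was born
    pairs = []
    active = [False] * n
    for i in sorted(range(n), key=lambda i: values[i]):
        active[i] = True
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # Elder rule: the component born later dies at the merge.
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                pairs.append((birth[young], values[i]))
                parent[young] = old
    # The globally oldest component never dies (infinite persistence).
    return sorted(pairs)

print(persistence_pairs([0, 3, 1, 4]))  # [(1, 3), (3, 3), (4, 4)]
```

Long-lived pairs (large death minus birth) correspond to prominent features such as large pores; short-lived pairs are typically treated as noise.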


2021
Author(s): Armin Bunde, Josef Ludescher, Hans Joachim Schellnhuber

We consider trends in the m seasonal subrecords of a record. To determine the statistical significance of the m trends, one usually determines the p value of each season, either numerically or analytically, and compares it with a significance level α̃. We show in detail, for short- and long-term persistent records, that this procedure, which is standard in climate science, is inadequate since it produces too many false positives (false discoveries). On the basis of the family-wise error rate, and by adapting ideas from multiple-testing correction approaches, we specify how the procedure must be changed to obtain more suitable significance criteria for the m trends. Our analysis is valid for data with all kinds of persistence. Specifically for long-term persistent data, we derive simple analytical expressions for the quantities of interest, which make it easy to determine the statistical significance of a trend in a seasonal record. As an application, we focus on data from 17 Antarctic stations. We show that only four trends in the seasonal temperature data lie outside the bounds of natural variability, in marked contrast to earlier conclusions.
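The inflation of false positives can be quantified with a short sketch. This is not the authors' derivation for persistent records, only the standard independent-tests calculation that motivates family-wise corrections such as Šidák and Bonferroni:

```python
def fwer_uncorrected(alpha, m):
    """Probability of at least one false positive among m independent
    tests when each is run at level alpha."""
    return 1.0 - (1.0 - alpha) ** m

def sidak_level(alpha, m):
    """Per-test level keeping the FWER of m independent tests at alpha."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

def bonferroni_level(alpha, m):
    """Conservative per-test level valid for any dependence structure."""
    return alpha / m

# With 4 seasons each tested at 0.05, the chance of at least one false
# discovery is ~18.5%, not 5%:
print(round(fwer_uncorrected(0.05, 4), 4))  # 0.1855
print(sidak_level(0.05, 4))                 # per-season level ~0.0127
print(bonferroni_level(0.05, 4))            # per-season level 0.0125
```

For long-term persistent data the m seasonal tests are not independent, which is precisely why the paper derives adapted analytical criteria rather than applying these textbook formulas directly.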


Sensors, 2021, Vol 21 (17), pp. 5805
Author(s): João S. Resende, Luís Magalhães, André Brandão, Rolando Martins, Luís Antunes

The growing demand for everyday data insights drives the pursuit of more sophisticated infrastructures and artificial intelligence algorithms. Combined with the growing number of interconnected devices, this raises concerns about scalability and privacy: devices can sense their environment and generate large volumes of potentially identifiable data. Public cloud-based technologies have been proposed as a solution, due to their high availability and low entry costs. However, there are growing concerns regarding data privacy, especially with the introduction of the new General Data Protection Regulation, because of the inherent lack of control over the off-premise computational resources on which public clouds run. Users have no control over the data they upload to such cloud services, which increases the uncontrolled distribution of information to third parties. This work provides a modular approach that uses a cloud-of-clouds to store persistent data and reduce upfront costs, while allowing information to remain private and under users' control. Beyond storage, this work also focuses on usability modules that enable data sharing: any user can securely share and analyze or compute over the uploaded data, using private computation, without revealing private data; one such private computation is the training of machine learning (ML) models. To achieve this, we combine state-of-the-art technologies, such as Multi-Party Computation (MPC) and k-anonymization, to produce a complete system with intrinsic privacy properties.
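As a reference point for the k-anonymization component, here is a minimal sketch of the property itself (the column names are illustrative, not taken from the paper): a table is k-anonymous over its quasi-identifiers if every combination of quasi-identifier values occurs at least k times, so no individual record can be singled out within its equivalence class.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """Check that every quasi-identifier combination appears >= k times."""
    counts = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return all(c >= k for c in counts.values())

# Hypothetical generalized records (ages bucketed, ZIP codes masked):
records = [
    {"age": "30-39", "zip": "752**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "752**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "130**", "diagnosis": "flu"},
]
print(is_k_anonymous(records, ["age", "zip"], 2))  # False: one class of size 1
```

In practice an anonymizer would keep generalizing (wider age buckets, shorter ZIP prefixes) until this check passes for the chosen k.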


2021, Vol 5 (ICFP), pp. 1-29
Author(s): Nicolas Krauter, Patrick Raaf, Peter Braam, Reza Salkhordeh, Sebastian Erdweg, ...

Emerging persistent memory in commodity hardware allows byte-granular access to persistent state at memory speeds. However, to prevent inconsistent state in persistent memory after unexpected system failures, different write semantics are required than for volatile memory. Transaction-based library solutions for persistent memory facilitate the atomic modification of persistent data in languages where memory is explicitly managed by the programmer, such as C/C++. For languages that provide extended capabilities, such as automatic memory management, a more native integration into the language is needed to maintain the high level of memory abstraction. This paper shows how persistent software transactional memory (PSTM) can be tightly integrated into the runtime system of Haskell to atomically manage values of persistent transactional data types. PSTM has a clear interface and semantics extending those of software transactional memory (STM). Its integration with the language's memory management retains features like garbage collection and allocation strategies, and is fully compatible with Haskell's lazy execution model. Our PSTM implementation demonstrates performance competitive with low-level libraries and trivial portability of existing STM libraries to PSTM. The implementation enables further interesting use cases, such as persistent memoization and persistent Haskell expressions.
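The failure-atomic write semantics a PSTM commit must provide can be sketched in a few lines. This is a hedged, language-agnostic illustration (the paper integrates the mechanism into Haskell's runtime; the class below only models the redo-log protocol): buffer the transaction's writes in a log, durably mark it committed, then apply it, so that recovery after a crash either replays or discards the whole update.

```python
class PersistentStore:
    """Toy model of failure-atomic updates via a redo log."""

    def __init__(self):
        self.data = {}    # stands in for persistent memory
        self.log = None   # redo log: [committed_flag, {key: value}]

    def commit(self, writes):
        self.log = [False, dict(writes)]  # 1. persist the redo log
        self.log[0] = True                # 2. durably set the commit mark
        self._apply()                     # 3. apply writes, retire the log

    def _apply(self):
        for key, value in self.log[1].items():
            self.data[key] = value
        self.log = None

    def recover(self):
        # After a crash: replay a log only if it reached the commit mark;
        # otherwise discard it, so the transaction is all-or-nothing.
        if self.log is not None:
            if self.log[0]:
                self._apply()
            else:
                self.log = None

store = PersistentStore()
store.commit({"a": 1, "b": 2})
print(store.data)  # {'a': 1, 'b': 2}
```

A real implementation additionally needs cache-line flushes and fences between steps 1, 2, and 3 so the ordering actually holds in persistent memory.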


2021
Author(s): Haosen Wen, Wentao Cai, Mingzhe Du, Louis Jenkins, Benjamin Valpey, ...

Author(s): Lei Zeng, Weiwei Qiu, Xiaoyi Wang, Hongkai Wang, Yiyang Yao, ...

2021, Vol 54 (7), pp. 1-37
Author(s): Alexandro Baldassin, João Barreto, Daniel Castro, Paolo Romano

The recent rise of byte-addressable non-volatile memory technologies is blurring the dichotomy between memory and storage. In particular, they allow programmers to access persistent data directly instead of relying on traditional interfaces, such as file and database systems. However, they also bring new challenges, as a failure may leave the program in an unrecoverable, inconsistent state. Consequently, both industry and academia have put considerable effort into making the task of programming with such memories easier and, at the same time, efficient from the runtime perspective. This survey summarizes that body of research, from the abstraction level down to the implementation level. As persistent memory starts to appear commercially, the state-of-the-art research condensed here will help investigators quickly get up to date while motivating others to pursue research in the field.


2021, Vol 17 (2), pp. 1-31
Author(s): Daniel Bittman, Peter Alvaro, Pankaj Mehra, Darrell D. E. Long, Ethan L. Miller

Byte-addressable, non-volatile memory (NVM) presents an opportunity to rethink the entire system stack. We present Twizzler, an operating system redesign for this near future. Twizzler removes the kernel from the I/O path, provides programs with memory-style access to persistent data using small (64-bit), object-relative cross-object pointers, and enables simple and efficient long-term sharing of data both between applications and between runs of an application. Twizzler provides a clean-slate programming model for persistent data, realizing the vision of Unix in a world of persistent RAM. We show that Twizzler is simpler, more extensible, and more secure than existing I/O models and implementations by building software for Twizzler and evaluating it on NVM DIMMs. Most persistent pointer operations in Twizzler impose less than 0.5 ns of added latency; Twizzler operations are faster than their Unix counterparts, SQLite queries are faster than on PMDK, and YCSB workloads ran at least 1.1× faster on Twizzler than on native and NVM-optimized SQLite backends.
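The object-relative cross-object pointer idea can be illustrated with a toy resolver. This sketch does not reproduce Twizzler's actual pointer layout or its foreign-object-table (FOT) format; the bit split and data shapes below are assumptions chosen for clarity. The key property it shows is that a pointer names its target indirectly, through the source object's table, so it remains valid wherever objects are mapped.

```python
FOT_BITS = 16                          # assumed split of the 64-bit pointer
OFFSET_MASK = (1 << (64 - FOT_BITS)) - 1

def make_ptr(fot_index, offset):
    """Pack a foreign-object-table index and an offset into 64 bits."""
    return (fot_index << (64 - FOT_BITS)) | offset

def resolve(ptr, source_object, objects):
    """Translate a cross-object pointer to (object_id, offset).
    Index 0 conventionally means 'within the source object itself'."""
    fot_index = ptr >> (64 - FOT_BITS)
    offset = ptr & OFFSET_MASK
    if fot_index == 0:
        return source_object, offset
    target_id = objects[source_object]["fot"][fot_index]
    return target_id, offset

# Object A's table says entry 1 refers to object B:
objects = {"A": {"fot": {1: "B"}}, "B": {"fot": {}}}
p = make_ptr(1, 0x40)
print(resolve(p, "A", objects))  # ('B', 64)
```

Because the pointer stores a table index rather than a virtual address, the kernel never needs to rewrite pointers when mappings change, which is what allows data sharing across runs and across applications.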


2021
Author(s): Nathaniel Roquet, Swapnil P Bhatia, Sarah A Flickinger, Sean Mihm, Michael W Norsworthy, ...

Persistent data storage is the basis of all modern information systems. The long-term value and volume of data are growing at an accelerating rate, pushing extant storage systems to their limits. DNA offers exciting potential as a storage medium, but no practical scheme proposed to date can scale beyond narrow-band write rates. Here, we demonstrate a combinatorial DNA data-encoding scheme capable of megabit-per-second write speeds. The system relies on the rapid, combinatorial assembly of multiple smaller DNA parts dispensed through inkjet printing. To demonstrate this approach, we wrote approximately 25 kB of information into DNA using our system and read the information back out with commercially available nanopore sequencing. Moreover, we demonstrate the ability to replicate and selectively access the information while it is in DNA, opening up the possibility of more sophisticated DNA computation.
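The information-theoretic core of combinatorial encoding can be sketched briefly. The parameters below are illustrative, not the paper's: if each position in a record is written by assembling a k-subset drawn from a library of N distinct parts, one position encodes log2(C(N, k)) bits, far more than one base per dispensing step.

```python
from itertools import combinations
from math import comb, log2

N, K = 8, 3                                 # assumed library size / subset size
SYMBOLS = list(combinations(range(N), K))   # C(8, 3) = 56 possible subsets

def encode(value):
    """Map an integer < C(N, K) to the set of parts to assemble."""
    return SYMBOLS[value]

def decode(parts):
    """Recover the integer from the observed (unordered) set of parts."""
    return SYMBOLS.index(tuple(sorted(parts)))

bits_per_position = log2(comb(N, K))
print(round(bits_per_position, 2))  # 5.81 bits per assembled position
print(decode(encode(42)))           # 42
```

Because the subset is unordered, readout by sequencing only needs to identify which parts are present at a position, not their order, which suits noisy nanopore reads.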


2021, Vol 18 (2), pp. 1-26
Author(s): Ramin Izadpanah, Christina Peterson, Yan Solihin, Damian Dechev

Emerging byte-addressable Non-Volatile Memories (NVMs) enable persistent memory, where process state can be recovered after crashes. To let applications rely on persistent data, durable data structures with failure-atomic operations have been proposed. However, they do not allow users to execute a sequence of operations as a transaction. Meanwhile, persistent transactional memory (PTM) has been proposed, adding durability to Software Transactional Memory (STM). However, PTM suffers from high performance overhead and low scalability due to false aborts, logging, and ordering constraints on persistence. In this article, we propose PETRA, a new approach to constructing persistent transactional linked data structures. PETRA natively supports transactions but, unlike PTM, relies on high-level information from the data structure's semantics. This gives PETRA unique advantages in the form of high performance and high scalability. Our experimental results on various benchmarks demonstrate PETRA's scalability across all workloads and transaction sizes. PETRA outperforms state-of-the-art PTMs by an order of magnitude on transactions of size greater than one and demonstrates superior performance on transactions of size one.
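The contrast between word-level and semantic conflict detection can be sketched with a toy transactional set. This is an illustration of the general idea, not PETRA's algorithm: the transaction records the logical outcome of each read (membership, not memory words) and re-validates only those outcomes at commit, so concurrent updates to unrelated keys do not force an abort.

```python
class TxSet:
    """Toy set with commit-time semantic validation of transactions."""

    def __init__(self):
        self.items = set()

    def run_transaction(self, reads, writes):
        """reads: {key: expected_membership}; writes: keys to insert.
        Commits only if every logical read is still valid."""
        # Semantic validation: an unrelated concurrent insert does not
        # invalidate this transaction, whereas word-level conflict
        # detection on a shared node often would (a "false abort").
        for key, expected in reads.items():
            if (key in self.items) != expected:
                return False           # semantic conflict: abort
        self.items.update(writes)      # failure-atomic in a real system
        return True

s = TxSet()
print(s.run_transaction({1: False}, {1}))  # True: 1 was absent, insert it
print(s.run_transaction({1: False}, {2}))  # False: 1 is now present, abort
print(sorted(s.items))                     # [1]
```

A real implementation additionally needs durable logging and persist ordering so that a committed transaction survives a crash; the sketch only models the abort criterion.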

