Fault-tolerant disk storage and file systems using reflective memory

Author(s):  
N. Vekiarides


Author(s):  
Jan Stender ◽  
Michael Berlin ◽  
Alexander Reinefeld

Cloud computing poses new challenges to data storage. While cloud providers use shared distributed hardware, which is inherently unreliable and insecure, cloud users expect their data to be safely and securely stored, available at any time, and accessible in the same way as their locally stored data. In this chapter, the authors present XtreemFS, a file system for the cloud. XtreemFS reconciles the need of cloud providers for cheap scale-out storage solutions with the need of cloud users for reliable, secure, and easy data access. The main contributions of the chapter are: a description of the internal architecture of XtreemFS, which presents an approach to building large-scale distributed POSIX-compliant file systems on top of cheap, off-the-shelf hardware; a description of the XtreemFS security infrastructure, which guarantees isolation of individual users despite shared and insecure storage and network resources; a comprehensive overview of the replication mechanisms in XtreemFS, which guarantee consistency, availability, and durability of data in the face of component failures; and an overview of the snapshot infrastructure of XtreemFS, which allows momentary states of the file system to be captured and frozen in a scalable and fault-tolerant fashion. The authors also compare XtreemFS with existing solutions and argue for its practicability and potential in the cloud storage market.
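To make the replication guarantee above concrete, the following is a minimal sketch of majority-quorum replicated writes: a write counts as durable once more than half the replicas acknowledge it, so it survives a minority of component failures. This is an illustrative assumption, not XtreemFS's actual replication protocol; all names here (`Replica`, `replicated_write`) are hypothetical.

```python
class Replica:
    """A single storage replica holding versioned key/value data."""

    def __init__(self):
        self.data = {}      # key -> (version, value)
        self.alive = True

    def write(self, key, version, value):
        if not self.alive:
            return False    # a failed replica cannot acknowledge
        current = self.data.get(key)
        if current is None or version > current[0]:
            self.data[key] = (version, value)
        return True

def replicated_write(replicas, key, version, value):
    """Succeed only if a majority of replicas acknowledge the write."""
    acks = sum(r.write(key, version, value) for r in replicas)
    return acks > len(replicas) // 2

replicas = [Replica() for _ in range(3)]
replicas[2].alive = False                  # one of three replicas fails
ok = replicated_write(replicas, "chunk-0", 1, b"payload")
```

With one of three replicas down, the two surviving acknowledgments still form a majority, so the write succeeds and remains durable.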


Author(s):  
A. Calderón ◽  
F. García-Carballeira ◽  
Florin Isailǎ ◽  
Rainer Keller ◽  
Alexander Schulz

Author(s):  
Zhenpeng Xu ◽  
Hairong Chen ◽  
Weini Zeng

Mobile computing systems introduce many new characteristics, such as mobility, disconnections, a finite power source, vulnerability to physical damage, and a lack of stable storage. Many log-based rollback-recovery fault-tolerance schemes have been proposed to address these characteristics. However, these schemes may still incur a dramatic loss of computing performance during failure-free execution, or an inconsistent recovery after a process fault. In this paper, a hybrid log-based fault-tolerance scheme is proposed that combines checkpointing with message logging. The checkpoints, logs, and happened-before relations are logged synchronously into memory at the local mobile host as temporary storage, and asynchronously into persistent disk storage, in the form of an antecedence graph, at the local mobile support station. The proposal supports both independent and propagated consistent recovery. Comparative results show that the proposal incurs a lower failure-free overhead while preserving consistent recoverability.
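The core recovery idea behind checkpointing combined with message logging can be illustrated with a small sketch: after a crash, the process rolls back to its last checkpoint and deterministically replays the messages logged since then. This is a simplified, hypothetical illustration of the general technique, not the paper's actual algorithm or its antecedence-graph bookkeeping.

```python
class Process:
    """Toy process using checkpointing plus pessimistic message logging."""

    def __init__(self):
        self.state = 0
        self.checkpoint = 0     # in a real system, flushed to stable storage
        self.message_log = []   # messages received since the last checkpoint

    def receive(self, msg):
        self.message_log.append(msg)   # log before applying (pessimistic)
        self.state += msg

    def take_checkpoint(self):
        self.checkpoint = self.state
        self.message_log.clear()       # earlier messages are now redundant

    def crash_and_recover(self):
        self.state = self.checkpoint   # roll back to the checkpoint...
        for msg in self.message_log:   # ...then replay the logged messages
            self.state += msg

p = Process()
p.receive(5)
p.take_checkpoint()
p.receive(3)
p.receive(4)
p.crash_and_recover()
```

Because replay is deterministic, recovery reproduces exactly the pre-crash state without forcing other processes to roll back, which is the property that enables independent recovery.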


2018 ◽  
Vol 7 (3.8) ◽  
pp. 151
Author(s):  
Anjali Deore

Big Data consists of large-scale data that is complex and diverse, so new and different kinds of techniques and technologies must be integrated to uncover the hidden values in such large datasets. A Big Data environment is used to organize and examine these diverse sorts of information. Data that is so massive in volume, so varied in range, or moving at such high speed is referred to as Big Data. Acquiring and analyzing Big Data is a challenging job because it involves large distributed file systems that must be flexible, fault tolerant, and scalable. Technologies used by Big Data applications to handle the huge quantity of data include Hadoop, MapReduce, and others. In this paper, a description of Big Data is provided first. The next section describes the different technologies used for managing Big Data, followed by Big Data applications. The last section discusses the relation between Big Data and IoT, as well as IoT for Big Data analytics.
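The MapReduce model mentioned above can be shown with the classic word-count example, written here as plain Python rather than on Hadoop: map emits (word, 1) pairs, the shuffle step groups pairs by key, and reduce sums each group. This is a minimal sketch of the programming model, not Hadoop's distributed implementation.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the values for each key to get per-word counts."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big systems", "fault tolerant data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
# counts["big"] == 2, counts["data"] == 2
```

Because map and reduce are independent per document and per key, a framework such as Hadoop can run them in parallel across a cluster, which is what makes the model scale to large distributed file systems.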


2008 ◽  
Vol 47 (3) ◽  
pp. 312-334 ◽  
Author(s):  
A. Calderón ◽  
F. García-Carballeira ◽  
L. M. Sánchez ◽  
J. D. García ◽  
J. Fernandez

Author(s):  
Carl E. Henderson

Over the past few years it has become apparent in our multi-user facility that the computer system and software supplied in 1985 with our CAMECA CAMEBAX-MICRO electron microprobe analyzer have the greatest potential for improvement and updating of any component of the instrument. While the standard CAMECA software running on a DEC PDP-11/23+ computer under the RSX-11M operating system can perform almost any task required of the instrument, the commands are not always intuitive and can be difficult to remember for the casual user (of which our laboratory has many). Given the widespread and growing use of other microcomputers (such as PCs and Macintoshes) by users of the microprobe, the PDP has become the "oddball" and has also fallen behind the state of the art in terms of processing speed and disk storage capabilities. Upgrade paths within products available from DEC are considered too expensive for the benefits received. After using a Macintosh for other tasks in the laboratory, such as instrument use and billing records, word processing, and graphics display, its unique and "friendly" user interface suggested an easier-to-use system for computer control of the electron microprobe automation. Specifically, a Macintosh IIx was chosen for its capacity for third-party add-on cards used in instrument control.

