2013, Vol. 765-767, pp. 1087-1091
Author(s):  
Hong Lin ◽  
Shou Gang Chen ◽  
Bao Hui Wang

Recently, with the development of the Internet and the emergence of new application modes, data storage has acquired new characteristics and new requirements. In this paper, a Distributed Computing Framework Mass Small File storage System (Dnet FS for short), based on Windows Communication Foundation on the .NET platform, is presented. The system is lightweight and highly extensible, runs on inexpensive hardware, supports large-scale concurrent access, and provides a degree of fault tolerance. The framework of the system is analyzed, and its performance is tested and compared. The results show that the system meets its requirements.
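The abstract does not describe Dnet FS's internal layout, so the following is only a minimal Python sketch of a standard small-file technique such systems often build on: packing many small files into one large container blob with an in-memory index. All names here (`PackedStore`, `container.blob`) are invented for illustration and do not reflect the authors' WCF-based design.

```python
# Minimal sketch of the classic small-file packing idea: many small files
# are appended into one large container blob, with an in-memory index
# mapping each name to its (offset, length). This is NOT the Dnet FS
# design, which the abstract does not specify.
class PackedStore:
    def __init__(self, path):
        self.path = path
        self.index = {}            # name -> (offset, length)
        open(path, "ab").close()   # create the container if missing

    def put(self, name, data: bytes):
        with open(self.path, "ab") as f:
            offset = f.tell()      # append mode: position is end of file
            f.write(data)
        self.index[name] = (offset, len(data))

    def get(self, name) -> bytes:
        offset, length = self.index[name]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length)

if __name__ == "__main__":
    store = PackedStore("container.blob")
    store.put("a.txt", b"hello")
    store.put("b.txt", b"world")
    assert store.get("a.txt") == b"hello"
```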


Author(s):  
Dominik Tomaszuk ◽  
Dominik Kuziński ◽  
Mirek Sopek ◽  
Bogusław Swiecicki

Author(s):  
Zakia Challal ◽  
Wafaa Bala ◽  
Hanifa Mokeddem ◽  
Kamel Boukhalfa ◽  
Omar Boussaid ◽  
...  

Author(s):  
Song Kunfang ◽  
Hongwei Lu

MapReduce is a widely adopted computing framework for data-intensive applications running on clusters. This paper proposes an approach that exploits data parallelism in XML processing using MapReduce in Hadoop. The authors' solution seamlessly integrates data storage, labeling, indexing, and parallel queries to process massive amounts of XML data. Specifically, the authors introduce an SDN labeling algorithm and a distributed hierarchical index built on DHTs. More importantly, an advanced two-phase MapReduce solution is designed that efficiently addresses labeling, indexing, and query processing on big XML data. The experimental results show the efficiency and effectiveness of the proposed parallel XML processing approach using Hadoop.
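The paper's SDN labeling algorithm and DHT-based index are not spelled out in the abstract, so the sketch below only illustrates the general two-phase map/reduce pattern it describes, using Dewey-style path labels as a stand-in labeling scheme. It runs in-process in plain Python; the function names (`label_map`, `query_reduce`) are invented, and a real deployment would run these phases as Hadoop jobs.

```python
# Rough two-phase illustration of MapReduce-style XML processing.
# Phase 1 labels each element with a Dewey-style path label; phase 2
# filters the grouped labels to answer a simple tag query. This is a
# stand-in scheme, not the paper's SDN labeling algorithm.
import xml.etree.ElementTree as ET
from collections import defaultdict

def label_map(elem, prefix="1"):
    """Phase 1 'map': emit (tag, dewey_label) for every element."""
    yield elem.tag, prefix
    for i, child in enumerate(elem, start=1):
        yield from label_map(child, f"{prefix}.{i}")

def group_by_key(pairs):
    """Shuffle step: group emitted (key, value) pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def query_reduce(groups, tag):
    """Phase 2 'reduce': return all labels for the queried tag."""
    return sorted(groups.get(tag, []))

if __name__ == "__main__":
    doc = ET.fromstring("<a><b/><c><b/></c></a>")
    groups = group_by_key(label_map(doc))
    print(query_reduce(groups, "b"))   # ['1.1', '1.2.1']
```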


Author(s):  
Srinath Srinivasa

Management of graph-structured data has important applications in several areas. Queries on such data sets are based on structural properties of the graphs, in addition to values of attributes. Answering such queries poses significant challenges, as reasoning about structural properties across graphs typically involves intractable problems. This chapter provides an overview of the challenges in designing databases over graph datasets. The different application areas that use graph databases each pose their own unique set of challenges, making the task of designing a generic graph-oriented DBMS still an elusive goal. The purpose of this chapter is to provide a tutorial introduction to some of the major challenges of graph data management, survey some of the piecemeal solutions that have been proposed, and suggest an overall structure in which these different solutions can be meaningfully placed.
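To make the intractability claim concrete: answering a structural query generally reduces to subgraph isomorphism, which is NP-complete. The sketch below is a naive backtracking matcher over adjacency sets (all names invented for illustration); its exponential worst case is exactly why generic graph query processing is hard.

```python
# Naive backtracking subgraph matcher over undirected adjacency sets.
# Structural graph queries generally reduce to this NP-complete problem,
# which is why they are hard to answer efficiently at scale.
def subgraph_match(pattern, data):
    """Return a mapping pattern-node -> data-node, or None if no match."""
    p_nodes = list(pattern)

    def extend(mapping):
        if len(mapping) == len(p_nodes):
            return dict(mapping)
        p = p_nodes[len(mapping)]
        for d in data:
            if d in mapping.values():
                continue
            # every already-mapped pattern neighbor must map to a data neighbor
            ok = all(mapping[q] in data[d]
                     for q in pattern[p] if q in mapping)
            if ok:
                mapping[p] = d
                result = extend(mapping)
                if result:
                    return result
                del mapping[p]
        return None

    return extend({})

if __name__ == "__main__":
    triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    data = {"a": {"b", "c", "d"}, "b": {"a", "c"},
            "c": {"a", "b"}, "d": {"a"}}
    print(subgraph_match(triangle, data))  # {0: 'a', 1: 'b', 2: 'c'}
```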


Author(s):  
Kornelije Rabuzin ◽  
Martina Šestak

Nowadays, the increased amount and complexity of connected data, driven by the rise of social networks, has shed new light on the importance of managing such data, especially handling information about the connections themselves. The most natural way to represent connected data is as nodes connected by relationships, forming a graph. The idea of storing data as a set of nodes and edges comprising a graph was implemented in various forms in data models of the past. The network data model, developed in the late 1960s, can be considered the first data model to incorporate this idea accurately. However, it was not long before the relational data model appeared and took over the entire database market, which it dominates to this day. Therefore, the objective of this article is to give a timeline overview of graph data storage solutions in order to gain insight into past, present, and future trends in graph database management systems (GDBMSs). Additionally, the most influential factors and reasons for changes in trends in GDBMS usage are analyzed.


Author(s):  
Richard S. Chemock

One of the most common tasks in a typical analysis lab is the recording of images. Many analytical techniques (TEM, SEM, and metallography, for example) produce images as their primary output. Until recently, the most common method of recording images was film. Current PS/2® systems offer very large capacity data storage devices and high resolution displays, making it practical to work with analytical images on PS/2s, thereby sidestepping the traditional film and darkroom steps. This change in operational mode offers many benefits: cost savings, throughput, archiving and searching capabilities, as well as direct incorporation of the image data into reports.

The conventional way to record images involves film, either sheet film (with its associated wet chemistry) for TEM or Polaroid® film for SEM and light microscopy. Although film is inconvenient, it does have the highest quality of all available image recording techniques. The fine-grained film used for TEM has a resolution that would exceed that of a 4096×4096×16-bit digital image.
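As a quick check on the storage figures involved, one uncompressed 4096×4096 frame at 16 bits per pixel comes to 32 MiB, which illustrates why large-capacity storage devices were a prerequisite for film-free workflows:

```python
# Uncompressed size of one 4096 x 4096 image at 16 bits per pixel.
width, height, bits_per_pixel = 4096, 4096, 16
size_bytes = width * height * bits_per_pixel // 8   # 33,554,432 bytes
print(size_bytes / 2**20, "MiB")                    # 32.0 MiB
```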


Author(s):  
T. A. Dodson ◽  
E. Völkl ◽  
L. F. Allard ◽  
T. A. Nolan

The process of moving to a fully digital microscopy laboratory requires changes in instrumentation, computing hardware, computing software, data storage systems, and data networks, as well as in the operating procedures of each facility. Moving from analog to digital systems in the microscopy laboratory is similar to the instrumentation projects being undertaken in many scientific labs. A central problem in any such project is to create the best combination of hardware and software to effectively control the parameters of data collection and then to actually acquire data from the instrument. This problem is particularly acute for the microscopist who wishes to "digitize" the operation of a transmission or scanning electron microscope. Although the basic physics of each type of instrument and the type of data (images and spectra) generated by each are very similar, each manufacturer approaches automation differently. The communications interfaces vary, as do the command languages used to control the instruments.
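A common software answer to this vendor divergence is to hide each command language behind a single driver interface so that acquisition code is written once. The Python sketch below is purely hypothetical (class names and command strings are invented, not any vendor's actual protocol) and only illustrates the pattern:

```python
# Hypothetical driver-abstraction sketch: one interface, per-vendor
# command translation. Class names and command strings are invented;
# real instruments use vendor-specific protocols and transports.
from abc import ABC, abstractmethod

class MicroscopeDriver(ABC):
    @abstractmethod
    def set_magnification(self, mag: int) -> str:
        """Return the vendor-specific command that sets magnification."""

class VendorADriver(MicroscopeDriver):
    def set_magnification(self, mag: int) -> str:
        return f"MAG {mag}"                          # invented ASCII command

class VendorBDriver(MicroscopeDriver):
    def set_magnification(self, mag: int) -> str:
        return f"<set param='mag' value='{mag}'/>"   # invented XML command

def acquire(driver: MicroscopeDriver, mag: int):
    # Acquisition code depends only on the shared interface,
    # not on any one manufacturer's command language.
    print("sending:", driver.set_magnification(mag))

if __name__ == "__main__":
    for drv in (VendorADriver(), VendorBDriver()):
        acquire(drv, 50000)
```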

