An Optimal Initial Partitioning of Large Data Model in Utility Management Systems

2011 · Vol 11 (4) · pp. 41-46
Author(s): D. Capko, A. Erdeljan, M. Popovic, G. Svenda
2012 · Vol 120 (4)
Author(s): D. Capko, A. Erdeljan, G. Svenda, M. Popovic

2019 · Vol 85 · pp. 07020
Author(s): Codrina Maria Ilie, Radu Constantin Gogu

The purpose of this paper is to present the state of the art in groundwater geospatial information management, highlighting the relevant data model characteristics and the technical implementation of European Directive 2007/2/EC, also known as the INSPIRE Directive. The maturity of groundwater geodata management systems is of crucial importance for any kind of activity, be it a research project or an operational monitoring, protection, or exploitation service. An ineffective or inadequate geodata management system can significantly increase costs or even derail the entire activity [1-3]. Furthermore, with advancing technology and extended scientific and operational interdisciplinary connectivity at national and international scales, interoperability is becoming increasingly important in the development of groundwater geospatial information management. From paper recordings to digital spreadsheets, from relational databases to standardized data models, the manner in which groundwater data are gathered, stored, processed, and visualized has changed significantly over time. Aside from this clear technical progress, the designs that capture the natural connections and dependencies between groundwater features and phenomena have also evolved. The second part of our paper addresses the variations that occur when outlining different groundwater geospatial information management models, differences that reflect the complexity of hydrogeological data.


2021 · Vol 14 (11) · pp. 2230-2243
Author(s): Jelle Hellings, Mohammad Sadoghi

The emergence of blockchains has fueled the development of resilient systems that can deal with Byzantine failures due to crashes, bugs, or even malicious behavior. Recently, we have also seen the exploration of sharding in these resilient systems, to provide the scalability required by very large data-based applications. Unfortunately, current sharded resilient systems all use specialized, system-specific approaches to sharding that do not provide the flexibility of traditional sharded data management systems. To improve on this situation, we take a fundamental look at the design of sharded resilient systems. We do so by introducing BYSHARD, a unifying framework for the study of sharded resilient systems. Within this framework, we show how two-phase commit and two-phase locking, two techniques central to providing atomicity and isolation in traditional sharded databases, can be implemented efficiently in a Byzantine environment with minimal use of costly Byzantine-resilient primitives. Based on these techniques, we propose eighteen multi-shard transaction processing protocols. Finally, we practically evaluate these protocols and show that each supports high transaction throughput and provides scalability, while striking its own trade-off between throughput, isolation level, latency, and abort rate. As such, our work provides a strong foundation for the development of ACID-compliant, general-purpose, and flexible sharded resilient data management systems.
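The two-phase commit structure at the heart of these protocols can be sketched briefly. Below is a minimal, non-Byzantine two-phase commit in Python, showing the vote-then-decide shape that BYSHARD adapts to a Byzantine setting; the class and function names are illustrative assumptions, not the paper's API.

```python
# Minimal two-phase commit sketch (illustrative names, not the paper's API).
# Phase 1: every shard touched by the transaction votes on it.
# Phase 2: commit only on a unanimous yes; otherwise abort everywhere.

from dataclasses import dataclass, field

@dataclass(eq=False)
class Shard:
    name: str
    data: dict = field(default_factory=dict)
    staged: dict = field(default_factory=dict)

    def prepare(self, writes: dict) -> bool:
        """Vote yes iff the writes can be staged (always, in this sketch)."""
        self.staged = dict(writes)
        return True

    def commit(self) -> None:
        self.data.update(self.staged)
        self.staged = {}

    def abort(self) -> None:
        self.staged = {}

def two_phase_commit(work: list) -> bool:
    """work is a list of (shard, writes) pairs for one transaction."""
    votes = [shard.prepare(writes) for shard, writes in work]  # phase 1
    if all(votes):                                             # phase 2
        for shard, _ in work:
            shard.commit()
        return True
    for shard, _ in work:
        shard.abort()
    return False

if __name__ == "__main__":
    accounts, orders = Shard("accounts"), Shard("orders")
    ok = two_phase_commit([(accounts, {"alice": 90}),
                           (orders, {"order42": "paid"})])
    print("committed" if ok else "aborted", accounts.data, orders.data)
```

In a Byzantine environment, each vote and each decision must itself be established by a consensus step within the shard, which is why the paper's protocols aim to minimize the number of such costly primitives.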


2013 · Vol 336-338 · pp. 1953-1956
Author(s): Qi Ming Lou, Ying Fang Li, Hong Wei Zhang

The computer room is an important information technology infrastructure for education in colleges; how to balance load, calculate fees flexibly, improve resource utilization, and better serve teachers and students is an urgent problem. This paper first discusses the development trends of computer room management systems. It then presents a data model for an open computer room management system, designed to balance load and improve the utilization efficiency of computer rooms. Finally, it gives an intelligent billing algorithm based on the designed data model and implements the algorithm as a stored procedure in SQL Server 2005.
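The abstract does not reproduce the billing algorithm itself (the paper implements it as a SQL Server 2005 stored procedure). As a rough sketch of the kind of time-based, tiered billing such a system performs, consider the following Python fragment; the rate table, tiers, and rounding policy are assumptions for illustration, not the paper's actual algorithm.

```python
# Hypothetical sketch of time-based, tiered session billing; the rates,
# tiers, and rounding policy are assumptions, not the paper's algorithm.

from datetime import datetime
import math

# Assumed rate table: (daily start hour, end hour, fee per hour).
RATES = [(8, 18, 2.0),   # daytime rate
         (18, 23, 1.5)]  # evening rate

def bill_session(start: datetime, end: datetime) -> float:
    """Charge each started hour at the rate of the tier it begins in."""
    total = 0.0
    hours = math.ceil((end - start).total_seconds() / 3600)
    for i in range(hours):
        hour = (start.hour + i) % 24
        for lo, hi, rate in RATES:
            if lo <= hour < hi:
                total += rate
                break  # hours outside all tiers are free (assumption)
    return round(total, 2)

# A 2h45m session starting at 16:20 spans three started hours
# (16:00 and 17:00 at the daytime rate, 18:00 at the evening rate).
print(bill_session(datetime(2013, 5, 1, 16, 20),
                   datetime(2013, 5, 1, 19, 5)))  # -> 5.5
```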


2006
Author(s): J Steven Hughes, Daniel Crichton, Chris Mattmann, Paul Ramirez

Author(s): Srikumar Krishnamoorthy

CFX Inc, an e-commerce start-up based in India, has built a large e-marketplace that allows sellers and buyers to transact online. The firm currently has 30,000 sellers and aims to have around 50,000 sellers by FY 2015–16. To provide the best shopping experience to its growing customer base, the firm needs to collect, store, and analyze different kinds of data to improve the customer shopping experience. It is in the process of identifying and designing suitable data management systems to sustain and manage its business growth. The management needs a concrete set of recommendations on the nature of the solution, the choice of database, a data model that suits CFX's requirements, the cost-benefit trade-offs involved, and implementation considerations.


2019 · Vol 135 · pp. 04076
Author(s): Marina Bolsunovskaya, Svetlana Shirokova, Aleksandra Loginova

This paper is devoted to the problem of developing and applying data storage systems (DSS), along with tools for managing such systems, to predict failures and meet fault-tolerance specifications. Nowadays, DSS are widely used for collecting data in Smart Home and Smart City management systems; for example, large data warehouses are utilized in traffic management systems. The paper presents the results of an analysis of the current data storage market, together with a project whose purpose is to develop a hardware and software complex for predicting failures in storage systems.


2018 · Vol 37 (3) · pp. 29-49
Author(s): Kumar Sharma, Ujjal Marjit, Utpal Biswas

Resource Description Framework (RDF) is a commonly used data model in the Semantic Web environment. Libraries and various other communities have been using the RDF data model to store valuable data after it is extracted from traditional storage systems. However, because of the large volume of the data, processing and storing it is becoming a nightmare for traditional data-management tools. This challenge demands a scalable and distributed system that can manage data in parallel. In this article, a distributed solution is proposed for efficiently processing and storing the large volume of library linked data held in traditional storage systems. Apache Spark is used for parallel processing of large data sets, and a column-oriented schema is proposed for storing RDF data. The storage system is built on top of the Hadoop Distributed File System (HDFS) and uses the Apache Parquet format to store data in compressed form. The experimental evaluation showed that storage requirements were reduced significantly compared to the Jena TDB, Sesame, RDF/XML, and N-Triples file formats. SPARQL queries are processed using Spark SQL over the compressed data. The experimental evaluation showed good query response times, which decrease significantly as the number of worker nodes increases.
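A minimal sketch of this pipeline, writing triples to Parquet via Spark and answering a SPARQL-style pattern with Spark SQL, is shown below; the path, column names, and sample data are illustrative assumptions, and the flat triple table stands in for the paper's richer column-oriented schema.

```python
# Illustrative sketch: store RDF triples as a Parquet-backed table in Spark
# and answer a SPARQL-like pattern with Spark SQL. Paths, column names, and
# sample data are assumptions; the paper's column-oriented schema is richer.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdf-parquet-sketch").getOrCreate()

triples = spark.createDataFrame(
    [("lib:book1", "dc:title", "Linked Data"),
     ("lib:book1", "dc:creator", "lib:author7"),
     ("lib:book2", "dc:title", "Semantic Web")],
    ["subject", "predicate", "object"])

# Parquet gives compressed, columnar storage on HDFS (or a local path here).
triples.write.mode("overwrite").parquet("/tmp/triples.parquet")

# Reload and register as a SQL view; Spark distributes the scan and join.
spark.read.parquet("/tmp/triples.parquet").createOrReplaceTempView("t")

# Spark SQL equivalent of the SPARQL pattern:
#   SELECT ?title WHERE { ?b dc:creator lib:author7 ; dc:title ?title . }
titles = spark.sql("""
    SELECT t2.object AS title
    FROM t t1 JOIN t t2 ON t1.subject = t2.subject
    WHERE t1.predicate = 'dc:creator' AND t1.object = 'lib:author7'
      AND t2.predicate = 'dc:title'
""")
titles.show()
```

Adding worker nodes lets Spark spread the self-join across the cluster, which is consistent with the reported drop in query response time as the cluster grows.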

