Switching Towards a Proactive Grid Based Data Management Approach

Over time, an enormous quantity of data is being generated, which requires a shrewd technique for handling such a large database to smooth the processes of data storage and dissemination. Storing and exploiting data at this scale requires sufficiently capable systems with a proactive mechanism to meet the accompanying technological challenges. Traditional Distributed File Systems (DFS) struggle to handle dynamic variations and require an undefined settling time. To address these challenges, a proactive grid-based data management approach is proposed that partitions the data into many small chunks, called grids, and places them according to the currently available slots. Data durability and computation speed are balanced by designing data-dissemination and data-eligibility replacement algorithms. This approach substantially enhances data-access durability and writing speed. The performance has been tested on numerous grid datasets: chunks were analysed over multiple iterations by fixing the initial chunk statistics, making a predefined chunk suggestion, and then relocating the chunks after a substantial number of iterations. The results show that chunks reside on an optimal node from the first replacement iteration onwards, an improvement of more than 21% in working clusters compared to the traditional approach.
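A minimal sketch of the chunk-placement idea this abstract describes may help make it concrete. The names (`place_chunks`, `rebalance`), the slot model, and the greedy/threshold policies below are illustrative assumptions, not the authors' algorithms:

```python
# Illustrative sketch (not the paper's code): partition data into small
# chunks ("grids"), place each chunk on the node with the most free slots,
# then relocate chunks until the cluster load is roughly even.
from dataclasses import dataclass, field

CHUNK_SIZE = 64 * 1024  # bytes per chunk; illustrative value

@dataclass
class Node:
    name: str
    capacity: int                       # number of chunk slots on this node
    chunks: list = field(default_factory=list)

    @property
    def free_slots(self) -> int:
        return self.capacity - len(self.chunks)

def split_into_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Partition raw data into fixed-size chunks ("grids")."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def place_chunks(chunks, nodes):
    """Greedy placement: each chunk goes to the node with the most free slots."""
    for chunk in chunks:
        target = max(nodes, key=lambda n: n.free_slots)
        if target.free_slots == 0:
            raise RuntimeError("no free slots left in the cluster")
        target.chunks.append(chunk)

def rebalance(nodes, threshold: int = 2):
    """Eligibility-style relocation: move chunks from the most loaded node
    to the least loaded one until the free-slot spread is within threshold."""
    while True:
        fullest = min(nodes, key=lambda n: n.free_slots)
        emptiest = max(nodes, key=lambda n: n.free_slots)
        if emptiest.free_slots - fullest.free_slots <= threshold:
            break
        emptiest.chunks.append(fullest.chunks.pop())

nodes = [Node("n1", 8), Node("n2", 8), Node("n3", 8)]
place_chunks(split_into_chunks(b"x" * 500_000), nodes)
rebalance(nodes)
```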

Author(s):  
Niels Kinneging ◽  
Meinte Blaas ◽  
Arjen Boon ◽  
Kees Borst ◽  
Gerrit Hendriksen ◽  
...  

Monitoring the environmental effects of a harbour extension and of the compensation measures is a very complex task. The Voordelta area has high natural value, but is also of high economic importance. To implement a monitoring strategy for this area, a multidisciplinary consortium was formed, consisting of a number of institutes and companies. A central data management facility was set up for data storage and management, with a repository giving all team members access to the raw data files. This chapter illustrates the data management approach using the Voordelta monitoring programme for the years 2004 to 2013. From the analysis of the raw data, a number of information products have been developed and disseminated to the authorities and the public through Google Earth. It will be shown that a strong multidisciplinary team and good collaboration are the key to success in this complex programme, and that the way the data have been managed strongly supports this process.


2017 ◽  
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product Approach (IP Approach) is an information management approach that can be used to manage product information and to analyse data quality. Organizations can use IP-Maps to facilitate the collection, storage, maintenance, and use of data in an organized manner. The data management process for academic activities at X University has not yet used the IP Approach, and the university has not paid attention to the quality of its information; so far it has concerned itself only with the system applications used to automate data management in its academic activities. The IP-Map developed in this paper can be used as a basis for analysing the quality of data and information. With the IP-Map, X University is expected to identify which parts of the process need improvement in data and information quality management.

Index terms: IP Approach, IP-Map, information quality, data quality.


2016 ◽  
Vol 108 (1) ◽  
pp. 441-455 ◽  
Author(s):  
Cinzia Daraio ◽  
Maurizio Lenzerini ◽  
Claudio Leporelli ◽  
Paolo Naggar ◽  
Andrea Bonaccorsi ◽  
...  

GigaScience ◽  
2020 ◽  
Vol 9 (10) ◽  
Author(s):  
Daniel Arend ◽  
Patrick König ◽  
Astrid Junker ◽  
Uwe Scholz ◽  
Matthias Lange

Abstract Background The FAIR data principle as a commitment to support long-term research data management is widely accepted in the scientific community. Although the ELIXIR Core Data Resources and other established infrastructures provide comprehensive and long-term stable services and platforms for FAIR data management, a large quantity of research data is still hidden or at risk of getting lost. Currently, high-throughput plant genomics and phenomics technologies are producing research data in abundance, the storage of which is not covered by established core databases. This concerns the data volume, e.g., time series of images or high-resolution hyperspectral data; the quality of data formatting and annotation, e.g., with regard to the structure and annotation specifications of core databases; uncovered data domains; and organizational constraints prohibiting primary data storage outside institutional boundaries. Results To share these potentially dark data in a FAIR way and to master these challenges, the ELIXIR Germany/de.NBI service Plant Genomics and Phenomics Research Data Repository (PGP) implements a “bring the infrastructure to the data” approach, which allows research data to be kept in place and wrapped in a FAIR-aware software infrastructure. This article presents new features of the e!DAL infrastructure software and the PGP repository as a best practice for easily setting up FAIR-compliant and intuitive research data services. Furthermore, the integration of the ELIXIR Authentication and Authorization Infrastructure (AAI) and data discovery services is introduced as a means to lower technical barriers and to increase the visibility of research data. Conclusion The e!DAL software has matured into a powerful and FAIR-compliant infrastructure, while keeping the focus on flexible setup and integration into existing infrastructures and into the daily research process.
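The core of the “bring the infrastructure to the data” idea is that files stay where they are and a FAIR-aware layer is wrapped around them. The following is a hypothetical sketch of that idea only; the function name and metadata fields are illustrative and do not reflect the e!DAL API:

```python
# Hypothetical sketch: research files stay in place, and a FAIR-style
# metadata record (title, authors, per-file checksums) is written alongside
# them. Field names follow common FAIR practice, not the e!DAL API.
import datetime
import hashlib
import json
import pathlib

def register_in_place(data_dir: str, title: str, authors: list[str]) -> dict:
    root = pathlib.Path(data_dir)
    files = sorted(p for p in root.rglob("*") if p.is_file())
    record = {
        "title": title,
        "authors": authors,
        "created": datetime.date.today().isoformat(),
        "files": [
            {
                "path": str(p.relative_to(root)),
                "bytes": p.stat().st_size,
                # checksums support later integrity and reuse audits
                "sha256": hashlib.sha256(p.read_bytes()).hexdigest(),
            }
            for p in files
        ],
    }
    # the data never moves; only this record is added next to it
    (root / "metadata.json").write_text(json.dumps(record, indent=2))
    return record
```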


2011 ◽  
Vol 8 (2) ◽  
pp. 85-94
Author(s):  
Hendrik Mehlhorn ◽  
Falk Schreiber

Summary DBE2 is an information system for managing biological experiment data from different data domains in a unified and simple way. It provides persistent data storage, worldwide accessibility of the data, and the ability to load, save, modify, and annotate the data. It is seamlessly integrated into the VANTED system as an add-on, thereby extending the VANTED platform towards data management. DBE2 also utilizes controlled vocabulary from the Ontology Lookup Service to allow the management of terms such as substance names, species names, and measurement units, aiming to ease data integration.
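For readers unfamiliar with controlled-vocabulary lookups, the sketch below queries the public EBI Ontology Lookup Service REST API for a term, as DBE2 does for substances, species, and units. The endpoint path and response fields follow the public OLS API but should be verified against the current OLS documentation; this is not DBE2 code:

```python
# Sketch of a controlled-vocabulary lookup against the EBI Ontology Lookup
# Service (OLS) REST search endpoint.
import requests

def lookup_term(query: str, rows: int = 5):
    """Return (label, OBO id, ontology) tuples for the best OLS matches."""
    resp = requests.get(
        "https://www.ebi.ac.uk/ols4/api/search",
        params={"q": query, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    docs = resp.json()["response"]["docs"]
    return [(d.get("label"), d.get("obo_id"), d.get("ontology_name"))
            for d in docs]

for label, obo_id, ontology in lookup_term("glucose"):
    print(f"{obo_id}\t{label}\t({ontology})")
```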


2019 ◽  
Vol 3 (2) ◽  
pp. 152
Author(s):  
Xianglan Wu

In today's society, the rise and rapid development of the Internet produce a huge amount of data every day, and traditional modes of data processing and storage cannot fully analyse and mine these data. More and more new information technologies (such as cloud computing, virtualization, and big data) have emerged and been applied, the network has turned from informatization to intelligence, and campus construction has entered the stage of smart campus construction. Building a smart campus draws on big data and cloud computing technology to improve the quality of information services in colleges and universities by integrating, storing, and mining huge amounts of data.


2021 ◽  
Vol 2066 (1) ◽  
pp. 012022
Author(s):  
Cheng Luo

Abstract Due to the continuous development of information technology, data has increasingly become the core of the daily operation of enterprises and institutions and the main basis for decision-making. At the same time, due to the development of networks, the storage and management of computer data have attracted more and more attention. Aiming at the common problems of computer data storage and management in practical work, this paper analyses the objects and content of data management, surveys the state of computer data storage and management in China over the past two years, and interviews and tests the programming data on the design platform. The research results are then applied in practice: a storage and management platform is designed on the basis of big data. The design adopts the CIRC tree, a special B+ tree in which the linear node structure is changed into a ring structure, which greatly reduces the number of data persistence instructions and the associated performance overhead. The results show that, compared with the most advanced B+ tree designs for non-volatile memory, the CIRC tree achieves 3.1x and 2.5x performance improvements in reading and writing, respectively; a 1.5x improvement over the earlier NV-Tree designed for non-volatile memory; and an 8.4x improvement over the latest FAST-FAIR design. In a later stage, expanding the platform's functions will support the analysis and construction of data storage and management functions and further improve data management capability.
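To see why a ring-structured node cuts persistence instructions, compare it with a sorted-array leaf. The sketch below is a simulation of flush counts under stated assumptions (one flush per written slot), not the paper's implementation, and the class names are illustrative:

```python
# Illustrative flush-count comparison: a sorted-array B+ tree leaf must shift
# entries on insert (one persistence flush per written slot), while a
# ring-structured leaf appends at the tail and persists only the new slot
# plus a header update. Counts are simulated, not measured.
import bisect
import random

class SortedLeaf:
    def __init__(self):
        self.keys, self.flushes = [], 0

    def insert(self, key):
        pos = bisect.bisect_left(self.keys, key)
        self.keys.insert(pos, key)
        # the new slot and every shifted slot must be flushed to NVM
        self.flushes += len(self.keys) - pos

class RingLeaf:
    def __init__(self, capacity=64):
        self.slots = [None] * capacity
        self.tail = 0
        self.flushes = 0

    def insert(self, key):
        self.slots[self.tail] = key              # write the new slot ...
        self.tail = (self.tail + 1) % len(self.slots)
        self.flushes += 2                        # ... flush it, then the header

keys = random.sample(range(10_000), 64)
a, b = SortedLeaf(), RingLeaf()
for k in keys:
    a.insert(k)
    b.insert(k)
print(a.flushes, "flushes (sorted) vs", b.flushes, "flushes (ring)")
```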


Author(s):  
Yamini Gourishankar ◽  
Frank Weisgerber

Abstract Calculating the wind pressures on structures involves more data retrieval from the ASCE standard than subjective reasoning on the designer's part: once the initial design requirements are established, the computation procedure is straightforward. This paper discusses an approach to automating wind pressure computation for one-story and multi-story buildings using a data management strategy implemented with the ORACLE database management system. In the prototype system developed herein, the designer supplies the design requirements in the form of the structure's exposure type, its dimensions, and the nature of occupancy of the structure. Using these requirements, the program retrieves the necessary standards data from an independently maintained database and computes the wind pressures. The final output contains the wind pressures on the main wind force resisting system and on the components and cladding, for wind blowing parallel and perpendicular to the ridge. The knowledge encoded in the system was gained from ASCE codes, design guidelines, and interviews with various experts and practitioners. Several information modeling methodologies, such as the entity-relationship model and IDEF1X, were employed in the system analysis and design phase of this project. The prototype is implemented on an IBM PC using the ORACLE DBMS and the C programming language. Appendix A illustrates a sample run.
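The table-driven computation the paper automates can be sketched in a few lines: look up exposure coefficients in a standards table and evaluate an ASCE-style design pressure of the general form p = q · G · Cp. The coefficient values, table layout, and defaults below are illustrative placeholders (the prototype used an ORACLE database, not an in-memory dict), not entries from the ASCE standard:

```python
# Hedged sketch of table-driven wind pressure computation.
# (exposure category, height in ft) -> velocity pressure coefficient Kz
KZ_TABLE = {
    ("B", 15): 0.57, ("B", 30): 0.70,
    ("C", 15): 0.85, ("C", 30): 0.98,
}

def velocity_pressure(v_mph: float, exposure: str, height_ft: int) -> float:
    """qz = 0.00256 * Kz * V^2 (psf), the general ASCE 7 form without
    topographic or directionality factors."""
    kz = KZ_TABLE[(exposure, height_ft)]
    return 0.00256 * kz * v_mph ** 2

def design_pressure(v_mph: float, exposure: str, height_ft: int,
                    gust_factor: float = 0.85, cp: float = 0.8) -> float:
    """p = q * G * Cp for a windward wall; G and Cp are illustrative defaults."""
    return velocity_pressure(v_mph, exposure, height_ft) * gust_factor * cp

print(f"{design_pressure(90, 'C', 30):.1f} psf")
```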


Author(s):  
N. Fumai ◽  
C. Collet ◽  
M. Petroni ◽  
K. Roger ◽  
E. Saab ◽  
...  

Abstract A Patient Data Management System (PDMS) is being developed for use in the Intensive Care Unit (ICU) of the Montreal Children’s Hospital. The PDMS acquires real-time patient data from a network of physiological bedside monitors and facilitates the review and interpretation of these data by presenting them as graphical trends, charts, and plots on a color video display. Because of the large amounts of data involved, data storage and data management are important tasks of the PDMS. The data management structure must integrate varied data types and provide database support for different applications, while preserving the real-time acquisition of network data. This paper outlines a new data management structure based primarily on the relational database of OS/2 Extended Edition. The relational database design is expected to solve the query shortcomings of the previous data management structure, as well as offer support for security and concurrency. The discussion also highlights the future advantages of a network implementation.
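The kind of relational layout the abstract describes can be sketched as a time-series table keyed by patient, timestamp, and parameter, from which graphical trends are simple ordered queries. The schema and names below are illustrative assumptions; the actual system used the OS/2 Extended Edition database, while SQLite is used here only to keep the example self-contained:

```python
# Sketch of a relational layout for bedside-monitor time series.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    bed        TEXT NOT NULL
);
CREATE TABLE vital_sample (
    patient_id INTEGER REFERENCES patient(patient_id),
    ts         TEXT NOT NULL,   -- sample timestamp (ISO 8601)
    parameter  TEXT NOT NULL,   -- e.g. 'HR', 'SpO2', 'ABP_mean'
    value      REAL NOT NULL,
    PRIMARY KEY (patient_id, ts, parameter)
);
""")
conn.execute("INSERT INTO patient VALUES (1, 'Demo Patient', 'ICU-3')")
conn.executemany(
    "INSERT INTO vital_sample VALUES (1, ?, 'HR', ?)",
    [("2024-01-01T10:00", 112.0), ("2024-01-01T10:01", 118.0)],
)

# A graphical-trend query: heart-rate samples for one patient, time-ordered.
for ts, value in conn.execute(
    "SELECT ts, value FROM vital_sample "
    "WHERE patient_id = 1 AND parameter = 'HR' ORDER BY ts"
):
    print(ts, value)
```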

