Integrated Monitoring in the Voordelta, The Netherlands

Author(s): Niels Kinneging, Meinte Blaas, Arjen Boon, Kees Borst, Gerrit Hendriksen, ...

Monitoring the environmental effects of a harbour extension and the associated compensation measures is a very complex task. The Voordelta area has high natural value, but it is also of great economic importance. To implement a monitoring strategy for this area, a multidisciplinary consortium was formed, consisting of a number of institutes and companies. This chapter illustrates the data management approach using the Voordelta monitoring programme for the years 2004 to 2013. A central data management facility was set up for data storage and management, and a repository gives all team members access to the raw data files. From the analysis of the raw data, a number of information products have been developed and disseminated to the authorities and the public through Google Earth. It will be shown that a strong multidisciplinary team and good collaboration are the keys to success in this complex programme, and that the way the data have been managed strongly supports this process.

Blood, 2011, Vol 118 (21), pp. 4763-4763
Author(s): William T. Tse, Kevin K. Duh, Morris Kletzel

Abstract 4763 Data collection and analysis in clinical studies in hematology often require the use of specialized databases, which demand extensive information technology (IT) support and are expensive to maintain. With the goal of reducing the cost of clinical trials and promoting outcomes research, we have devised a new informatics framework that is low-cost, low-maintenance, and adaptable to both small- and large-scale clinical studies. This framework is based on the idea that most clinical data are hierarchical in nature: a clinical protocol typically entails the creation of sequential patient files, each of which documents multiple encounters, during which clinical events and data are captured and tagged for later retrieval and analysis. These hierarchical trees of clinical data can be easily stored in the Hypertext Markup Language (HTML) document format, which is designed to represent similar hierarchical data on web pages. In this framework, the stored clinical data will be structured according to a web standard called the Document Object Model (DOM), for which powerful informatics techniques have been developed to allow efficient retrieval and collation of data from the HTML documents. The proposed framework has many potential advantages. The data will be stored in plain text files in the HTML format, which is both human and machine readable, hence facilitating data exchange between collaborating groups. The framework requires only a regular web browser to function, thereby easing its adoption in multiple institutions. There will be no need to set up or maintain a relational database for data storage, thus minimizing data fragmentation and reducing the demand for IT support. Data entry and analysis will be performed mostly on the client computer, requiring the use of a backend server only for central data storage. Utility programs for data management and manipulation will be written in JavaScript and jQuery, computer languages that are free, open-source, and easy to maintain. Data can be captured, retrieved, and analyzed on different devices, including desktop computers, tablets, or smartphones. Encryption and password protection can be applied in document storage and data transmission to ensure data security and HIPAA compliance. In a pilot project to implement and test this informatics framework, we designed prototype programming modules to perform individual tasks commonly encountered in clinical data management. The functionalities of these modules included user-interface creation, patient data entry and retrieval, visualization and analysis of aggregate results, and exporting and reporting of extracted data. These modules were used to access simulated clinical data stored in a remote server, employing standard web browsers available on all desktop computers and mobile devices. To test the capability of these modules, benchmark tests were performed. Simulated datasets of complete patient records, each with 1,000 data items, were created and stored in the remote server. Data were retrieved via the web in a gzip-compressed format. Retrieval of 100, 300, and 1,000 such records took only 1.01, 2.45, and 6.67 seconds using a desktop computer via a broadband connection, or 3.67, 11.39, and 30.23 seconds using a tablet computer via a 3G connection. Filtering of specific data from the retrieved records was equally speedy. Automated extraction of relevant data from 300 complete records for a two-sample t-test analysis took 1.97 seconds.
A similar extraction of data for a Kaplan-Meier survival analysis took 4.19 seconds. The program allowed the data to be presented separately for individual patients or in aggregate for different clinical subgroups. A user-friendly interface enabled viewing of the data in either tabular or graphical form. Incorporation of a new web browser technique permitted caching of the entire dataset locally for off-line access and analysis. Adaptable programming allowed efficient export of data in different formats for regulatory reporting purposes. Once the system was set up, no further intervention from the IT department was necessary. In summary, we have designed and implemented a prototype of a new informatics framework for clinical data management, which should be low-cost and highly adaptable to various types of clinical studies. Field-testing of this framework in real-life clinical studies will be the next step to demonstrate its effectiveness and potential benefits. Disclosures: No relevant conflicts of interest to declare.
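To make the storage and retrieval model above concrete, here is a minimal sketch in TypeScript of a protocol–patient–encounter tree stored as HTML, queried through the DOM, and fed into a Welch two-sample t statistic. The element classes, attribute names, and helper functions are illustrative assumptions, not the authors' actual schema or code.

```typescript
// Hierarchical clinical data stored as plain HTML (illustrative schema only).
const storedRecords = `
<div class="protocol" data-id="TRIAL-01">
  <div class="patient" data-id="P001">
    <div class="encounter" data-date="2011-03-01">
      <span class="item" data-name="arm" data-value="A"></span>
      <span class="item" data-name="wbc" data-value="5.2"></span>
    </div>
  </div>
  <div class="patient" data-id="P002">
    <div class="encounter" data-date="2011-03-02">
      <span class="item" data-name="arm" data-value="B"></span>
      <span class="item" data-name="wbc" data-value="7.8"></span>
    </div>
  </div>
</div>`;

// Collate one variable per treatment arm by querying the DOM tree directly.
function collectByArm(html: string, variable: string): Map<string, number[]> {
  const doc = new DOMParser().parseFromString(html, "text/html");
  const groups = new Map<string, number[]>();
  doc.querySelectorAll(".patient").forEach(patient => {
    const arm = patient.querySelector('.item[data-name="arm"]')?.getAttribute("data-value");
    const raw = patient.querySelector(`.item[data-name="${variable}"]`)?.getAttribute("data-value");
    if (arm && raw) {
      groups.set(arm, [...(groups.get(arm) ?? []), parseFloat(raw)]);
    }
  });
  return groups;
}

// Welch's two-sample t statistic on two extracted groups (needs >= 2 values per group).
function welchT(a: number[], b: number[]): number {
  const mean = (x: number[]) => x.reduce((s, v) => s + v, 0) / x.length;
  const variance = (x: number[]) => {
    const m = mean(x);
    return x.reduce((s, v) => s + (v - m) ** 2, 0) / (x.length - 1);
  };
  return (mean(a) - mean(b)) / Math.sqrt(variance(a) / a.length + variance(b) / b.length);
}

const byArm = collectByArm(storedRecords, "wbc");   // Map { "A" => [5.2], "B" => [7.8] }
const t = welchT([5.2, 6.1, 4.8], [7.8, 8.4, 7.1]); // illustrative values, not study data
```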


Over time, an exorbitant quantity of data is being generated, which requires a shrewd technique for handling such a large database and for smoothing the data storage and dissemination process. Storing and exploiting such large data volumes requires sufficiently capable systems with a proactive mechanism to meet the accompanying technological challenges. Traditional Distributed File Systems (DFS) struggle to handle dynamic variations and require an undefined settling time. Therefore, to address these large-scale data handling challenges, a proactive grid-based data management approach is proposed, which arranges the data into many small chunks called grids and places them according to the currently available slots. Data durability and computation speed are balanced by designing data dissemination and data eligibility replacement algorithms. This approach substantially enhances data durability as well as access and writing speed. The performance was tested on numerous grid datasets: chunks were analysed over several iterations by fixing the initial chunk statistics, making a predefined chunk suggestion, and then relocating the chunks after a substantial number of iterations. The chunks were found to reside on an optimal node from the first replacement iteration, an improvement of more than 21% of working clusters compared with the traditional approach.
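The chunk-placement idea can be pictured with a small sketch: the data are split into fixed-size chunks and each chunk goes to the node with the most free slots. The greedy placement rule and the node model below are assumptions for illustration only; the paper's actual data dissemination and data eligibility replacement algorithms are not reproduced here.

```typescript
// A node in the working cluster with a fixed number of chunk slots (illustrative model).
interface StorageNode {
  id: string;
  capacity: number;   // total chunk slots on this node
  chunks: string[];   // ids of chunks currently stored
}

// Split a byte stream into small fixed-size chunks ("grids").
function splitIntoChunks(data: Uint8Array, chunkSize: number): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    chunks.push(data.slice(offset, offset + chunkSize));
  }
  return chunks;
}

// Greedy placement: each chunk goes to the node with the most free slots (assumed rule).
function placeChunks(chunkIds: string[], nodes: StorageNode[]): Map<string, string> {
  const placement = new Map<string, string>(); // chunkId -> nodeId
  for (const chunkId of chunkIds) {
    const target = nodes
      .filter(n => n.chunks.length < n.capacity)
      .sort((a, b) => (b.capacity - b.chunks.length) - (a.capacity - a.chunks.length))[0];
    if (!target) throw new Error("no free slots left in the cluster");
    target.chunks.push(chunkId);
    placement.set(chunkId, target.id);
  }
  return placement;
}

// Example: ten chunk ids spread over three nodes with four slots each.
const nodes: StorageNode[] = [
  { id: "n1", capacity: 4, chunks: [] },
  { id: "n2", capacity: 4, chunks: [] },
  { id: "n3", capacity: 4, chunks: [] },
];
const ids = Array.from({ length: 10 }, (_, i) => `chunk-${i}`);
const placement = placeChunks(ids, nodes);
```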


2017, Vol 4 (1), pp. 25-31
Author(s): Diana Effendi

The Information Product approach (IP approach) is an information management approach that can be used to manage product information and to analyse data quality. An IP-Map can be used by organizations to facilitate the management of knowledge in collecting, storing, maintaining, and using data in an organized manner. The data management process for academic activities at X University has not yet used the IP approach. X University has not paid attention to the management of the quality of its information; so far it has concerned itself only with the system applications used to support the automation of data management in its academic activities. The IP-Map produced in this paper can be used as a basis for analysing the quality of data and information. With the IP-Map, X University is expected to know which parts of the process need improvement in data and information quality management. Index terms: IP approach, IP-Map, information quality, data quality.


2021, Vol 11 (15), pp. 6881
Author(s): Calvin Chung Wai Keung, Jung In Kim, Qiao Min Ong

Virtual reality (VR) is quickly becoming the medium of choice for various architecture, engineering, and construction applications, such as design visualization, construction planning, and safety training. In particular, this technology offers an immersive experience that enhances the way architects review their design with team members. Traditionally, VR has used a desktop PC or workstation setup inside a room, yielding the risk that two users bump into each other while using multiuser VR (MUVR) applications. MUVR offers shared experiences that break from the conventional single-user VR setup: multiple users can communicate and interact in the same virtual space, providing more realistic scenarios for architects in the design stage. However, this shared virtual environment introduces challenges regarding limited human locomotion and interaction, due to the physical constraints of normal room spaces. This study therefore presents a system framework that integrates MUVR applications with omnidirectional treadmills. The treadmills give users an immersive walking experience in the simulated environment, without space constraints or risk of injury. A prototype was set up and tested in several scenarios by practitioners and students. The validated MUVR treadmill system aims to promote high-level immersion in architectural design review and collaboration.


2016, Vol 108 (1), pp. 441-455
Author(s): Cinzia Daraio, Maurizio Lenzerini, Claudio Leporelli, Paolo Naggar, Andrea Bonaccorsi, ...

GigaScience, 2020, Vol 9 (10)
Author(s): Daniel Arend, Patrick König, Astrid Junker, Uwe Scholz, Matthias Lange

Abstract Background The FAIR data principle as a commitment to support long-term research data management is widely accepted in the scientific community. Although the ELIXIR Core Data Resources and other established infrastructures provide comprehensive and long-term stable services and platforms for FAIR data management, a large quantity of research data is still hidden or at risk of getting lost. Currently, high-throughput plant genomics and phenomics technologies are producing research data in abundance, the storage of which is not covered by established core databases. This concerns the data volume, e.g., time series of images or high-resolution hyperspectral data; the quality of data formatting and annotation, e.g., with regard to the structure and annotation specifications of core databases; uncovered data domains; or organizational constraints prohibiting primary data storage outside institutional boundaries. Results To share these potentially dark data in a FAIR way and to master these challenges, the ELIXIR Germany/de.NBI service Plant Genomics and Phenomics Research Data Repository (PGP) implements a “bring the infrastructure to the data” approach, which allows research data to be kept in place and wrapped in a FAIR-aware software infrastructure. This article presents new features of the e!DAL infrastructure software and the PGP repository as a best practice on how to easily set up FAIR-compliant and intuitive research data services. Furthermore, the integration of the ELIXIR Authentication and Authorization Infrastructure (AAI) and data discovery services is introduced as a means to lower technical barriers and to increase the visibility of research data. Conclusion The e!DAL software has matured into a powerful and FAIR-compliant infrastructure, while keeping the focus on flexible setup and integration into existing infrastructures and into the daily research process.
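The “bring the infrastructure to the data” approach can be pictured as a FAIR-aware metadata record that points at files kept in place on institutional storage. The record below is a hedged sketch using DataCite-style field names; it is not e!DAL's actual schema or API, and every identifier, title, and URL is a placeholder.

```typescript
// Illustrative FAIR-style dataset descriptor: the primary files stay where they were
// produced, and only this record (plus access rules) is published for discovery.
interface DatasetRecord {
  identifier: string;   // persistent identifier minted for the dataset (e.g. a DOI)
  title: string;
  creators: string[];
  license: string;
  formats: string[];    // technical formats of the primary files
  accessUrl: string;    // where the in-place data can be retrieved
  keywords: string[];   // discovery metadata harvested by search services
}

const record: DatasetRecord = {
  identifier: "doi:10.xxxx/example-phenomics-dataset",            // placeholder, not a real DOI
  title: "Example high-throughput plant phenomics image series",  // hypothetical dataset
  creators: ["Example Researcher"],
  license: "CC BY 4.0",
  formats: ["image/tiff", "text/csv"],
  accessUrl: "https://data.example-institute.example/phenomics/run-42/", // hypothetical location
  keywords: ["plant phenomics", "hyperspectral imaging", "FAIR"],
};
```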


2011, Vol 8 (2), pp. 85-94
Author(s): Hendrik Mehlhorn, Falk Schreiber

Summary DBE2 is an information system for the management of biological experiment data from different data domains in a unified and simple way. It provides persistent data storage, worldwide accessibility of the data, and the opportunity to load, save, modify, and annotate the data. It is seamlessly integrated into the VANTED system as an add-on, thereby extending the VANTED platform towards data management. DBE2 also utilizes controlled vocabulary from the Ontology Lookup Service to allow the management of terms such as substance names, species names, and measurement units, aiming at eased data integration.
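As an illustration of how controlled-vocabulary terms can be resolved, the sketch below queries the public Ontology Lookup Service search endpoint for a term such as a substance name. The endpoint URL, query parameters, and response fields are assumptions based on the public OLS REST API and may differ from the calls DBE2 actually makes.

```typescript
// Subset of the OLS search response we care about (assumed shape, not guaranteed).
interface OlsSearchResponse {
  response: {
    docs: Array<{ label: string; obo_id?: string; ontology_name: string }>;
  };
}

// Look up a term against the public OLS search endpoint and print the top hits.
async function lookupTerm(query: string): Promise<void> {
  const url = `https://www.ebi.ac.uk/ols4/api/search?q=${encodeURIComponent(query)}&rows=5`;
  const res = await fetch(url);
  const body = (await res.json()) as OlsSearchResponse;
  for (const doc of body.response.docs) {
    // e.g. "glucose  CHEBI:17234  (chebi)"
    console.log(`${doc.label}  ${doc.obo_id ?? "-"}  (${doc.ontology_name})`);
  }
}

void lookupTerm("glucose");
```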


2021
Author(s): Vidette Louise McGregor

Squid fisheries require a different management approach from that used for most fish species, which are much longer lived. Most squid live for around one year, spawn, and then die. The result is an entirely new stock each year, with little or no relationship between stock sizes from year to year. Hence, it is difficult to set appropriate catch limits prior to the season. Currently, nothing is set up for modelling the New Zealand squid fishery in-season or post-season. In-season management would allow catch limits to be adjusted during a season. Post-season management would provide information on how much the stock was exploited during a season (described as the escapement). I have produced an integrated model using ADMB (Automatic Differentiation Model Builder) (Fournier et al., 2011) which models length frequency data, CPUE (Catch Per Unit Effort) indices, and catch weights from a season. It calculates escapement, which indicates how much the fishery is currently being exploited. In running the model against data from four area and year combinations, I found the escapement calculation to be stable. The results suggest this modelling approach could be used with the data currently collected for post-season modelling of the fishery. I am less confident about in-season modelling with the data currently collected. The integrated model fits the CPUE data quite poorly, suggesting some discrepancy either between the data or in the assumptions made about them. Sampling from a greater number of tows is recommended to improve the length frequency data, and this may also improve the ability of the model to fit both these and the CPUE data.
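The thesis's integrated ADMB model is not reproduced here, but the notion of escapement can be illustrated with a deliberately simplified sketch: if fishing and natural mortality are assumed constant within the season, the fraction of the stock surviving relative to an unfished season reduces to the survival from fishing mortality alone. This definition and the numbers below are assumptions for illustration, not results from the model.

```typescript
// Escapement under constant within-season mortality rates (illustrative definition):
//   survivors with fishing:    N0 * exp(-(F + M) * T)
//   survivors without fishing: N0 * exp(-M * T)
//   escapement (ratio):        exp(-F * T)   -- natural mortality M cancels out
function escapement(fishingMortality: number, seasonLengthYears: number): number {
  return Math.exp(-fishingMortality * seasonLengthYears);
}

// Example: a fishing mortality rate of F = 0.6 per year over a half-year season
// leaves roughly 74% of the stock to spawn under these simplifying assumptions.
const e = escapement(0.6, 0.5); // ≈ 0.741
```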



