Using Unmanned Surface Vehicles for Harbor Security and Disaster Mitigation and Relief: Special Topic 5: Best Practices in Sensor Design and Use, Systems Operations and Data Management

Author(s):  
Stephen Ferretti ◽  
Neil Zerbe
Neuroforum ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Michael Denker ◽  
Sonja Grün ◽  
Thomas Wachtler ◽  
Hansjörg Scherberger

Abstract Preparing a neurophysiological data set for sharing and publication is hard. Many of the available tools and services for providing a smooth data-publication workflow are still maturing and not well integrated. Moreover, best practices and concrete examples of how to create a rigorous and complete package for an electrophysiology experiment are still lacking. Given the heterogeneity of the field, such unifying guidelines and processes can only be formulated as a community effort. One of the goals of the NFDI-Neuro consortium initiative is to build such a community for systems and behavioral neuroscience. NFDI-Neuro aims to address the community's needs, to make data management easier, and to tackle these challenges in collaboration with various international initiatives (e.g., INCF, EBRAINS). This will give scientists the opportunity to spend more time analyzing the wealth of electrophysiological data they collect, rather than dealing with data formats and data integrity.


2018 ◽  
Vol 72 (3) ◽  
pp. 332-337
Author(s):  
Deb Autor ◽  
Zena Kaufman ◽  
Ron Tetzlaff ◽  
Maryann Gribbin ◽  
Madlene Dole ◽  
...  

2020 ◽  
Author(s):  
Paolo Oliveri ◽  
Simona Simoncelli ◽  
Pierluigi Di Pietro ◽  
Sara Durante

<p>One of the main present and future challenges in ocean observation is to establish best practices for data management: infrastructures like Copernicus and SeaDataCloud already take responsibility for assembling, archiving, updating and publishing data. Here we present the strengths and weaknesses of the SeaDataCloud Temperature and Salinity time series data collections, and in particular a tool able to recognize the different devices and platforms and to merge them with processed Copernicus platforms.</p><p>While Copernicus's main target is to acquire and publish data quickly, SeaDataNet aims to publish data with the best quality available. These two data repositories should be considered together, since an originator can ingest data into both infrastructures, into only one, or partially into both. As a result, data are sometimes only partially available in Copernicus or SeaDataCloud, with great impact on researchers who want to access as much data as possible. The burden of reprocessing should not be placed on researchers' shoulders, since only users skilled in the whole data management plan know how to merge the data.</p><p>The SeaDataCloud time series data collections form a Global Ocean, soon-to-be-published dataset that will be a reference for ocean researchers, released in the binary, user-friendly Ocean Data View format. The database management plan was originally designed for profiles but has been adapted for time series, resolving several issues such as the uniqueness of the identifiers (ID).</p><p>Here we present an extension of the SOURCE (Sea Observations Utility for Reprocessing, Calibration and Evaluation) Python package, able to enhance data quality with redundant, sophisticated methods and to simplify their usage.
</p><p>SOURCE improves quality control (Q/C) of observations using statistical quality-check procedures that follow the ocean best practices guidelines, addressing the following issues:</p><ol><li>Find and aggregate all broken time series using the similarity of their ID parameter strings;</li> <li>Find and organize all the different metadata variables in a dictionary;</li> <li>Convert time series timestamps to simpler measurement units;</li> <li>Filter out devices that lie outside a selected horizontal rectangle;</li> <li>Report the original Q/C scheme applied by the SeaDataCloud infrastructure;</li> <li>Produce information tables on platforms and on merged ID-string duplicates, together with an error log file (missing time, depth or data; wrong Q/C variables; etc.).</li> </ol><p>In particular, the duplicates table and the log file may help SeaDataCloud partners update the data collection and make it finally available to users.</p><p>The reconstructed SeaDataCloud time series data, divided by parameter and stored in a more flexible dataset, can then be ingested into the main part of the software, which compares them with Copernicus time series, finds the same platform using horizontal and vertical surroundings (without relying on the ID), finds and cleans up duplicated data, and merges the two databases to extend the data coverage.</p><p>This allows researchers to obtain the widest and best-quality data possible for the final user release, and to use these data to calibrate and validate models, in order to characterize the sea conditions of a whole area.</p>
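Steps 1 and 4 of the list above can be sketched in a few lines of plain Python. This is an illustrative assumption of how such a pass might look, not the actual SOURCE implementation: the record layout, field names, and the fragment-suffix convention (`_NNN`) are all hypothetical.

```python
# Hypothetical sketch: aggregate broken time series whose IDs differ
# only by a trailing fragment counter, and filter platforms by a
# horizontal (lon/lat) rectangle. Not the real SOURCE API.
from collections import defaultdict

def normalize_id(series_id):
    """Strip a trailing three-digit fragment counter (e.g. '_001') so
    that broken segments of the same platform share one key."""
    base, _, suffix = series_id.rpartition("_")
    return base if base and suffix.isdigit() and len(suffix) == 3 else series_id

def aggregate_fragments(records):
    """Group records by normalized ID and sort each group by time."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize_id(rec["id"])].append(rec)
    for recs in groups.values():
        recs.sort(key=lambda r: r["time"])
    return dict(groups)

def in_rectangle(rec, lon_min, lon_max, lat_min, lat_max):
    """Keep only devices inside the selected horizontal rectangle."""
    return lon_min <= rec["lon"] <= lon_max and lat_min <= rec["lat"] <= lat_max

# Toy records (values invented for illustration only).
records = [
    {"id": "TS_MED_01_001", "time": 2, "lon": 12.5, "lat": 44.1, "temp": 14.2},
    {"id": "TS_MED_01_002", "time": 1, "lon": 12.5, "lat": 44.1, "temp": 14.0},
    {"id": "TS_ATL_07", "time": 1, "lon": -30.0, "lat": 50.0, "temp": 9.8},
]

inside = [r for r in records if in_rectangle(r, 0.0, 20.0, 40.0, 46.0)]
merged = aggregate_fragments(inside)
print(sorted(merged))                             # ['TS_MED_01']
print([r["time"] for r in merged["TS_MED_01"]])   # [1, 2]
```

In practice the duplicate detection described in the abstract also compares positions and depths rather than IDs alone, which is why a merge tool is needed at all.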


2021 ◽  
Author(s):  
Dennis Muiruri ◽  
Lucy Ellen Lwakatare ◽  
Jukka K. Nurminen ◽  
Tommi Mikkonen

<div> <div> <div> <p>The best practices and infrastructures for developing and maintaining machine learning (ML) enabled software systems are typically reported by large, experienced data-driven organizations, while little is known about the state of practice in other organizations. Using interviews, we investigated the practices and tool-chains for ML-enabled systems of 16 organizations in various domains. Our study makes three broad observations related to data management practices, monitoring practices, and automation practices in ML model training and serving workflows. These observations show that only a limited number of generic practices and tools are applicable across organizations in different domains. </p> </div> </div> </div>


10.29173/iq12 ◽  
2017 ◽  
Vol 41 (1-4) ◽  
pp. 12
Author(s):  
Bhojaraju Gunjal ◽  
Panorea Gaitanou

This paper presents a brief overview of several Research Data Management (RDM) issues and a detailed literature review of the RDM practices adopted in libraries globally. It also describes several tendencies in the management of repository tools for research data, as well as the challenges of implementing RDM. Properly planned training and skill development for all stakeholders, with mentors training both staff and users, are among the issues that need to be considered to enhance the RDM process. An effort is also made to present suitable policies and workflows, along with the adoption of best practices in RDM, so as to boost the research process in an organisation. The study showcases the implementation of RDM processes in a Higher Educational Institute of India, referring particularly to the Central Library @ NIT Rourkela in Odisha, India, with a proposed framework. Finally, it also identifies areas of opportunity that can boost research activities in the Institute.


Author(s):  
G. L. Milne

Leaking joints are a main cause of hydrocarbon releases on United Kingdom Continental Shelf (UKCS) offshore sites. The consequential costs of shutdowns and repair can be very high, and there are other significant risks, notably to occupational safety, major-incident safety and the environment. Fundamental to joint integrity is the competence of the personnel involved. Leak data indicate that poor joint make-up is a major cause of leaks, and a review of the causes confirms that current skills and practices do not give leak-free joints. The most important element of a management system is therefore to have competent people working on joints. A competence assurance process should be established in which the level of training, assessment and experience required depends on the potential severity of a release. The result should be that all joints are made up by personnel with an appropriate level of competence. Control of the competence of people working with joints is the most important factor in preventing leaks. There are many ways to influence the integrity of a pipe joint, particularly during design, procurement, fabrication and any intrusive work. A management system should include details of the best practices that are available, with a guide to when and where they should be used, and clarification of tightening methods. Most of these best practices already exist as industry or company documents but may not be used effectively; the management system should improve both their visibility and their use, and ensure the capture and transparency of all specific historical joint data. Each operator should positively and effectively manage the integrity of bolted joints, building on a process of continuous improvement.
The essential elements of such a management system are:
• Ownership: There should be an identified owner of the management system, responsible not only for its implementation and ongoing maintenance but also for communicating its aims and objectives throughout the organisation. The owner should state the expectations for the system and monitor its effectiveness.
• Awareness: Everyone with an influence on joint integrity in the organisation should be aware of the management system, its objectives, its expectations and its effects on day-to-day working. Good awareness needs to be maintained.
• Tools: A set of implementation tools is required to ensure that the expectations can be met. These should include risk assessment, competence management and control of the practices used. These are discussed in more detail later in this document.
• Records and Data Management: The likelihood of a joint being made up successfully increases if historical data exist on the activities carried out in the past. Recording traceable data encourages best practice at the time of the activity, and provides useful planning data for the next time the joint is disturbed.
• Learning: Learning from incidents is important. A management system should include the means for gathering relevant data, which should be collected by operations engineers or technicians and periodically reviewed to establish trends, performance and improvements.
• Measurement: Easily monitored but meaningful performance standards should be put in place at launch to quantify the contribution made by the management system and to evaluate user satisfaction. Examples include:
• the number of recorded leaks during testing and start-up;
• the percentage leak reduction attributable to the use of the management system.
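As a minimal illustration of the last measurement above, the percentage leak reduction could be computed as follows. This is a hypothetical sketch; the function name and counting convention are assumptions, not taken from the source.

```python
def leak_reduction_pct(leaks_before, leaks_after):
    """Percentage leak reduction attributable to the management system,
    comparing leak counts from equivalent periods before and after rollout."""
    if leaks_before == 0:
        raise ValueError("baseline period recorded no leaks")
    return 100.0 * (leaks_before - leaks_after) / leaks_before

# e.g. 20 recorded leaks in the year before rollout, 12 the year after
print(leak_reduction_pct(20, 12))  # 40.0
```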


Author(s):  
Mitchell Wolf ◽  
Geovany Trejos ◽  
Maia Hoeberechts ◽  
Ryan Flagg ◽  
Reyna Jenkyns ◽  
...  
