The Nansen Legacy Data Management Plan

Author(s):  
The Nansen Legacy
2018 ◽  
Author(s):  
Dasapta Erwin Irawan

Here's the official ITB Research Data Management Plan. We use this plan as a template for designing more detailed project-level RDMPs. The document came from the work of the ITB Repository Team, which I lead. Team members: Sparisoma Viridi, Rino Mukti (I will add to this list later). I invite everyone to re-use this document for their own project-level RDMPs.


2021 ◽  
Author(s):  
Michael Russell ◽  
Vincent Paquit ◽  
Luke Scime ◽  
Alka Singh

2019 ◽  
Vol 56 (1) ◽  
pp. 481-485
Author(s):  
Victoria Stodden ◽  
Vicki Ferrini ◽  
Margaret Gabanyi ◽  
Kerstin Lehnert ◽  
John Morton ◽  
...  

2019 ◽  
Vol 42 (8) ◽  
pp. 640-648 ◽  
Author(s):  
Marcy G. Antonio ◽  
Kara Schick-Makaroff ◽  
James M. Doiron ◽  
Laurene Sheilds ◽  
Lacie White ◽  
...  

Data repositories can support secure data management for multi-institutional and geographically dispersed research teams. However, because they are primarily designed to provide secure access, storage, and sharing of quantitative data, limited attention has been given to the unique considerations of data repositories for qualitative research. We share our experiences of using a data repository in a large qualitative nursing research study. Over a 27-month period, the data collected by this 15-member team from 83 participants included photos, audio recordings and transcripts of interviews, and field notes. The data repository supported the secure collection, storage, and management of over 1,800 data files. However, challenges arose during analysis that required negotiations about the structure and processes of the data repository. We discuss the strengths and limitations of data repositories, and introduce practical strategies for developing a data management plan for qualitative research supported by a data repository.


2020 ◽  
Author(s):  
Paolo Oliveri ◽  
Simona Simoncelli ◽  
Pierluigi Di Pietro ◽  
Sara Durante

One of the main challenges for present and future ocean observations is to find best practices for data management: infrastructures like Copernicus and SeaDataCloud already take responsibility for assembling, archiving, updating, and publishing data. Here we present the strengths and weaknesses of the SeaDataCloud Temperature and Salinity time series data collections, and in particular a tool able to recognize the different devices and platforms and to merge them with processed Copernicus platforms.

While Copernicus's main target is to acquire and publish data quickly, SeaDataNet aims to publish data with the best quality available. These two data repositories should be considered together, since an originator can ingest data into both infrastructures, into only one, or partially into both. As a result, data are sometimes only partially available in Copernicus or SeaDataCloud, which greatly affects researchers who want to access as much data as possible. The burden of reprocessing should not fall on researchers' shoulders, since only users skilled in every data management plan know how to merge the data.

The SeaDataCloud time series data collection is a soon-to-be-published Global Ocean dataset that will serve as a reference for ocean researchers, released in the binary, user-friendly Ocean Data View format. The database management plan was originally designed for profiles but has been adapted for time series, resolving several issues such as the uniqueness of identifiers (IDs).

Here we present an extension of the SOURCE (Sea Observations Utility for Reprocessing, Calibration and Evaluation) Python package, able to enhance data quality with redundant, sophisticated methods and to simplify their usage.

SOURCE improves quality control (Q/C) of observations using statistical quality check procedures that follow the ocean best practices guidelines, addressing the following tasks:

1. Find and aggregate all broken time series using likeness in ID parameter strings;
2. Find and organize all the different metadata variables in a dictionary;
3. Convert time series timestamps to simpler measurement units;
4. Filter out devices that lie outside a selected horizontal rectangle;
5. Report the original Q/C scheme applied by the SeaDataCloud infrastructure;
6. Produce information tables on platforms and on merged ID string duplicates, together with an error log file (missing time, depth, or data; wrong Q/C variables; etc.).

In particular, the duplicates table and the log file may help SeaDataCloud partners update the data collection and finally make it available to users.

The reconstructed SeaDataCloud time series data, divided by parameter and stored in a more flexible dataset, can then be ingested into the main part of the software, which compares them with Copernicus time series, finds the same platform using horizontal and vertical surroundings (without relying on the ID), finds and cleans up duplicated data, and merges the two databases to extend the data coverage.

This allows researchers to release the widest and best-quality data possible to final users, and to use these data to calibrate and validate models, in order to build a picture of the sea conditions over a whole area.
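To make steps 1 and 4 of the list concrete, here is a minimal Python sketch of aggregating fragmented time series by ID likeness and filtering devices by a horizontal rectangle. This is not the actual SOURCE implementation; the `Series` record type, the field names, and the "shared prefix before the last underscore" ID convention are all illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical record type; the field names are assumptions,
# not the actual SOURCE data model.
@dataclass
class Series:
    platform_id: str
    lat: float
    lon: float
    times: list   # e.g. days since a reference epoch
    values: list  # e.g. temperature or salinity

def in_rectangle(s, lat_min, lat_max, lon_min, lon_max):
    """Step 4: keep only devices inside the selected horizontal rectangle."""
    return lat_min <= s.lat <= lat_max and lon_min <= s.lon <= lon_max

def aggregate_broken(series_list):
    """Step 1: group fragments of the same time series by likeness of their
    ID strings -- here, simply a shared prefix before the last '_'."""
    groups = {}
    for s in series_list:
        key = s.platform_id.rsplit("_", 1)[0]
        groups.setdefault(key, []).append(s)
    merged = []
    for key, frags in groups.items():
        frags.sort(key=lambda f: f.times[0])  # chronological order
        merged.append(Series(
            platform_id=key,
            lat=frags[0].lat, lon=frags[0].lon,
            times=[t for f in frags for t in f.times],
            values=[v for f in frags for v in f.values],
        ))
    return merged
```

A real implementation would also have to reconcile metadata conflicts between fragments and preserve per-sample Q/C flags; this sketch only shows the grouping and filtering logic.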

