Research Data Services at ETH-Bibliothek

IFLA Journal ◽  
2016 ◽  
Vol 42 (4) ◽  
pp. 284-291 ◽  
Author(s):  
Ana Sesartic ◽  
Matthias Töwe

The management of research data throughout its life-cycle is a key prerequisite for both effective data sharing and efficient long-term preservation of data. This article summarizes the data services and the overall approach to data management as currently practised at ETH-Bibliothek, the main library of ETH Zürich, the largest technical university in Switzerland. The services offered by providers within ETH Zürich cover the entire data life-cycle. The library provides support on conceptual questions and offers training as well as services for data publication and long-term preservation. As research data management plays a steadily more prominent part in the requirements of researchers and funders, as well as in curricula and good scientific practice, ETH-Bibliothek is establishing close collaborations with researchers in order to promote a mutual learning process and tackle new challenges.

IFLA Journal ◽  
2017 ◽  
Vol 43 (1) ◽  
pp. 5-21 ◽  
Author(s):  
Pierre-Yves Burgi ◽  
Eliane Blumer ◽  
Basma Makhlouf-Shabou

In this article, the authors report on an ongoing data life cycle management national project realized in Switzerland, with a major focus on long-term preservation. Based on an extensive document analysis as well as semi-structured interviews, the project aims at providing national services to respond to the most relevant researchers’ data life cycle management needs, which include: guidelines for establishing a data management plan, active data management solutions, long-term preservation storage options, training, and a single point of access and contact to get support. In addition to presenting the different working axes of the project, the authors describe a strategic management and lean startup template for developing new business models, which is key for building viable services.


2017 ◽  
Vol 78 (5) ◽  
pp. 274 ◽  
Author(s):  
Sarah Barbrow ◽  
Denise Brush ◽  
Julie Goldman

Research in many academic fields today generates large amounts of data. These data must not only be processed and analyzed by the researchers, but also managed throughout the data life cycle. Recently, some academic libraries have begun to offer research data management (RDM) services to their communities. Often, this service starts with helping faculty write data management plans, now required by many federal granting agencies. Libraries with more developed services may work with researchers as they decide how to archive and share data once the grant work is complete.


2016 ◽  
Vol 65 (4/5) ◽  
pp. 226-241 ◽  
Author(s):  
Dimple Patel

Purpose: Research data management (RDM) is gaining considerable momentum at present, and rightly so: research data are the core of any research study, and the findings and conclusions of a study depend entirely on them. Traditional publishing did not focus on presenting data alongside publications such as research monographs and especially journal articles, probably because of the difficulties involved in managing research data sets. Current technology, however, has made this task easier. The purpose of this paper is to present a conceptual framework for managing research data at the institutional level. Design/methodology/approach: This paper discusses the significance and advantages of sharing research data. In the spirit of open access to publications, freeing research data and making them openly available with minimal restrictions will help not only in furthering research and development but also in avoiding duplication of effort. The issues and challenges involved in RDM at the institutional level are discussed. Findings: A conceptual framework for RDM at the institutional level is presented. A model for a National Repository of Open Research Data (NRORD) is also proposed, and the workflow of the functioning of NRORD is presented. Originality/value: The framework presents the workflow of the data life-cycle in its various phases, from creation through storage and organization to sharing. It also addresses crucial issues in RDM such as data privacy, data security, copyright and licensing. The framework may help institutions manage the research data life-cycle more efficiently and effectively.
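Purely as an illustration of such a life-cycle workflow (this sketch is not part of the paper; all class and field names are hypothetical), a minimal Python model of a dataset record moving through the phases, with a simple privacy check before sharing, could look like this:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class LifecyclePhase(Enum):
    """Phases of the research data life-cycle as named in the framework."""
    CREATION = auto()
    STORAGE = auto()
    ORGANIZATION = auto()
    SHARING = auto()


@dataclass
class DatasetRecord:
    """Minimal institutional record for one dataset (illustrative, not NRORD)."""
    title: str
    creator: str
    licence: str                    # e.g. "CC-BY-4.0"; governs reuse conditions
    contains_personal_data: bool    # triggers the privacy check before sharing
    phase: LifecyclePhase = LifecyclePhase.CREATION
    access_restrictions: list = field(default_factory=list)

    def advance(self) -> None:
        """Move to the next phase; block sharing of personal data unless an
        access restriction has been recorded."""
        order = list(LifecyclePhase)
        nxt = order[min(order.index(self.phase) + 1, len(order) - 1)]
        if (nxt is LifecyclePhase.SHARING and self.contains_personal_data
                and not self.access_restrictions):
            raise ValueError("Define access restrictions before sharing.")
        self.phase = nxt


record = DatasetRecord("Survey responses 2016", "Example researcher",
                       licence="CC-BY-4.0", contains_personal_data=True)
record.access_restrictions.append("anonymise before release")
for _ in range(3):
    record.advance()
print(record.phase)  # LifecyclePhase.SHARING
```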


2019 ◽  
Vol 49 (2-3) ◽  
pp. 108-116 ◽  
Author(s):  
Michelle A Krahe ◽  
Julie Toohey ◽  
Malcolm Wolski ◽  
Paul A Scuffham ◽  
Sheena Reilly

Background: Building or acquiring research data management (RDM) capacity is a major challenge for health and medical researchers and academic institutes alike. Considering that RDM practices influence the integrity and longevity of data, targeting RDM services and support in recognition of needs is especially valuable in health and medical research. Objective: This project sought to examine the current RDM practices of health and medical researchers from an academic institution in Australia. Method: A cross-sectional survey was used to collect information from a convenience sample of 81 members of a research institute (68 academic staff and 13 postgraduate students). A survey was constructed to assess selected data management tasks associated with the earlier stages of the research data life cycle. Results: Our study indicates that RDM tasks associated with creating, processing and analysis of data vary greatly among researchers and are likely influenced by their level of research experience and RDM practices within their immediate teams. Conclusion: Evaluating the data management practices of health and medical researchers, contextualised by tasks associated with the research data life cycle, is an effective way of shaping RDM services and support in this group. Implications: This study recognises that institutional strategies targeted at tasks associated with the creation, processing and analysis of data will strengthen researcher capacity, instil good research practice and, over time, improve health informatics and research data quality.


2020 ◽  
Author(s):  
Ivonne Anders ◽  
Andrea Lammert ◽  
Karsten Peters

In 2019, Universität Hamburg was awarded funding for four clusters of excellence in the Excellence Strategy of the Federal and State Governments. One of these clusters, funded by the German Research Foundation (DFG), is “CliCCS – Climate, Climatic Change, and Society”. The scientific objectives of CliCCS are pursued within three intertwined research themes: Sensitivity and Variability in the Climate System, Climate-Related Dynamics of Social Systems, and Sustainable Adaptation Scenarios. Each theme is structured into multiple projects addressing sub-objectives of that theme. More than 200 researchers from Universität Hamburg, as well as from connected research centres and partner institutions, are involved, and almost all of them use, and above all produce, new data.

Research data is produced with great effort and is therefore one of the valuable assets of scientific institutions. It is part of good scientific practice to make research data freely accessible and available in the long term as a transparent basis for scientific statements.

Within the interdisciplinary cluster of excellence CliCCS, the types of research data are very diverse. The data range from results of physical, dynamical ocean and atmosphere models, to measurement data in the coastal area, to survey and interview data in the field of sociology.

The German Climate Computing Center (DKRZ) takes care of the research data management and supports the researchers in creating data management plans, adhering to naming conventions, or simply finding the optimal repository in which to publish the data. The goal is not only to store and archive the data in the long term, but also to ensure the quality of the data and thus facilitate potential reuse.
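As an illustration of the kind of support described above (not part of the original abstract), the following Python sketch validates data file names against a simple, hypothetical project naming convention; the pattern is an assumption, not an actual CliCCS or DKRZ convention:

```python
import re
from typing import Optional

# Hypothetical convention: <project>_<theme>_<variable>_<YYYYMM>_v<version>.nc
# e.g. cliccs_a1_tas_201901_v1.nc  (purely illustrative, not a DKRZ standard)
NAME_PATTERN = re.compile(
    r"^(?P<project>[a-z0-9]+)_"
    r"(?P<theme>[a-z][0-9])_"
    r"(?P<variable>[a-z0-9]+)_"
    r"(?P<yyyymm>\d{6})_"
    r"v(?P<version>\d+)\.nc$"
)


def check_filename(name: str) -> Optional[dict]:
    """Return the parsed name components, or None if the convention is violated."""
    match = NAME_PATTERN.match(name)
    return match.groupdict() if match else None


for candidate in ["cliccs_a1_tas_201901_v1.nc", "temperature-final.nc"]:
    parts = check_filename(candidate)
    print(candidate, "->", parts if parts else "violates naming convention")
```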


GigaScience ◽  
2020 ◽  
Vol 9 (10) ◽  
Author(s):  
Daniel Arend ◽  
Patrick König ◽  
Astrid Junker ◽  
Uwe Scholz ◽  
Matthias Lange

Background: The FAIR data principle as a commitment to support long-term research data management is widely accepted in the scientific community. Although the ELIXIR Core Data Resources and other established infrastructures provide comprehensive and long-term stable services and platforms for FAIR data management, a large quantity of research data is still hidden or at risk of getting lost. Currently, high-throughput plant genomics and phenomics technologies are producing research data in abundance, the storage of which is not covered by established core databases. This concerns the data volume, e.g. time series of images or high-resolution hyperspectral data; the quality of data formatting and annotation, e.g. with regard to the structure and annotation specifications of core databases; uncovered data domains; or organizational constraints prohibiting primary data storage outside institutional boundaries. Results: To share these potentially dark data in a FAIR way and to master these challenges, the ELIXIR Germany/de.NBI service Plant Genomics and Phenomics Research Data Repository (PGP) implements a “bring the infrastructure to the data” approach, which allows research data to be kept in place and wrapped in a FAIR-aware software infrastructure. This article presents new features of the e!DAL infrastructure software and the PGP repository as a best practice for easily setting up FAIR-compliant and intuitive research data services. Furthermore, the integration of the ELIXIR Authentication and Authorization Infrastructure (AAI) and data discovery services are introduced as means to lower technical barriers and to increase the visibility of research data. Conclusion: The e!DAL software has matured into a powerful and FAIR-compliant infrastructure, while keeping the focus on flexible setup and integration into existing infrastructures and into the daily research process.
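To make the “bring the infrastructure to the data” idea more concrete, here is a generic Python sketch (it does not use the actual e!DAL API, which is Java-based, and all field names are illustrative) that leaves a data file in place and writes a minimal metadata sidecar with a fixity checksum next to it, so that a discovery service could index it:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def register_in_place(data_file: Path, title: str, creators: list,
                      licence: str = "CC-BY-4.0") -> Path:
    """Leave the data file where it is and write a metadata sidecar next to it.

    The sidecar fields loosely follow common dataset-metadata elements; they
    are illustrative and not the e!DAL or PGP schema.
    """
    checksum = hashlib.sha256(data_file.read_bytes()).hexdigest()
    record = {
        "title": title,
        "creators": creators,
        "licence": licence,
        "file": data_file.name,
        "sha256": checksum,                       # fixity for long-term preservation
        "registered": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = data_file.parent / (data_file.name + ".metadata.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar


# Example: wrap a local phenomics image without moving it out of the institute.
# sidecar = register_in_place(Path("plot_42_rgb.tif"), "Plot 42 RGB time point",
#                             creators=["Example Lab"])
```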


Author(s):  
Johannes Hubert Stigler ◽  
Elisabeth Steiner

Research data repositories and data centres are becoming more and more important as infrastructures in academic research. The article introduces the humanities research data repository GAMS, ranging from its system architecture to its preservation and content policies. Challenges of data centres and repositories, and the general and domain-specific approaches and solutions, are outlined. Special emphasis lies on the sustainability and long-term perspective of such infrastructures, not only on the technical but above all on the organisational and financial level.


2021 ◽  
Vol 16 (1) ◽  
pp. 11
Author(s):  
Klaus Rechert ◽  
Jurek Oberhauser ◽  
Rafael Gieschke

Software, and in particular source code, has become an important component of scientific publications and is henceforth a subject of research data management. Maintaining source code such that it remains a usable and valuable scientific contribution is, and will remain, a huge task. Not all code contributions can be actively maintained forever; eventually, there will be a significant backlog of legacy source code. In this article we analyse the requirements for applying the concept of long-term reusability to source code. We use a simple case study to identify gaps and provide a technical infrastructure based on emulation to support automated builds of historic software in the form of source code.
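A rough, hypothetical sketch of how such emulator-backed builds might be orchestrated is shown below; the article does not prescribe this code, and run-in-guest as well as the image names are placeholders for whatever mechanism boots the preserved environment and executes commands in the guest:

```python
import subprocess
from pathlib import Path

# Hypothetical mapping from a source package's era/toolchain to a preserved
# emulator disk image containing a period-appropriate build environment.
ENVIRONMENTS = {
    "gcc-2.95/linux-2.2": Path("images/legacy-linux-i386.qcow2"),
    "msvc-6/winnt4": Path("images/winnt4-msvc6.qcow2"),
}


def build_in_emulated_env(source_dir: Path, toolchain: str,
                          build_cmd: str = "./configure && make") -> int:
    """Run a legacy build inside an emulated environment.

    `run-in-guest` is a placeholder for whatever tooling injects the source
    tree and executes a command in the booted guest (e.g. via a serial console
    or an in-guest agent); it is not a real tool shipped with an emulator.
    """
    image = ENVIRONMENTS[toolchain]
    result = subprocess.run(
        ["run-in-guest", "--image", str(image), "--share", str(source_dir),
         "--command", build_cmd],
        capture_output=True, text=True,
    )
    # Preserve the build log alongside the sources for provenance.
    (source_dir / "build.log").write_text(result.stdout + result.stderr)
    return result.returncode
```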


2012 ◽  
Vol 7 (2) ◽  
pp. 64-67 ◽  
Author(s):  
Neil Beagrie ◽  
Monica Duke ◽  
Catherine Hardman ◽  
Dipak Kalra ◽  
Brian Lavoie ◽  
...  

This paper provides an overview of the KRDS Benefit Analysis Toolkit. The Toolkit has been developed to assist curation activities by assessing the benefits associated with the long-term preservation of research data. It builds on the outputs of the Keeping Research Data Safe (KRDS) research projects and consists of two tools: the KRDS Benefits Framework, and the Value-chain and Benefits Impact tool. Each tool consists of a more detailed guide and worksheet(s). Both tools have drawn on partner case studies and previous work on benefits and impact for digital curation and preservation. This experience has provided a series of common examples of generic benefits that are employed in both tools for users to modify or add to as required.


Author(s):  
Susanne Blumesberger ◽  
Nikos Gänsdorfer ◽  
Raman Ganguly ◽  
Eva Gergely ◽  
Alexander Gruber ◽  
...  

This article gives an overview of the FAIR Data Austria project objectives and current results. In collaboration with our project partners, we work on the development and establishment of tools for managing the lifecycle of research data, including machine-actionable Data Management Plans (maDMPs), repositories for long-term archiving of research results, RDM training and support services, models and profiles for Data Stewards, and the FAIR Office Austria.
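As a loose illustration of what a machine-actionable DMP can look like (this example is not from the project; the field selection loosely follows the RDA DMP Common Standard and all values are invented), a minimal record might be built and serialized like this:

```python
import json

# Minimal machine-actionable DMP record; field names loosely follow the
# RDA DMP Common Standard, values are purely illustrative.
madmp = {
    "dmp": {
        "title": "Example project data management plan",
        "created": "2021-03-01T09:00:00Z",
        "modified": "2021-03-01T09:00:00Z",
        "dmp_id": {"identifier": "https://example.org/dmp/1", "type": "url"},
        "contact": {"name": "Data Steward", "mbox": "steward@example.org"},
        "dataset": [
            {
                "title": "Survey results",
                "personal_data": "no",
                "sensitive_data": "no",
                "distribution": [
                    {
                        "title": "Archived copy",
                        "data_access": "open",
                        "license": [
                            {
                                "license_ref": "https://creativecommons.org/licenses/by/4.0/",
                                "start_date": "2021-06-01",
                            }
                        ],
                    }
                ],
            }
        ],
    }
}

print(json.dumps(madmp, indent=2))  # ready to hand to a repository or DMP tool
```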

