Introduction to the Special JeSLIB Issue on Data Curation in Practice

2021 ◽  
Vol 10 (3) ◽  
Author(s):  
Cynthia Hudson Vitale ◽  
Jake R. Carlson ◽  
Hannah Hadley ◽  
Lisa Johnston

Research data curation is a set of scientific communication processes and activities that support the ethical reuse of research data and uphold research integrity. Data curators act as key collaborators with researchers, enriching the scholarly value and potential impact of their data by preparing it to be shared with others and preserved for the long term. This special issue focuses on practical data curation workflows and tools that have been developed and implemented within data repositories, scholarly societies, research projects, and academic institutions.

2009 ◽  
Vol 4 (2) ◽  
pp. 12-27 ◽  
Author(s):  
Karen S. Baker ◽  
Lynn Yarmey

Scientific researchers today frequently package measurements and associated metadata as digital datasets in anticipation of storage in data repositories. Through the lens of environmental data stewardship, we consider the data repository as an organizational element central to data curation. One aspect of non-commercial repositories, their distance-from-origin of the data, is explored in terms of near and remote categories. Three idealized repository types are distinguished – local, center, and archive – paralleling research, resource, and reference collection categories, respectively. Repository type characteristics such as scope, structure, and goals are discussed. Repository similarities in terms of roles, activities, and responsibilities are also examined. Data stewardship is related to the care of research data and responsible scientific communication supported by an infrastructure that coordinates curation activities; data curation is defined as a set of repeated and repeatable activities focusing on tending data and creating data products within a particular arena. The concept of “sphere-of-context” is introduced as an aid to distinguishing repository types. Conceptualizing a “web-of-repositories” accommodates a variety of repository types and represents an ecologically inclusive approach to data curation.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Lisa-Marie Ohle ◽  
David Ellenberger ◽  
Peter Flachenecker ◽  
Tim Friede ◽  
Judith Haas ◽  
...  

In 2001, the German Multiple Sclerosis Society, facing a lack of data, founded the German MS Registry (GMSR) as a long-term data repository for MS healthcare research. By establishing a network of participating neurological centres from different healthcare sectors across Germany, the GMSR provides observational real-world data on long-term disease progression, sociodemographic factors, treatment, and the healthcare status of people with MS. This paper aims to illustrate the framework of the GMSR. Its structure, design, and data quality processes, as well as its collaborations, are presented, and the registry’s dataset, status, and results are discussed. As of 8 January 2021, 187 centres from different healthcare sectors participate in the GMSR. Following upgrades to its infrastructure and dataset specification in 2014, more than 196,000 visits have been recorded, relating to more than 33,000 persons with MS (PwMS). The GMSR enables monitoring of PwMS in Germany, supports scientific research projects, and collaborates with national and international MS data repositories and initiatives. With its recent pharmacovigilance extension, it aligns with EMA recommendations and helps to ensure early detection of therapy-related safety signals.


Author(s):  
Johannes Hubert Stigler ◽  
Elisabeth Steiner

Research data repositories and data centres are becoming more and more important as infrastructures in academic research. The article introduces the Humanities’ research data repository GAMS, covering its system architecture, preservation policy, and content policy. Challenges of data centres and repositories are outlined, along with general and domain-specific approaches and solutions. Special emphasis lies on the sustainability and long-term perspective of such infrastructures, not only on the technical but above all on the organisational and financial level.


2012 ◽  
Vol 7 (2) ◽  
pp. 64-67 ◽  
Author(s):  
Neil Beagrie ◽  
Monica Duke ◽  
Catherine Hardman ◽  
Dipak Kalra ◽  
Brian Lavoie ◽  
...  

This paper provides an overview of the KRDS Benefit Analysis Toolkit. The Toolkit has been developed to assist curation activities by assessing the benefits associated with the long-term preservation of research data. It builds on the outputs of the Keeping Research Data Safe (KRDS) research projects and consists of two tools: the KRDS Benefits Framework, and the Value-chain and Benefits Impact tool. Each tool consists of a more detailed guide and worksheet(s). Both tools have drawn on partner case studies and previous work on benefits and impact for digital curation and preservation. This experience has provided a series of common examples of generic benefits that are employed in both tools for users to modify or add to as required.


Author(s):  
Majed S. Allehaibi

The article presents the arguments concerning tenure in academic institutions. Proponents of tenure argue that it protects professors from social sanctions such as criticism by political or religious powers outside campus that may disagree with the professor’s research findings and thus might pressure the institution to fire him or her. Opponents of tenure argue that the security that comes with tenure allows professors to become incompetent and slothful. After assessing the advantages and disadvantages of tenure, this article concludes that tenure could be an incentive attracting competent faculty members and allowing them to embark on long-term, risky research projects.


2015 ◽  
Vol 11 (4) ◽  
Author(s):  
Grzegorz M. Wójcik ◽  
Piotr Wierzgała ◽  
Anna Gajos

Electroencephalography (EEG) has become more popular, and as a result, the market is growing with new EEG products. These new EEG solutions offer higher mobility, easier application, and lower prices. One such device that recently became popular is the Emotiv EEG. It has already been tested in various applications concerning brain-computer interfaces, neuromarketing, language processing, and detection of the P300 component, with the general result that it is capable of recording satisfactory research data. However, no one has tested and described its usefulness in long-term research. This article presents experience from using the Emotiv EEG in two research projects that involved 39 subjects over 22 sessions. The Emotiv EEG has significant technical issues concerning the quality of its screw threads; two complete and successful solutions to this problem are described.


2021 ◽  
Vol 10 (3) ◽  
Author(s):  
Helenmary Sheridan ◽  
Anthony J. Dellureficio ◽  
Melissa A. Ratajeski ◽  
Sara Mannheimer ◽  
Terrie R. Wheeler

Institutional data repositories are the acknowledged gold standard for data curation platforms in academic libraries. But not every institution can sustain a repository, and not every dataset can be archived due to legal, ethical, or authorial constraints. Data catalogs—metadata-only indices of research data that provide detailed access instructions and conditions for use—are one potential solution, and may be especially suitable for "challenging" datasets. This article presents the strengths of data catalogs for increasing the discoverability and accessibility of research data. The authors argue that data catalogs are a viable alternative or complement to data repositories, and provide examples from their institutions' experiences to show how their data catalogs address specific curatorial requirements. The article also reports on the development of a community of practice for data catalogs and data discovery initiatives.


2014 ◽  
Vol 9 (1) ◽  
pp. 220-230 ◽  
Author(s):  
David Minor ◽  
Matt Critchlow ◽  
Arwen Hutt ◽  
Declan Fleming ◽  
Mary Linn Bergstrom ◽  
...  

In the spring of 2011, the UC San Diego Research Cyberinfrastructure (RCI) Implementation Team invited researchers and research teams to participate in a research curation and data management pilot program. This invitation took the form of a campus-wide solicitation. More than two dozen applications were received and, after due deliberation, the RCI Oversight Committee selected five curation-intensive projects. These projects were chosen based on a number of criteria, including how they represented campus research, varieties of topics, researcher engagement, and the various services required. The pilot process began in September 2011, and will be completed in early 2014. Extensive lessons learned from the pilots are being compiled and are being used in the on-going design and implementation of the permanent Research Data Curation Program in the UC San Diego Library. In this paper, we present specific implementation details of these various services, as well as lessons learned. The program focused on many aspects of contemporary scholarship, including data creation and storage, description and metadata creation, citation and publication, and long term preservation and access. Based on the lessons learned in our processes, the Research Data Curation Program will provide a suite of services from which campus users can pick and choose, as necessary. The program will provide support for the data management requirements from national funding agencies.


Author(s):  
Hagen Peukert

Handling heterogeneous data at minimal cost can be perceived as a classic management problem. The approach at hand applies established managerial theorizing to the field of data curation. It is argued, however, that data curation cannot merely be treated as a standard case of applying management theory in a traditional sense. Rather, the practice of curating humanities research data, and the specifications and adjustments of the model suggested here, reveal an intertwined process in which knowledge of both strategic management and solid information technology has to be considered. Thus, two contributions are put forward: suggestions on the strategic positioning of research data, which can be used as an analytical tool to understand the proposed workflow mechanisms, and a definition of workflow modules, which can be flexibly combined when designing new standard workflows to configure research data repositories.


2020 ◽  
Author(s):  
David Peterson ◽  
Aaron Panofsky

Emerging out of the “reproducibility crisis” in science, metascientists have become central players in debates about research integrity, scholarly communication, and science policy. The goal of this article is to introduce metascience to STS scholars, detail the scientific ideology that is apparent in its articles, strategy statements, and research projects, and discuss its institutional and intellectual future. Put simply, metascience is a scientific social movement that seeks to use the tools of science, especially quantification and experimentation, to diagnose problems in research practice and improve efficiency. It draws together data scientists, experimental and statistical methodologists, and open science activists into a project with both intellectual and policy dimensions. Metascientists have been remarkably successful at winning grants, motivating news coverage, and changing policies at science agencies, journals, and universities. Moreover, metascience represents the apotheosis of several trends in research practice, scientific communication, and science governance, including increased attention to methodological and statistical criticism of scientific practice, the promotion of “open science” by science funders and journals, the growing importance of both preprint and data repositories for scientific communication, and the new prominence of data scientists as research makes a turn toward Big Science.
