Data policies and data archives as prerequisites of reproducible published research in economics journals

2014 ◽  
Author(s):  
Sven Vlaeminck

In economics, as in many other branches of the social sciences, collaboratively working on data and sharing data is not yet very common. This is also reflected in the profession's journals, where policies on data management and data sharing currently exist for only a small minority of journals. I would like to introduce the presentation with some empirical results of a survey in which economists working for the project EDaWaX (European Data Watch, a project funded by the German Research Foundation) analysed the data sharing behaviour of 488 US and European applied economists. Subsequently, we give an overview of the data policies of journals in economics and business studies. In the course of the EDaWaX project, the data policies of a sample of more than 300 economics journals have been analysed. The talk suggests guidelines for data policies aiming to foster replication of published research and presents some characteristics of journals equipped with such policies, as well as the status quo in disseminating the research data underlying empirically based articles. Against this analytical background, the talk identifies some challenges associated with the current e-infrastructure for providing publication-related research data through journals, and it shows a technical solution for some of these challenges. In particular, the talk presents a pilot application for a publication-related data archive for scholarly journals in the social sciences, which was developed in the first funding phase of the EDaWaX project. The aim of this open source tool is to empower editors of scholarly journals to easily manage research data for the empirically based articles in their journals.
The application mainly targets open research data but is also capable of interlinking data and publications even in the case of confidential or proprietary data. In conclusion, the talk outlines the further development of our application and sketches other tasks of the project's second funding phase. More information on the project is available at www.edawax.de

2015 ◽  
Author(s):  
Peter Weiland ◽  
Ina Dehnhard

The benefits of making research data permanently accessible through data archives are widely recognized: costs can be reduced by reusing existing data, research results can be compared and validated against results from archived studies, fraud can be detected more easily, and meta-analyses can be conducted. Apart from that, authors may gain recognition and reputation for producing the datasets. Since 2003, the accredited research data center PsychData (part of the Leibniz Institute for Psychology Information in Trier, Germany) has documented and archived research data from all areas of psychology and related fields. In the beginning, the main focus was on datasets with a high potential for reuse, e.g. longitudinal studies, large-scale cross-sectional studies, or studies conducted under historically unique conditions. Presently, more and more journal publishers and project funding agencies require researchers to archive their data and make them accessible to the scientific community; therefore, PsychData also has to serve this need. In this presentation we report on our experiences in operating a discipline-specific research data archive in a domain where data sharing is met with considerable resistance. We will focus on the challenges for data sharing and data reuse in psychology, e.g.:

- the large amount of domain-specific knowledge necessary for data curation
- high costs for documenting the data because of a wide range of non-standardized measures
- small teams and little established infrastructure compared with the "big data" disciplines
- studies in psychology not designed for reuse (in contrast to the social sciences)
- data protection
- resistance to sharing data

At the end of the presentation, we will provide a brief outlook on DataWiz, a new project funded by the German Research Foundation (DFG). In this project, tools will be developed to support researchers in documenting their data during the research phase.


2011 ◽  
Vol 6 (2) ◽  
pp. 209-221 ◽  
Author(s):  
Huda Khan ◽  
Brian Caruso ◽  
Jon Corson-Rikert ◽  
Dianne Dietrich ◽  
Brian Lowe ◽  
...  

In disciplines as varied as medicine, social sciences, and economics, data and their analyses are essential parts of researchers' contributions to their respective fields. While sharing research data for review and analysis presents new opportunities for furthering research, capturing these data in digital forms and providing the digital infrastructure for sharing data and metadata pose several challenges. This paper reviews the motivations behind and design of the Data Staging Repository (DataStaR) platform, which targets specific portions of the research data curation lifecycle: data and metadata capture and sharing prior to publication, and publication to permanent archival repositories. The goal of DataStaR is to support both the sharing and publishing of data while enabling metadata creation without imposing additional overhead on researchers and librarians. Furthermore, DataStaR is intended to provide cross-disciplinary support by integrating different domain-specific metadata schemas according to researchers' needs. DataStaR's strategy of a usable interface coupled with metadata flexibility allows for a more scalable solution for data sharing, publication, and metadata reuse.


2018 ◽  
Vol 106 (2) ◽  
Author(s):  
Kevin B. Read ◽  
Liz Amos ◽  
Lisa M. Federer ◽  
Ayaba Logan ◽  
T. Scott Plutchak ◽  
...  

Providing access to the data underlying research results in published literature allows others to reproduce those results or analyze the data in new ways. Health sciences librarians and information professionals have long been advocates of data sharing. It is time for us to practice what we preach and share the data associated with our published research. This editorial describes the activity of a working group charged with developing a research data sharing policy for the Journal of the Medical Library Association.


2015 ◽  
Vol 4 (1) ◽  
pp. 112-129
Author(s):  
Malla Praveen Bhasa

In the past two decades, corporate governance (CG) literature has grown by leaps and bounds. The quick succession with which some corporate scandals surfaced in the early 2000s, and their extensive media coverage, prodded social science researchers to go back to their drawing boards and examine the reasons for such scandals. Interestingly, corporate behaviour was no longer the exclusive preserve of micro-economists and finance researchers. Instead, researchers from different disciplines such as philosophy, psychology, sociology, and law also joined in examining issues related to what is today popularly known as corporate governance. Each scholar tested hypotheses and offered explanations in a language native to her own discipline. Given the pervasiveness of the social sciences, corporate governance soon began to be explained and understood from an increasingly multi-disciplinary perspective, with each discipline bringing its own unique flavour to picking out and explaining the nuances of corporate governance. With so many disciplines contributing to a single overarching theme, it is no surprise that today there is a surfeit of corporate governance literature, and more continues to be added every single day. This paper reviews the growth and development of CG literature over the past eight decades. In doing so, it studies 1789 published research papers to track how the literature organized itself to build the CG discourse.


2020 ◽  
Vol 44 (1-2) ◽  
pp. 1-2
Author(s):  
Harrison Dekker ◽  
Amy Riegelman

As guest editors, we are excited to publish this special double issue of IASSIST Quarterly. The topics of reproducibility, replicability, and transparency have been addressed in past issues of IASSIST Quarterly and at the IASSIST conference, but this double issue is entirely focused on them. In recent years, efforts "to improve the credibility of science by advancing transparency, reproducibility, rigor, and ethics in research" have gained momentum in the social sciences (Center for Effective Global Action, 2020). While few question the spirit of the reproducibility and research transparency movement, it faces significant challenges because it goes against the grain of established practice. We believe the data services community is in a unique position to help advance this movement, given our data and technical expertise, training and consulting work, international scope, and established role in data management and preservation. As evidence of the movement, several initiatives exist to support research reproducibility infrastructure and data preservation efforts:

- Center for Open Science (COS) / Open Science Framework (OSF)[i]
- Berkeley Initiative for Transparency in the Social Sciences (BITSS)[ii]
- CUrating for REproducibility (CURE)[iii]
- Project TIER[iv]
- Data Curation Network[v]
- UK Reproducibility Network[vi]

While many new initiatives have launched in recent years, we know that the data services community was supporting reproducibility in a variety of ways (e.g., data management, data preservation, metadata standards) in well-established consortia such as the Inter-university Consortium for Political and Social Research (ICPSR) long before "reproducibility crisis" became a commonly used phrase and before Ioannidis published the essay "Why Most Published Research Findings Are False" (Ioannidis, 2005).
The articles in this issue address several important aspects of reproducible research:

- identification of barriers to reproducibility and solutions to those barriers
- evidence synthesis as related to transparent reporting and reproducibility
- reflection on how information professionals, researchers, and librarians perceive the reproducibility crisis and how they can partner to help solve it

The issue begins with "Reproducibility literature analysis," which looks at existing resources and literature to identify barriers to reproducibility and potential solutions. The authors have compiled a comprehensive list of resources with annotations that include definitions of key concepts pertinent to the reproducibility crisis. The next article addresses data reuse from the perspective of a large research university. The authors examine instances of both successful and failed data reuse and identify best practices for librarians interested in conducting research involving the common forms of data collected in an academic library. Systematic reviews are a research approach that involves the quantitative and/or qualitative synthesis of data collected through a comprehensive literature review. "Methods reporting that supports reader confidence for systematic reviews in psychology" looks at the reproducibility of electronic literature searches reported in psychology systematic reviews. A fundamental challenge in reproducing or replicating computational results is the need for researchers to make available the code used to produce those results, but sharing code and getting it to run correctly for another user can present significant technical challenges. In "Reproducibility, preservation, and access to research with ReproZip and ReproServer," the authors describe open source software that they are developing to address these challenges.
Taking a published article and attempting to reproduce its results is an exercise sometimes used in academic courses to highlight the inherent difficulty of the process. The final article in this issue, "ReprohackNL 2019: How libraries can promote research reproducibility through community engagement," describes an innovative library-based variation on this exercise.

Harrison Dekker, Data Librarian, University of Rhode Island
Amy Riegelman, Social Sciences Librarian, University of Minnesota

References
Center for Effective Global Action (2020) About the Berkeley Initiative for Transparency in the Social Sciences. Available at: https://www.bitss.org/about (accessed 23 June 2020).
Ioannidis, J.P. (2005) 'Why most published research findings are false', PLoS Medicine, 2(8), p. e124. doi: https://doi.org/10.1371/journal.pmed.0020124

[i] https://osf.io
[ii] https://www.bitss.org/
[iii] http://cure.web.unc.edu
[iv] https://www.projecttier.org/
[v] https://datacurationnetwork.org/
[vi] https://ukrn.org


2018 ◽  
Vol 4 (2) ◽  
pp. 262-276
Author(s):  
Robert M Hauser

Shared methods, procedures, documentation, and data are essential features of science. This observation is illustrated by autobiographical examples and, far more important, by the history of astronomy, geography, meteorology, and the social sciences. Unfortunately, though sometimes for understandable reasons, data sharing has been less common in psychological and medical research. The China Family Panel Study is an exemplar of contemporary research that has been designed from the outset to create a well-documented body of shared social-scientific data.

