Enforcing public data archiving policies in academic publishing: A study of ecology journals

2019
Vol 6 (1)
pp. 205395171983625
Author(s):  
Dan Sholler
Karthik Ram
Carl Boettiger
Daniel S Katz

To improve the quality and efficiency of research, groups within the scientific community seek to exploit the value of data sharing. Funders, institutions, and specialist organizations are developing and implementing strategies to encourage or mandate data sharing within and across disciplines, with varying degrees of success. Academic journals in ecology and evolution have adopted several types of public data archiving policies requiring authors to make data underlying scholarly manuscripts freely available. The effort to increase data sharing in the sciences is one part of a broader “data revolution” that has prompted discussion about a paradigm shift in scientific research. Yet anecdotes from the community and studies evaluating data availability suggest that these policies have not had the desired effects on either the quantity or the quality of available datasets. We conducted a qualitative, interview-based study with journal editorial staff and other stakeholders in the academic publishing process to examine how journals enforce data archiving policies. We specifically sought to establish who editors and other stakeholders perceive as responsible for ensuring data completeness and quality in the peer review process. Our analysis revealed little consensus with regard to how data archiving policies should be enforced and who should hold authors accountable for dataset submissions. Themes in interviewee responses included hopefulness that reviewers would take the initiative to review datasets and trust in authors to ensure the completeness and quality of their datasets. We highlight problematic aspects of these thematic responses and offer potential starting points for improvement of the public data archiving process.

Publications
2021
Vol 9 (2)
pp. 25
Author(s):  
Brian Jackson

Journal publishers play an important role in the open research data ecosystem. Through open data policies that include public data archiving mandates and data availability statements, journal publishers help promote transparency in research and wider access to a growing scholarly record. The library and information science (LIS) discipline has a unique relationship with both open data initiatives and academic publishing and may be well-positioned to adopt rigorous open data policies. This study examines the information provided on public-facing websites of LIS journals in order to describe the extent, and nature, of open data guidance provided to prospective authors. Open access journals in the discipline have disproportionately adopted detailed, strict open data policies. Commercial publishers, which account for the largest share of publishing in the discipline, have largely adopted weaker policies. Rigorous policies, adopted by a minority of journals, describe the rationale, application, and expectations for open research data, while most journals that provide guidance on the matter use hesitant and vague language. Recommendations are provided for strengthening journal open data policies.


2020
Vol 17
pp. 15-19
Author(s):  
Bishnu Bahadur Khatri

Peer review in scholarly communication and scientific publishing, in one form or another, has always been regarded as crucial to the reputation and reliability of scientific research. Given the growing interest in scholarly research and publication, this paper discusses the peer review process and its different types for the benefit of early career researchers and academics. The paper draws on published and unpublished documents for information collection. It shows that peer review places the reviewer, alongside the author, at the heart of scientific publishing: it is the system used to assess the quality of scientific research before it is published. It concludes that peer review serves to advance and test scientific knowledge, acting as a quality control mechanism for scientists, publishers, and the public.


Author(s):  
Kerina Jones
Sharon Heys
Helen Daniels

Introduction: Many jurisdictions have programmes for the large-scale reuse of health and administrative data that would benefit from greater cross-centre working. The Advancing Cross centre Research Networks (ACoRN) project considered barriers and drivers for joint working and information sharing, using the UK Farr Institute as a case study with wide applicability.
Objectives and Approach: ACoRN collected information from researchers, analysts, academics and the public to gauge the acceptability of sharing data across institutions and jurisdictions. It considered international researcher experiences and evidence from a variety of cross-centre projects to reveal barriers and potential solutions to joint working. It reviewed the legal and regulatory provisions that surround data sharing and cross-centre working, including issues of information governance, to provide the context and backdrop. The emerging issues were grouped into five themes and used to propose a set of recommendations.
Results: The five themes identified were: organisational structures and legal entities; people and culture; information governance; technology and infrastructure; and finance and strategic planning. Recommendations within these included: standardised terms and conditions, including agreements and contractual templates; performance indicators for frequency of dataset sharing; communities of practice and virtual teams to develop cooperation; standardised policies and procedures to underpin data sharing; an accredited quality seal for organisations sharing data; a dashboard for data availability and sharing; and adequate resources to move towards greater uniformity and to drive data sharing initiatives.
Conclusion/Implications: The challenges posed by cross-centre information sharing are considerable, but the public benefits associated with the greater use of health and administrative data are inestimable, particularly as novel and emerging data become increasingly available. The proposed recommendations will assist in achieving the benefits of cross-centre working.


2020
pp. 1-11
Author(s):  
Ruth D. Carlitz
Rachael McLellan

Data availability has long been a challenge for scholars of authoritarian politics. However, the promotion of open government data—through voluntary initiatives such as the Open Government Partnership and soft conditionalities tied to foreign aid—has motivated many of the world’s more closed regimes to produce and publish fine-grained data on public goods provision, taxation, and more. While this has been a boon to scholars of autocracies, we argue that the politics of data production and dissemination in these countries create new challenges. Systematically missing or biased data may jeopardize research integrity and lead to false inferences. We provide evidence of such risks from Tanzania. The example also shows how data manipulation fits into the broader set of strategies that authoritarian leaders use to legitimate and prolong their rule. Comparing data released to the public on local tax revenues with verified internal figures, we find that the public data appear to significantly underestimate opposition performance. This can bias studies on local government capacity and risk parroting the party line in data form. We conclude by providing a framework that researchers can use to anticipate and detect manipulation in newly available data.
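One simple entry point for the kind of manipulation check the authors describe is a paired comparison of publicly released figures against verified internal ones. The sketch below is a hypothetical illustration, not the authors' framework: the data, the function name, and the bootstrap procedure are all assumptions. It estimates the mean public-minus-internal gap and a bootstrap confidence interval; an interval lying entirely below zero is consistent with systematic underreporting in the public release.

```python
import random

def detect_underreporting(public, internal, n_boot=2000, seed=1):
    """Paired comparison of public vs. verified internal figures.
    Returns the mean gap (public minus internal) and a 95% bootstrap
    confidence interval for it; an interval entirely below zero
    suggests the public data systematically understate the truth."""
    diffs = [p - i for p, i in zip(public, internal)]
    mean_gap = sum(diffs) / len(diffs)
    rng = random.Random(seed)
    boots = []
    for _ in range(n_boot):
        # resample the paired differences with replacement
        sample = [rng.choice(diffs) for _ in diffs]
        boots.append(sum(sample) / len(sample))
    boots.sort()
    lo = boots[int(0.025 * n_boot)]
    hi = boots[int(0.975 * n_boot)]
    return mean_gap, (lo, hi)

# Hypothetical district-level figures (e.g., opposition vote or revenue
# shares): the "public" release trims each verified internal figure.
rng = random.Random(0)
internal = [rng.uniform(20, 60) for _ in range(40)]
public = [x - rng.uniform(2, 8) for x in internal]

gap, (lo, hi) = detect_underreporting(public, internal)
print(f"mean gap {gap:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # CI below zero
```

In practice the interesting cases are subtler: the gap may appear only in opposition-held districts, so splitting the comparison by political alignment, as the authors' Tanzania example suggests, is the more informative test.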


2018
Vol 18 (1)
pp. 129-151
Author(s):  
Diane H. Roberts

ABSTRACT This paper explores the contribution of the AAA Public Interest Section academic journal, Accounting and the Public Interest, to socially responsive and responsible accounting scholarship. Contributors, their doctoral-granting schools, institutional affiliation at time of publication, and their research topics in the first 15 volumes were analyzed. Source literature is explored through analysis of references. Citation analysis performed using Google Scholar's advanced search function revealed strong citation of papers published in API, both in terms of numbers of citations and quality of citing journals. Overall the study results indicate API is a high-quality publication and the journal is fulfilling its mission to provide an outlet for innovative research through use of alternative theories and methodologies. Data Availability: Data are available from the public sources cited in the text.


2020
Vol 39 (2)
pp. 117-138
Author(s):  
Jared Eutsler

SUMMARY Existing research has found that the PCAOB inspection results of small (triennially inspected) audit firms provide incremental information about audit quality, but research has not documented a similar finding for large (annually inspected) firms. I examine the generalizability of annually inspected firms' inspection findings to audit quality by investigating the association between account-specific findings and account-specific audit quality while controlling for the PCAOB's risk-based program. First, I create a selection model to approximate the risk-based inspection process. I then use its outputs to control for selection risk while examining the association between revenue-specific deficiencies and the audit quality of revenues. I find that after controlling for selection risk, revenue-specific deficiencies are generalizable to the audit quality of revenues for clients that are more likely to be inspected. These results provide some evidence that the PCAOB's inspection program is meeting its objective of providing relevant feedback to stakeholders about audit quality. Data Availability: Data are available from the public sources described in this text.


2019
Vol 12 (6)
pp. 2215-2225
Author(s):  

Abstract. Version 1.1 of the editorial of Geoscientific Model Development (GMD), published in 2015 (GMD Executive Editors, 2015), introduced clarifications to the policy on publication of source code and input data for papers published in the journal. Three years of working with this policy has revealed that it is necessary to be more precise in the requirements of the policy and in the narrowness of its exceptions. Furthermore, the previous policy was not specific about the requirements for suitable archival locations. Best practice in code and data archiving continues to develop and is far from universal among scientists. This has resulted in many manuscripts requiring improvement in code and data availability practice during the peer-review process. New researchers continually start their professional lives, and it remains the case that not all authors fully appreciate why code and data publication is necessary. This editorial provides an opportunity to explain this in the context of GMD. The changes in the code and data policy are summarised as follows:
- The requirement for authors to publish source code, unless this is impossible for reasons beyond their control, is clarified.
- The minimum requirements are strengthened such that all model code must be made accessible during the review process to the editor and to potentially anonymous reviewers.
- Source code that can be made public must be made public, and embargoes are not permitted. Identical requirements exist for input data and model evaluation data sets in the model experiment descriptions.
- The scope of the code and data required to be published is described. In accordance with Copernicus' own data policy, we now specifically strongly encourage that all code and data used in any analyses be made available. This will have particular relevance for some model evaluation papers, where editors may now strongly request this material be made available.
- The requirements for suitable archival locations are specified, along with the recommendation that Zenodo is often a good choice.
In addition, since the last editorial, an “Author contributions” section must now be included in all manuscripts.


Author(s):  
Mi-Ja Woo
Jerome P. Reiter
Anna Oganian
Alan F. Karr

When releasing microdata to the public, data disseminators typically alter the original data to protect the confidentiality of database subjects' identities and sensitive attributes. However, such alteration negatively impacts the utility (quality) of the released data. In this paper, we present quantitative measures of data utility for masked microdata, with the aim of improving disseminators' evaluations of competing masking strategies. The measures, which are global in that they reflect similarities between the entire distributions of the original and released data, utilize empirical distribution estimation, cluster analysis, and propensity scores. We evaluate the measures using both simulated and genuine data. The results suggest that measures based on propensity score methods are the most promising for general use.
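The propensity-score idea can be illustrated with a small sketch. This is a simplified, hypothetical implementation (the function name, the one-feature logistic model, and the synthetic data are assumptions, not the authors' code): stack the original and masked records, fit a logistic model predicting which file each record came from, and summarise how far the fitted propensities stray from the masked-data share. A score near zero means the model cannot tell the files apart, i.e., the masking preserved the distribution well.

```python
import math
import random

def propensity_utility(original, masked, epochs=500, lr=0.1):
    """Propensity-score utility measure, in the spirit of Woo et al.:
    label original records 0 and masked records 1, fit a logistic
    model on the stacked data, and return U = mean((p_i - c)^2),
    where c is the masked-data share.  U near 0 = high utility."""
    xs = original + masked
    ys = [0] * len(original) + [1] * len(masked)
    n = len(xs)
    c = len(masked) / n
    # one-feature logistic regression fitted by plain gradient descent
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return sum((1.0 / (1.0 + math.exp(-(w * x + b))) - c) ** 2
               for x in xs) / n

random.seed(0)
orig = [random.gauss(0, 1) for _ in range(500)]
light = [x + random.gauss(0, 0.1) for x in orig]  # mild noise masking
heavy = [x + 3.0 for x in orig]                   # shifts the distribution

print(propensity_utility(orig, light))  # near 0: files indistinguishable
print(propensity_utility(orig, heavy))  # clearly larger: masking distorts
```

A production version would use a richer propensity model (all variables plus interactions, as the paper's setting implies) rather than a single feature, but the scoring logic is the same.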


Author(s):  
Kimberlyn McGrail
Michael Burgess
Kieran O'Doherty
Colene Bentley
Jack Teng

Introduction: Research using linked data sets can lead to new insights and discoveries that positively impact society. However, the use of linked data raises concerns relating to illegitimate use, privacy, and security (e.g., identity theft, marginalization of some groups). It is increasingly recognized that the public needs to be consulted to develop data access systems that consider both the potential benefits and risks of research. Indeed, there are examples of data sharing projects being derailed by backlash in the absence of adequate consultation (e.g., care.data in the UK).
Objectives and Methods: This talk will describe the results of public deliberations held in Vancouver, British Columbia in April 2018 and the fall of 2019. The purpose of these events was to develop informed and civic-minded public advice regarding the use and sharing of linked data for research in the context of rapidly evolving data availability and researcher aspirations.
Results: In the first deliberation, participants developed and voted on 19 policy-relevant statements. Taken together, these statements provide a broad view of public support and concerns regarding the use of linked data sets for research and offer guidance on measures that can be taken to improve the trustworthiness of policies and processes around data sharing and use. The second deliberation will focus on the interplay between public and private sources of data, and the role of individual and collective or community consent in the future.
Conclusion: Generally, participants were supportive of research using linked data because of the value such uses can provide to society. Participants expressed a desire to see the data access request process made more efficient to facilitate more research, as long as there are adequate protections in place around the security and privacy of the data. These protections include both physical and process-related safeguards as well as a high degree of transparency.


Ravnetrykk
2020
Author(s):  
Aysa Ekanger
Solveig Enoksen

How can a library publishing service with limited resources help editorial teams of peer-reviewed journals in their work? This paper focuses on the technical aspects of the peer review workflow that, if set up and adhered to properly, can contribute to improving the standard of the peer review process – and to some degree also the quality of peer review. The discussion is based on the work done at Septentrio Academic Publishing, the institutional service provider for open access publishing at UiT The Arctic University of Norway.

