code sharing
Recently Published Documents

TOTAL DOCUMENTS: 118 (five years: 36)
H-INDEX: 10 (five years: 2)

2021
Author(s): Matthew J Page, Phi-Yen Nguyen, Daniel G Hamilton, Neal R Haddaway, Raju Kanukula, ...

Objectives: To estimate the frequency of data and code availability statements in a random sample of systematic reviews with meta-analysis of aggregate data, summarise the content of the statements and investigate how often data and code files were shared. Methods: We searched for systematic reviews with meta-analysis of aggregate data on the effects of a health, social, behavioural or educational intervention that were indexed in PubMed, Education Collection via ProQuest, Scopus via Elsevier, and Social Sciences Citation Index and Science Citation Index Expanded via Web of Science during a four-week period (between November 2nd and December 2nd, 2020). Records were randomly sorted and screened independently by two authors until our target sample of 300 systematic reviews was reached. Two authors independently recorded whether a data or code availability statement (or both) appeared in each review and coded the content of the statements using an inductive approach. Results: Of the 300 included systematic reviews with meta-analysis, 86 (29%) had a data availability statement and seven (2%) had both a data and code availability statement. In 12/93 (13%) data availability statements, authors stated that data files were available for download from the journal website or a data repository, which we verified as being true. While 39/93 (42%) authors stated data were available upon request, 37/93 (40%) implied that sharing of data files was not necessary or applicable to them, most often because "all data appear in the article" or "no datasets were generated or analysed". Discussion: Data and code availability statements appear infrequently in systematic review manuscripts. Authors who do provide a data availability statement often incorrectly imply that data sharing is not applicable to systematic reviews. Our results suggest the need for various interventions to increase data and code sharing by systematic reviewers.


Author(s): Belén Payán-Sánchez, Miguel Pérez-Valls, José Antonio Plaza-Úbeda, Diego Vázquez-Brust

Author(s): A. A. Barkalov, L. A. Titarenko, A. V. Baiev, A. V. Matviienko

F1000Research, 2021, Vol 10, pp. 491
Author(s): Daniel G. Hamilton, Hannah Fraser, Fiona Fidler, Steve McDonald, Anisa Rowhani-Farid, ...

Numerous studies have demonstrated low but increasing rates of data and code sharing within medical and health research disciplines. However, it remains unclear how commonly data and code are shared across all fields of medical and health research, as well as whether sharing rates are positively associated with implementation of progressive policies by publishers and funders, or growing expectations from the medical and health research community at large. Therefore, this systematic review aims to synthesise the findings of medical and health science studies that have empirically investigated the prevalence of data or code sharing, or both. Objectives include the investigation of: (i) the prevalence of public sharing of research data and code alongside published articles (including preprints), (ii) the prevalence of private sharing of research data and code in response to reasonable requests, and (iii) factors associated with the sharing of either research output (e.g., the year published, the publisher’s policy on sharing, the presence of a data or code availability statement). It is hoped that the results will provide some insight into how often research data and code are shared publicly and privately, how this has changed over time, and how effective some measures such as the institution of data sharing policies and data availability statements have been in motivating researchers to share their underlying data and code.


Author(s):  
Christopher J ◽  
Jinwoo Yom ◽  
Changwoo Min ◽  
Yeongjin Jang

Address Space Layout Randomization (ASLR) set an early example as a lightweight defense technique that could prevent early return-oriented programming attacks. Simple yet effective, ASLR was quickly and widely adopted. By contrast, today only a trickle of defense techniques reach mainstream integration or adoption. As code reuse attacks have evolved, defenses have strived to keep up. To do so, many have had to accept unfavorable tradeoffs such as using background threads or protecting only a subset of sensitive code. In reality, these tradeoffs were unavoidable steps necessary to improve the strength of the state of the art. We present Goose, an on-demand, system-wide runtime re-randomization technique capable of scalable protection of application code as well as the shared library code that most defenses have forgone. We achieve code sharing together with diversification by implementing reactive, scalable diversification rather than continuous or one-time diversification. Enabling code sharing further removes redundant computation, such as tracking and patching, along with the memory overheads required by prior randomization techniques. In its baseline state, the code transformations needed for Goose's security hardening incur a reasonable performance overhead of 5.5% on SPEC and a minimal degradation of 4.4% in NGINX, demonstrating its applicability to both compute-intensive and scalable real-world applications. Even under attack, Goose adds only between less than 1% and 15% overhead, depending on application complexity.
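The premise of the abstract above, that ASLR gives each execution a fresh memory layout, can be observed from user space. A minimal sketch, assuming a Linux or macOS host with ASLR enabled (the helper name is ours, not from the paper): it launches fresh interpreter processes and reports the heap address of a newly created object in each run.

```python
import subprocess
import sys

def heap_address_of_fresh_object() -> str:
    """Launch a fresh interpreter and report the heap address of a new object."""
    out = subprocess.run(
        [sys.executable, "-c", "print(hex(id(object())))"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# With ASLR active, the heap base moves between runs, so these two
# addresses will typically differ.
addr_a = heap_address_of_fresh_object()
addr_b = heap_address_of_fresh_object()
print(addr_a, addr_b)
```

With ASLR disabled (e.g. via `setarch -R` on Linux), the two addresses would typically coincide, which is exactly the predictability that re-randomization schemes like the one described above aim to avoid at runtime.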


Author(s):  
Rubén López-Nicolás ◽  
José Antonio López-López ◽  
María Rubio-Aparicio ◽  
Julio Sánchez-Meca

Meta-analysis is a powerful and important tool for synthesizing the literature on a research topic. Like other kinds of research, meta-analyses must be reproducible to comply with the principles of the scientific method. Furthermore, reproducible meta-analyses can be easily updated with new data and reanalysed using new and more refined analysis techniques. We attempted to empirically assess the prevalence of transparency and reproducibility-related reporting practices in published meta-analyses from clinical psychology by examining a random sample of 100 meta-analyses. Our purpose was to identify the key points that could be improved, with the aim of providing recommendations for carrying out reproducible meta-analyses. We conducted a meta-review of meta-analyses of psychological interventions published between 2000 and 2020. We searched the PubMed, PsycInfo and Web of Science databases. A structured coding form to assess transparency indicators was created based on previous studies and existing meta-analysis guidelines. We found major issues concerning: reporting of fully reproducible search procedures, specification of the exact method used to compute effect sizes, choice of weighting factors and estimators, lack of availability of the raw statistics used to compute the effect sizes, lack of interoperability of available data, and an almost total absence of analysis script code sharing. Based on our findings, we conclude with recommendations intended to improve the transparency, openness, and reproducibility-related reporting practices of meta-analyses in clinical psychology and related areas.
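One of the under-reported choices flagged above, the exact method used to compute and weight effect sizes, is easy to make concrete. A minimal sketch of inverse-variance (fixed-effect) pooling; the function name and the numbers are ours, hypothetical, not drawn from the meta-review:

```python
import math

def pool_fixed_effect(effects, variances):
    """Combine study effect sizes using inverse-variance (fixed-effect) weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

effects = [0.30, 0.50, 0.10]    # hypothetical standardised mean differences
variances = [0.04, 0.09, 0.01]  # hypothetical sampling variances
pooled, se = pool_fixed_effect(effects, variances)
print(f"pooled = {pooled:.3f}, SE = {se:.3f}")  # → pooled = 0.169, SE = 0.086
```

Reporting exactly this kind of choice, fixed- versus random-effects, the weighting scheme, and the raw statistics behind each effect size, is what allows a reader to recompute the pooled estimate and makes a meta-analysis reproducible.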



2021
Author(s): Iain Hrynaszkiewicz, James Harney, Lauren Cadwallader

Sharing of code supports reproducible research, but fewer journals have policies on code sharing than on data sharing, and there is little evidence on researchers’ attitudes toward and experiences with code sharing. Before introducing a stronger policy on sharing of code, the Editors and publisher of the journal PLOS Computational Biology wished to test, via an online survey, the suitability of a proposed mandatory code sharing policy with its community of authors. Previous research established that, in 2019, 41% of papers in the journal linked to shared code. We also wanted to understand the potential impact of the proposed policy on authors' submissions to the journal, and their concerns about code sharing. We received 214 completed survey responses, all from researchers who had previously generated code in their research; 80% had published in PLOS Computational Biology and 88% were based in Europe or North America. Overall, respondents reported they were more likely to submit to the journal if it had a mandatory code sharing policy, and US researchers were more positive than the average for all respondents. Researchers whose main discipline is Medicine and Health Sciences viewed the proposed policy less favourably, as did the most senior researchers (those with more than 100 publications) compared to early- and mid-career researchers. The authors surveyed reported that, on average, 71% of their research articles have associated code, and that for the average author, code has not been shared for 32% of these papers. The most common reasons for not sharing code previously were practical issues, which are unlikely to prevent compliance with the policy; a lack of time to share code was the most common reason. 22% of respondents who had not shared their code in the past cited intellectual property (IP) concerns, a concern that might prevent public sharing of code under a mandatory code sharing policy. The results also imply that for 18% of the respondents’ previous publications the associated code was not shared and IP concerns were not cited, suggesting more papers in the journal could share code. To remain inclusive of all researchers in the community, the policy was designed to allow researchers who can demonstrate they are legally restricted from sharing their code to be granted an exemption from public sharing of code. As a secondary goal of the survey, we wanted to determine whether researchers have unmet needs in their ability to share their own code and to access other researchers' code. Consistent with our previous research on data sharing, we found potential opportunities for new products or features that support code accessibility or reuse. We found researchers were, on average, satisfied with their ability to share their own code, suggesting that offering new products or features to support sharing in the absence of a stronger policy would not increase the availability of code alongside the journal's publications.


2021, Vol 17 (3), pp. e1008867
Author(s): Lauren Cadwallader, Jason A. Papin, Feilim Mac Gabhann, Rebecca Kirk
