Best Practices in Data Analysis and Sharing in Neuroimaging using MEEG

Author(s):  
Cyril R Pernet ◽  
Marta Garrido ◽  
Alexandre Gramfort ◽  
Natasha Maurits ◽  
Christoph Michel ◽  
...  

Non-invasive neuroimaging methods, including magnetoencephalography and electroencephalography (MEEG), have been critical in advancing the understanding of brain function in healthy people and in individuals with neurological or psychiatric disorders. Currently, scientific practice is undergoing a tremendous change, aiming to improve both research reproducibility and transparency in data collection, documentation and analysis, and in manuscript review. To advance the practice of open science, the Organization for Human Brain Mapping created the Committee on Best Practice in Data Analysis and Sharing (COBIDAS), which produced a report for MRI-based data in 2016. This effort continues with the OHBM’s COBIDAS MEEG committee whose task was to create a similar document that describes best practice recommendations for MEEG data. The document was drafted by OHBM experts in MEEG, with input from the world-wide brain imaging community, including OHBM members who volunteered to help with this effort, as well as Executive Committee members of the International Federation for Clinical Neurophysiology. This document outlines the principles of performing open and reproducible research in MEEG. Not all MEEG data practices are described in this document. Instead, we propose principles that we believe are current best practice for most recordings and common analyses. Furthermore, we suggest reporting guidelines for Authors that will enable others in the field to fully understand and potentially replicate any study. This document should be helpful to Authors, Reviewers of manuscripts, as well as Editors of neuroscience journals.

2016 ◽  
Author(s):  
Thomas E. Nichols ◽  
Samir Das ◽  
Simon B. Eickhoff ◽  
Alan C. Evans ◽  
Tristan Glatard ◽  
...  

Neuroimaging enables rich noninvasive measurements of human brain activity, but translating such data into neuroscientific insights and clinical applications requires complex analyses and collaboration among a diverse array of researchers. The open science movement is reshaping scientific culture and addressing the challenges of transparency and reproducibility of research. To advance open science in neuroimaging, the Organization for Human Brain Mapping created the Committee on Best Practice in Data Analysis and Sharing (COBIDAS), charged with creating a report that collects best practice recommendations from experts and the entire brain imaging community. The purpose of this work is to elaborate the principles of open and reproducible research for neuroimaging using Magnetic Resonance Imaging (MRI), and then distill these principles into specific research practices. Many elements of a study are so varied that practice cannot be prescribed, but for these areas we detail the information that must be reported to fully understand and potentially replicate a study. For other elements of a study, like statistical modelling, where specific poor practices can be identified, and the emerging areas of data sharing and reproducibility, we detail both good practice and reporting standards. For each of seven areas of a study we provide a tabular listing of over 100 items to help plan, execute, report and share research in the most transparent fashion. Whether for individual scientists, or for editors and reviewers, we hope these guidelines serve as a benchmark to raise the standards of practice and reporting in neuroimaging using MRI.


2016 ◽  
Vol 52 (87) ◽  
pp. 12792-12805 ◽  
Author(s):  
D. Brynn Hibbert ◽  
Pall Thordarson

The failure of the Job plot, best practice in uncertainty estimation in host–guest binding studies, and an open-access web portal for data analysis are reviewed in this Feature Article.


Author(s):  
Michael Trizna ◽  
Leah Wasser ◽  
David Nicholson

pyOpenSci (short for Python Open Science), funded by the Alfred P. Sloan Foundation, is building a diverse community that supports well documented, open source Python software that enables open reproducible science. pyOpenSci will work with the community to openly develop best practice guidelines and open standards for scientific Python software, which will be reinforced through a community-led peer review process and training. Packages that complete the peer review process become a part of the pyOpenSci ecosystem, where maintenance can be shared to ensure longevity and stability in code. pyOpenSci packages are also eligible for a “fast tracked” acceptance to JOSS (Journal of Open Source Software). In addition, we provide review for open science tools that would be of interest to TDWG members but are not within scope for JOSS, such as API (Application Programming Interface) wrappers. pyOpenSci is built on top of the successful model of rOpenSci, founded in 2011, which has fostered the development of several useful biodiversity informatics R packages. The pyOpenSci team looks to follow the lessons learned by rOpenSci to create a similarly successful community. We invite TDWG members developing open source software tools in Python to become part of the pyOpenSci community.


2021 ◽  
Author(s):  
Louis Krieger ◽  
Remko Nijzink ◽  
Gitanjali Thakur ◽  
Chandrasekhar Ramakrishnan ◽  
Rok Roskar ◽  
...  

Good scientific practice requires good documentation and traceability of every research step in order to ensure reproducibility and repeatability of our research. However, with increasing data availability and the ability to record big data, experiments and data analysis become more complex. This complexity often requires many pre- and post-processing steps that all need to be documented for reproducibility of final results. This poses very different challenges for numerical experiments, laboratory work and field-data analysis. The platform Renku (https://renkulab.io/), developed by the Swiss Data Science Center, aims at facilitating reproducibility and repeatability of all these scientific workflows. Renku stores all data, code and scripts in an online repository and records in their history how these files are generated, interlinked and modified. The linkages between files (inputs, code and outputs) form the so-called knowledge graph, used to record the provenance of results and to connect them with all other relevant entities in the project.

We will discuss here several use examples, including mathematical analysis, laboratory experiments, data analysis and numerical experiments, all related to scientific projects presented separately. Reproducibility of mathematical analysis is facilitated by clear variable definitions and a computer algebra package that enables reproducible symbolic derivations. We will present the use of the Python package ESSM (https://essm.readthedocs.io) for this purpose, and how it can be integrated into a Renku workflow. Reproducibility of laboratory results is facilitated by tracking experimental conditions for each data record and instrument re-calibration activities, mainly through Jupyter notebooks. Data analysis based on different data sources requires preserving links to external datasets and snapshots of the dataset versions imported into the project, which Renku facilitates. Renku also maintains clear links between the input, code and output of large numerical experiments, our last use example, and enables systematic updating if any of the input or code files change.

These different examples demonstrate how Renku can assist in documenting the scientific process from input to output and the final paper. All code and data are directly available online, and the recording of the workflows ensures reproducibility and repeatability.
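The provenance idea described above, linking inputs, code and outputs so that stale results can be detected and re-run, can be illustrated with a minimal sketch. This is not Renku's actual API; the file names and the checksum-based record are hypothetical, shown only to make the knowledge-graph concept concrete.

```python
# Minimal provenance record linking inputs, code and outputs by checksum
# (illustrative only; not Renku's implementation or API).
import hashlib
import json


def sha256(content: bytes) -> str:
    """Return the hex digest identifying a file's exact content."""
    return hashlib.sha256(content).hexdigest()


# Hypothetical pipeline step: raw data -> analysis script -> result.
raw = b"station,temp\nA,21.3\nB,19.8\n"
script = b"compute mean of the temp column"
result = b"20.55"

record = {
    "inputs": {"raw.csv": sha256(raw)},
    "code": {"analyse.py": sha256(script)},
    "outputs": {"mean.txt": sha256(result)},
}

# If any input or code checksum changes, every downstream output linked to
# it can be flagged as stale -- the basis for systematic re-execution.
print(json.dumps(record, indent=2))
```

A real workflow system additionally stores these records in version history, so the full chain from raw data to final figure can be traversed and re-executed.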


2021 ◽  
Author(s):  
Lauryn Hagg ◽  
Stephanie S Merkouris ◽  
Gypsy A O’Dea ◽  
Lauren M Francis ◽  
Christopher J Greenwood ◽  
...  

BACKGROUND: Latent Dirichlet Allocation (LDA) is a tool for rapidly synthesising meaning from ‘big data’, but outputs can be sensitive to decisions made during the analytic pipeline. This review focuses on the complex analytical practices specific to LDA, which existing practical guides for conducting LDA have not addressed. OBJECTIVE: This scoping review uses key analytical steps (data selection, data pre-processing, and data analysis) as a framework to understand the methodological approaches used in psychology research utilising LDA. METHODS: Four psychology and health databases were searched. Studies were included if they used LDA to analyse written words and focused on a psychological construct or issue. The data charting process was constructed and employed based on common data selection, pre-processing, and data analysis steps. RESULTS: Forty-seven studies were included. These explored a range of research areas, and most sourced their data from social media platforms. While some studies reported the pre-processing and data analytic steps taken, most did not provide sufficient detail for reproducibility. Furthermore, debate surrounding the necessity of certain pre-processing and data analysis steps is revealed. CONCLUSIONS: Findings highlight the growing use of LDA in psychological science. However, there is a need to improve analytical reporting standards and to identify comprehensive, evidence-based best practice recommendations. To work towards this, we have developed an LDA Preferred Reporting Checklist that will allow consistent documentation of LDA analytic decisions and reproducible research outcomes.


Author(s):  
Joshua Biro ◽  
David M. Neyens ◽  
Candace Jaruzel ◽  
Catherine D. Tobin ◽  
Myrtede Alfred ◽  
...  

Medication errors and error-related scenarios in anesthesia remain an important area of research. Interventions and best practice recommendations in anesthesia are often based in the work-as-imagined healthcare system, remaining under-used due to a range of unforeseen complexities in healthcare work-as-done. In order to design adaptable anesthesia medication delivery systems, a better understanding of clinical cognition within the context of anesthesia work is needed. Fourteen interviews probing anesthesia providers’ decision making were performed. The results revealed three overarching themes: (1) anesthesia providers find cases challenging when they have incomplete information, (2) decision-making begins with information seeking, and (3) attributes such as expertise, experience, and work environment influence anesthesia providers’ information seeking and synthesis of tasks. These themes and the context within this data help create a more realistic view of work-as-done and generate insights into what potential medication error reducing interventions should look to avoid and what they could help facilitate.


Author(s):  
David J. Gladstone ◽  
M. Patrice Lindsay ◽  
James Douketis ◽  
Eric E Smith ◽  
Dar Dowlatshahi ◽  
...  

Author(s):  
Elizabeth M. Perpetua ◽  
Kimberly A. Guibone ◽  
Patricia A. Keegan ◽  
Roseanne Palmer ◽  
Martina K. Speight ◽  
...  
