Opportunities for increased reproducibility and replicability of developmental neuroimaging

Author(s):  
Eduard Klapwijk ◽  
Wouter van den Bos ◽  
Christian K. Tamnes ◽  
Nora Maria Raschle ◽  
Kathryn L. Mills

Many workflows and tools that aim to increase the reproducibility and replicability of research findings have been suggested. In this review, we discuss the opportunities that these efforts offer for the field of developmental cognitive neuroscience, in particular developmental neuroimaging. We focus on issues broadly related to statistical power and to flexibility and transparency in data analyses. Critical considerations relating to statistical power include challenges in recruitment and testing of young populations, how to increase the value of studies with small samples, and the opportunities and challenges related to working with large-scale datasets. Developmental studies involve challenges such as choices about age groupings, lifespan modelling, analyses of longitudinal changes, and data that can be processed and analyzed in a multitude of ways. Flexibility in data acquisition, analyses, and description may thereby greatly impact results. We discuss methods for improving transparency in developmental neuroimaging and how preregistration can improve methodological rigor. While outlining challenges and issues that may arise before, during, and after data collection, we highlight solutions and resources that help to overcome some of them. Since the number of useful tools and techniques is ever-growing, we emphasize that many practices can be implemented stepwise.
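To make the power issue concrete, here is a minimal sketch, not drawn from the review itself, of how quickly the required sample size grows as the expected effect shrinks (computed with statsmodels; the effect sizes are illustrative):

```python
# Minimal sketch: participants needed per group for a two-sample t-test at
# 80% power, illustrating why small developmental samples are easily
# underpowered. Effect sizes are illustrative, not taken from the review.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # small, medium, large Cohen's d
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"Cohen's d = {d}: ~{n:.0f} participants per group")
```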

Author(s):  
Frank A. Bosco

In some fields, research findings are rigorously curated in a common language and made available to enable future use and large-scale, robust insights. Organizational researchers have begun such efforts [e.g., metaBUS ( http://metabus.org/ )] but are far from the efficient, comprehensive curation seen in areas such as cognitive neuroscience or genetics. This review provides a sample of insights from research curation efforts in organizational research, psychology, and beyond—insights not attainable through even large-scale, substantive meta-analyses. Efforts are classified as either science-of-science research or large-scale, substantive research. The various methods used for information extraction (e.g., from PDF files) and classification (e.g., using consensus ontologies) are reviewed. The review concludes with a series of recommendations for developing and leveraging the available corpus of organizational research to speed scientific progress. Expected final online publication date for the Annual Review of Organizational Psychology and Organizational Behavior, Volume 9 is January 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
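As a purely illustrative sketch of the kind of information extraction and ontology-based classification described above (not the metaBUS pipeline; the regular expression and ontology terms are hypothetical):

```python
# Hypothetical sketch of curation-style information extraction: find reported
# correlations in article text and map nearby construct names onto a toy
# ontology. Not the metaBUS implementation.
import re

ONTOLOGY = {"job satisfaction": "SATISFACTION", "turnover": "TURNOVER"}  # toy ontology

def extract_findings(text):
    findings = []
    # crude pattern for "r = .xx" style effect sizes
    for match in re.finditer(r"r\s*=\s*(-?\.\d+)", text):
        context = text[max(0, match.start() - 80):match.start()].lower()
        constructs = [code for term, code in ONTOLOGY.items() if term in context]
        findings.append({"r": float(match.group(1)), "constructs": constructs})
    return findings

sample = "Job satisfaction correlated negatively with turnover intentions, r = -.23."
print(extract_findings(sample))
```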


2018 ◽  
Author(s):  
M. Jason de la Cruz ◽  
Michael W. Martynowycz ◽  
Johan Hattne ◽  
Tamir Gonen

Abstract We developed a procedure for the cryoEM method MicroED using SerialEM. With this approach, SerialEM coordinates stage rotation, microscope operation, and camera functions for automated continuous-rotation MicroED data collection. More than 300 datasets can be collected overnight in this way, facilitating high-throughput MicroED data collection for large-scale data analyses.
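A schematic sketch of the kind of automated collection loop such a procedure implies; every function below is a hypothetical placeholder, not a SerialEM command:

```python
# Schematic illustration of an automated continuous-rotation collection loop.
# The helper functions are hypothetical stand-ins for the stage, microscope,
# and camera operations that SerialEM coordinates; they only print what the
# real steps would do.
def move_stage_to(position):
    print(f"move stage to crystal at {position}")

def set_tilt(angle):
    print(f"pre-tilt stage to {angle} deg")

def record_rotation(name, start, end):
    print(f"record {name}: continuous rotation {start} -> {end} deg")

def collect_overnight(crystal_positions, start_angle=-60.0, end_angle=60.0):
    for i, pos in enumerate(crystal_positions):
        move_stage_to(pos)                                    # drive stage to the next crystal
        set_tilt(start_angle)                                 # pre-tilt before rotation
        record_rotation(f"dataset_{i:04d}", start_angle, end_angle)  # camera rolls during rotation

collect_overnight([(12.3, -4.1), (15.8, 2.6)])                # two example crystal positions
```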


2017 ◽  
Author(s):  
Daniel Lakens

Designing experiments with high statistical power is a practical challenge because effect size estimates in psychology are often inaccurate. This challenge can be addressed by performing sequential analyses while data collection is still in progress. At an interim analysis, data collection can be stopped when the results are convincing enough to conclude that an effect is present, continued when more data are needed, or terminated when it is extremely unlikely that the predicted effect would be observed even if data collection were continued. Such interim analyses can be performed while controlling the Type 1 error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs, in which sample sizes are increased based on the observed effect size. The need for pre-registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and null-hypothesis significance testing (NHST) are discussed. Sequential analyses, which are widely used in large-scale medical trials, provide an efficient way to perform high-powered, informative experiments. I hope this introduction provides a practical primer that allows researchers to incorporate sequential analyses in their research.
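A minimal sketch of the core idea, not taken from Lakens's materials: test at two planned looks against a Pocock-corrected per-look alpha of .0294, which keeps the overall Type 1 error rate near .05 across the two analyses; the data are simulated:

```python
# Minimal sketch of a two-look sequential design with a Pocock correction:
# test at an interim look and, if needed, at the final look against the
# nominal alpha of .0294 per look. Data are simulated for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)
POCOCK_ALPHA = 0.0294          # per-look threshold for two planned looks
n_interim, n_final = 50, 100   # participants per group at each look

group_a = rng.normal(0.4, 1.0, n_final)   # simulated condition with a true effect
group_b = rng.normal(0.0, 1.0, n_final)

for n in (n_interim, n_final):
    t, p = stats.ttest_ind(group_a[:n], group_b[:n])
    print(f"n per group = {n}: p = {p:.4f}")
    if p < POCOCK_ALPHA:
        print("Stop: effect supported at this look.")
        break
else:
    print("No look crossed the boundary; effect not supported.")
```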


2017 ◽  
Author(s):  
Rick Owen Gilmore ◽  
Michele Diaz ◽  
Brad Wyble ◽  
Tal Yarkoni

Accumulating evidence suggests that many findings in psychological science and cognitive neuroscience may prove difficult to reproduce; statistical power in brain imaging studies is low and has not improved recently; errors in widely used analysis software are common and can go undetected for many years; and, a few large-scale studies notwithstanding, open sharing of data, code, and materials remains the rare exception. At the same time, there is a renewed focus on reproducibility, transparency, and openness as essential core values in cognitive neuroscience. The emergence and rapid growth of data archives, meta-analytic tools, software pipelines, and research groups devoted to improved methodology reflect this new sensibility. We review evidence that the field has begun to embrace new open research practices and illustrate how these can begin to address problems of reproducibility, statistical power, and transparency in ways that will ultimately accelerate discovery.


2018 ◽  
Author(s):  
Brice Beffara Bret ◽  
Amélie Beffara Bret ◽  
Ladislas Nalborczyk

Despite many cultural, methodological, and technical improvements, one of the major obstacles to reproducibility of results remains pervasive low statistical power. In response to this problem, much attention has recently been drawn to sequential analyses. This type of procedure has been shown to be more efficient (requiring fewer observations and therefore fewer resources) than classical fixed-N procedures. However, these procedures are subject to both intrapersonal and interpersonal biases during data collection and data analysis. In this tutorial, we explain how automation can be used to prevent these biases. We show how to synchronise open and free experiment software programs with the Open Science Framework and how to automate sequential data analyses in R. This tutorial is intended for researchers with beginner-level experience in R; no previous experience with sequential analyses is required.


2021 ◽  
Vol 5 ◽  
Author(s):  
Brice Beffara Bret ◽  
Amélie Beffara Bret ◽  
Ladislas Nalborczyk

Despite many cultural, methodological, and technical improvements, one of the major obstacles to reproducibility of results remains pervasive low statistical power. In response to this problem, much attention has recently been drawn to sequential analyses. This type of procedure has been shown to be more efficient (requiring fewer observations and therefore fewer resources) than classical fixed-N procedures. However, these procedures are subject to both intrapersonal and interpersonal biases during data collection and data analysis. In this tutorial, we explain how automation can be used to prevent these biases. We show how to synchronise open and free experiment software programs with the Open Science Framework and how to automate sequential data analyses in R. This tutorial is intended for researchers with beginner-level experience in R; no previous experience with sequential analyses is required.
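The tutorial itself works in R with the Open Science Framework; the sketch below only illustrates, in Python, the kind of automated stopping decision the authors describe. The column names, boundary, and maximum sample size are invented for the example:

```python
# Illustrative sketch of an automated sequential stopping decision.
# In the workflow described above, the data frame would come from a file kept
# in sync with the Open Science Framework; here fabricated data stand in for it.
import numpy as np
import pandas as pd
from scipy import stats

BOUNDARY_P = 0.0294   # example per-look threshold from a two-look Pocock design
MAX_N = 200           # hypothetical maximum total sample size

def sequential_decision(df):
    """Decide whether to stop or continue after each automated data sync."""
    a = df.loc[df["condition"] == "A", "score"]
    b = df.loc[df["condition"] == "B", "score"]
    _, p = stats.ttest_ind(a, b)
    if p < BOUNDARY_P:
        return "stop: effect supported"
    if len(df) >= MAX_N:
        return "stop: maximum sample reached"
    return "continue data collection"

rng = np.random.default_rng(42)
interim = pd.DataFrame({
    "condition": np.repeat(["A", "B"], 20),
    "score": np.concatenate([rng.normal(0.6, 1.0, 20), rng.normal(0.0, 1.0, 20)]),
})
print(sequential_decision(interim))
```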


2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Alexander Ostrovsky ◽  
Jennifer Hillman‐Jackson ◽  
Dave Bouvier ◽  
Dave Clements ◽  
Enis Afgan ◽  
...  

2020 ◽  
Vol 11 (1) ◽  
pp. 109
Author(s):  
Jana Korytárová ◽  
Vít Hromádka

This article deals with the partial outputs of large-scale infrastructure project risk assessment, specifically in the field of road and motorway construction. The Department of Transport spends a large amount of funds on project preparation and implementation, which, however, must be allocated effectively and with knowledge of the risks that may accompany the projects. Therefore, documentation for decision-making on project financing also includes an analysis of these risks. This article monitors the frequency of occurrence of individual risk factors within the qualitative risk analysis, with the support of the national risk register, and identifies dependent variables that represent part of the economic cash flows for determining project economic efficiency. It then compares these dependent variables, identified by sensitivity analysis, with the critical variables, and tests the effect of the critical variables on project efficiency using the Monte Carlo method. Part of the research focused on analysing the probability distributions of the input variables, especially investment costs and the time savings of infrastructure users. The research concludes that careful attention must be paid to setting the statistical characteristics of the variables entering the economic efficiency calculations, as the decision on whether to accept projects for funding is based on them.
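A minimal Monte Carlo sketch of the kind of test described above, with assumed distributions and cash-flow figures that are not taken from the article: sample investment costs and users' time savings and observe how often the project's benefit-cost ratio stays above 1.

```python
# Illustrative Monte Carlo test of how two critical variables (investment
# costs and users' time savings) drive an economic efficiency indicator.
# All distributions and monetary figures are assumed for the sketch.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

investment_cost = rng.triangular(0.9e9, 1.0e9, 1.3e9, N)   # CZK, assumed triangular spread
time_savings    = rng.normal(60e6, 15e6, N)                # CZK per year, assumed normal
other_benefits  = 30e6                                     # CZK per year, held fixed
years, discount = 30, 0.05

# discounted benefits over the appraisal period vs. the investment cost
annuity = (1 - (1 + discount) ** -years) / discount
bcr = (time_savings + other_benefits) * annuity / investment_cost

print(f"mean benefit-cost ratio: {bcr.mean():.2f}")
print(f"P(BCR < 1): {(bcr < 1).mean():.1%}")
```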


2021 ◽  
pp. 1-8
Author(s):  
Norin Ahmed ◽  
Jessica K. Bone ◽  
Gemma Lewis ◽  
Nick Freemantle ◽  
Catherine J. Harmer ◽  
...  

Abstract Background According to the cognitive neuropsychological model, antidepressants reduce symptoms of depression and anxiety by increasing positive relative to negative information processing. Most studies of whether antidepressants alter emotional processing use small samples of healthy individuals, which leads to low statistical power and selection bias and makes the results difficult to generalise to clinical practice. We tested whether the selective serotonin reuptake inhibitor (SSRI) sertraline altered recall of positive and negative information in a large randomised controlled trial (RCT) of patients with depressive symptoms recruited from primary care. Methods The PANDA trial was a pragmatic multicentre double-blind RCT comparing sertraline with placebo. Memory for personality descriptors was tested at baseline and at 2 and 6 weeks after randomisation using a computerised emotional categorisation task followed by free recall. We measured the number of positive and negative words correctly recalled (hits). Poisson mixed models were used to analyse longitudinal associations between treatment allocation and hits. Results A total of 576 participants (88% of those randomised) completed the recall task at 2 and 6 weeks. We found no evidence that positive or negative hits differed according to treatment allocation at 2 or 6 weeks (adjusted positive hits ratio = 0.97, 95% CI 0.90–1.05, p = 0.52; adjusted negative hits ratio = 0.99, 95% CI 0.90–1.08, p = 0.76). Conclusions In the largest individual placebo-controlled trial of an antidepressant not funded by the pharmaceutical industry, we found no evidence that sertraline altered positive or negative recall early in treatment. These findings challenge some assumptions of the cognitive neuropsychological model of antidepressant action.
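For a longitudinal count outcome of this kind, the sketch below fits a Poisson regression with participant-clustered standard errors as a simplified stand-in for the trial's Poisson mixed models; all data and variable names are invented:

```python
# Simplified sketch: model recalled "hits" as a Poisson count by treatment arm
# and week, with standard errors clustered by participant. This approximates,
# but is not, the trial's Poisson mixed-model analysis; data are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 2),
    "week": np.tile([2, 6], n),
    "sertraline": np.repeat(rng.integers(0, 2, n), 2),
})
df["positive_hits"] = rng.poisson(6, len(df))   # invented counts, no true drug effect

model = smf.glm("positive_hits ~ sertraline * week", data=df,
                family=sm.families.Poisson())
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["participant"]})
print(np.exp(result.params))   # rate ratios, e.g. the hits ratio for sertraline vs placebo
```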


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

Abstract In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that is able to deal with small samples. We evaluated the performance of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in the two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
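A minimal sketch of the G-computation step described above: fit an outcome model on covariates plus exposure, predict each subject's outcome probability in the two counterfactual worlds (everyone exposed vs. no one exposed), and contrast the averages. A scikit-learn stacking classifier stands in, loosely, for the super learner, and the data are simulated:

```python
# Minimal G-computation sketch with a stacked ("super learner"-style) outcome
# model: predict each subject's outcome probability under exposure and under
# no exposure, then average the two counterfactual predictions. Simulated data.
import numpy as np
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))                          # baseline covariates
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # exposure depends on covariates
logit = -0.5 + 0.8 * A + X[:, 1] - 0.5 * X[:, 2]
Y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # binary outcome

outcome_model = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("gbt", GradientBoostingClassifier()),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=1000))
outcome_model.fit(np.column_stack([X, A]), Y)

p1 = outcome_model.predict_proba(np.column_stack([X, np.ones(n)]))[:, 1]   # all exposed
p0 = outcome_model.predict_proba(np.column_stack([X, np.zeros(n)]))[:, 1]  # all unexposed
print(f"marginal risk difference: {p1.mean() - p0.mean():.3f}")
print(f"marginal risk ratio:      {p1.mean() / p0.mean():.3f}")
```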

