A fully automated, transparent, reproducible, and blind protocol for sequential analyses

2018
Author(s):  
Brice Beffara Bret ◽  
Amélie Beffara Bret ◽  
Ladislas Nalborczyk

Despite many cultural, methodological, and technical improvements, one of the major obstacles to the reproducibility of results remains pervasive low statistical power. In response to this problem, much attention has recently been drawn to sequential analyses. This type of procedure has been shown to be more efficient (to require fewer observations and therefore fewer resources) than classical fixed-N procedures. However, these procedures are subject to both intrapersonal and interpersonal biases during data collection and data analysis. In this tutorial, we explain how automation can be used to prevent these biases. We show how to synchronise open and free experiment software programs with the Open Science Framework and how to automate sequential data analyses in R. This tutorial is intended for researchers with beginner-level experience in R; no previous experience with sequential analyses is required.
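The tutorial itself works in R; as a minimal, hypothetical sketch of the core idea (in Python), a sequential design analyses the data at pre-planned interim looks against a corrected threshold and stops as soon as that threshold is crossed. The function name and simulated data are invented for illustration; the nominal alpha of .0221 is the standard Pocock correction for three equally spaced looks at an overall alpha of .05.

```python
import numpy as np
from scipy import stats

def sequential_ttest(samples, look_sizes, nominal_alpha):
    """One-sample sequential t-test: analyse the data at each pre-planned
    interim look and stop as soon as p falls below the corrected threshold.
    Returns (stopped_early, n_used, p_at_stop)."""
    p = 1.0
    for n in look_sizes:
        _, p = stats.ttest_1samp(samples[:n], popmean=0.0)
        if p < nominal_alpha:
            return True, n, p
    return False, look_sizes[-1], p

rng = np.random.default_rng(42)
# Simulated large effect (true mean 0.8, SD 1): should stop at an early look
data = rng.normal(loc=0.8, scale=1.0, size=90)
# Pocock nominal alpha for 3 equally spaced looks at overall alpha = .05
stopped, n_used, p = sequential_ttest(data, look_sizes=(30, 60, 90),
                                      nominal_alpha=0.0221)
print(stopped, n_used)
```

With an effect this large, the procedure typically stops at the first look of 30 observations, which is the resource saving over a fixed-N design that the abstract describes.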

2021
Vol 5
Author(s):  
Brice Beffara Bret ◽  
Amélie Beffara Bret ◽  
Ladislas Nalborczyk

Despite many cultural, methodological, and technical improvements, one of the major obstacles to the reproducibility of results remains pervasive low statistical power. In response to this problem, much attention has recently been drawn to sequential analyses. This type of procedure has been shown to be more efficient (to require fewer observations and therefore fewer resources) than classical fixed-N procedures. However, these procedures are subject to both intrapersonal and interpersonal biases during data collection and data analysis. In this tutorial, we explain how automation can be used to prevent these biases. We show how to synchronise open and free experiment software programs with the Open Science Framework and how to automate sequential data analyses in R. This tutorial is intended for researchers with beginner-level experience in R; no previous experience with sequential analyses is required.


2019
Author(s):  
Eduard Klapwijk ◽  
Wouter van den Bos ◽  
Christian K. Tamnes ◽  
Nora Maria Raschle ◽  
Kathryn L. Mills

Many workflows and tools that aim to increase the reproducibility and replicability of research findings have been suggested. In this review, we discuss the opportunities that these efforts offer for the field of developmental cognitive neuroscience, in particular developmental neuroimaging. We focus on issues broadly related to statistical power and to flexibility and transparency in data analyses. Critical considerations relating to statistical power include challenges in recruitment and testing of young populations, how to increase the value of studies with small samples, and the opportunities and challenges related to working with large-scale datasets. Developmental studies involve challenges such as choices about age groupings, lifespan modelling, analyses of longitudinal changes, and data that can be processed and analyzed in a multitude of ways. Flexibility in data acquisition, analyses, and description may thereby greatly impact results. We discuss methods for improving transparency in developmental neuroimaging, and how preregistration can improve methodological rigor. While outlining challenges and issues that may arise before, during, and after data collection, we highlight solutions and resources that help overcome some of them. Since the number of useful tools and techniques is ever-growing, we emphasize that many practices can be implemented stepwise.


2018
Author(s):  
Andres Montealegre ◽  
William Jimenez-Leal

According to the social heuristics hypothesis, people intuitively cooperate or defect depending on which behavior is beneficial in their interactions. If cooperation is beneficial, people intuitively cooperate, but if defection is beneficial, they intuitively defect. However, deliberation promotes defection. Here, we tested two novel predictions regarding the role of trust in the social heuristics hypothesis. First, whether trust promotes intuitive cooperation. Second, whether preferring to think intuitively or deliberatively moderates the effect of trust on cooperation. In addition, we examined whether deciding intuitively promotes cooperation, compared to deciding deliberatively. To evaluate these predictions, we conducted a lab study in Colombia and an online study in the United Kingdom (N = 1,066; one study was pre-registered). Unexpectedly, higher trust failed to promote intuitive cooperation, though higher trust promoted cooperation. In addition, preferring to think intuitively or deliberatively failed to moderate the effect of trust on cooperation, although preferring to think intuitively increased cooperation. Moreover, deciding intuitively failed to promote cooperation, and equivalence testing confirmed that this null result was explained by the absence of an effect, rather than a lack of statistical power (equivalence bounds: d = -0.26 and 0.26). An intuitive cooperation effect emerged when non-compliant participants were excluded, but this effect could be due to selection biases. Taken together, most results failed to support the social heuristics hypothesis. We conclude by discussing implications, future directions, and limitations. The materials, data, and code are available on the Open Science Framework (https://osf.io/939jv/).
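The equivalence test reported above can be illustrated with a small sketch. The abstract gives no code, so the following Python implementation of the two one-sided tests (TOST) procedure against the stated bounds of d = -0.26 and 0.26 is a hypothetical reconstruction run on simulated data, not the study's own analysis.

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low_d, high_d):
    """Equivalence via two one-sided t-tests (TOST): reject the hypothesis
    that the group difference lies outside [low_d, high_d], where the
    bounds are Cohen's d values converted to raw units via the pooled SD."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    se = sp * np.sqrt(1 / nx + 1 / ny)
    diff = np.mean(x) - np.mean(y)
    df = nx + ny - 2
    p_lower = stats.t.sf((diff - low_d * sp) / se, df)    # H0: diff <= lower bound
    p_upper = stats.t.cdf((diff - high_d * sp) / se, df)  # H0: diff >= upper bound
    return max(p_lower, p_upper)  # equivalence claimed if this is < alpha

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)  # two groups with no true difference
y = rng.normal(0.0, 1.0, 1000)
p = tost_two_sample(x, y, low_d=-0.26, high_d=0.26)
print(p)
```

A significant TOST result supports the abstract's conclusion pattern: the null result reflects the absence of an effect inside the bounds rather than low power.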


2019
Author(s):  
Grace Elizabeth Binion ◽  
Jack Dennis Arnal ◽  
Benjamin T. Brown ◽  
Pamela Davis-Kean ◽  
Melissa Kline

The field of psychology has increasingly focused on factors that influence the robustness and replicability of psychological research, illuminating practices that individual investigators might adopt to improve the credibility of their work. These practices include pre-registration of study designs and analytic plans, sharing of study materials, sharing of study data, and circulation of preprints. Several tools have been developed to facilitate the adoption of these practices. These tools, however, were largely developed by and for investigators in areas of psychology that do not share the concerns and constraints of developmental scientists, such as longitudinal data collection and data collection with sensitive populations. As a result, features of these tools that accommodate more complex study designs, revision, and protections where appropriate are poorly advertised. Further, there is little formalized instruction in the use of these tools, so their functionality remains poorly understood outside of niche groups. Many developmentalists may therefore view these tools as unapproachable and see them as a barrier to adopting more transparent, robust practices. This workshop will provide brief tutorials on the use of these tools in the context of developmental research. Specifically, it will address the use of the Open Science Framework and AsPredicted.org for pre-registration of longitudinal designs (including steps for modifying and revising pre-registrations), the use of the Open Science Framework to share study materials, the use of data repositories to share data with appropriate protections, and the use of PsyArXiv to solicit feedback on preprints, discover unpublished literature, and share published work that may otherwise sit behind a paywall.


Author(s):  
Wikan Danar Sunindyo ◽  
Thomas Moser ◽  
Dietmar Winkler ◽  
Stefan Biffl

Stakeholders in Open Source Software (OSS) projects need to determine whether a project is likely to be sustained long enough to justify their investment in it. In an OSS project context, there are typically several data sources and OSS processes relevant to determining project health indicators. However, even within one project these data sources are often technically and/or semantically heterogeneous, which makes data collection and analysis tedious and error-prone. In this paper, the authors propose and evaluate a framework for OSS data analysis (FOSSDA), which enables the efficient collection, integration, and analysis of data from heterogeneous sources. The major results of the empirical studies are: (a) the framework is useful for integrating data from heterogeneous data sources effectively, and (b) project health indicators based on integrated data were found to be more accurate than analyses based on individual, non-integrated data sources.
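The paper's implementation details are not reproduced here, but the core idea of integrating heterogeneous sources, mapping source-specific records onto a common schema before computing a health indicator, can be sketched as follows. The field names and records are invented for illustration and are not FOSSDA's actual data model.

```python
from datetime import date

# Hypothetical records from two heterogeneous sources: a mailing-list
# export and a version-control log, with different field names and formats.
mailing_list = [
    {"from": "alice@example.org", "sent": "2021-03-01"},
    {"from": "bob@example.org", "sent": "2021-03-02"},
]
commit_log = [
    {"author_email": "alice@example.org", "timestamp": "2021-03-01T10:00:00"},
    {"author_email": "carol@example.org", "timestamp": "2021-03-03T09:30:00"},
]

def normalise(record, email_key, date_key):
    """Map a source-specific record onto a common (email, date) schema."""
    return {
        "email": record[email_key].lower(),
        "date": date.fromisoformat(record[date_key][:10]),
    }

events = [normalise(r, "from", "sent") for r in mailing_list] + \
         [normalise(r, "author_email", "timestamp") for r in commit_log]

# A simple health indicator: distinct contributors active across *all*
# integrated sources, rather than counted per source.
active_contributors = {e["email"] for e in events}
print(len(active_contributors))  # -> 3
```

Counting over the integrated event list is what lets the indicator see a contributor who is active on the mailing list but not in the repository, the kind of accuracy gain the paper reports for integrated analyses.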


2017
Author(s):  
Daniel Lakens

Running studies with high statistical power is a practical challenge when designing an experiment, because effect size estimates in psychology are often inaccurate. This challenge can be addressed by performing sequential analyses while data collection is still in progress. At an interim analysis, data collection can be stopped whenever the results are convincing enough to conclude that an effect is present, more data can be collected, or the study can be terminated whenever it is extremely unlikely that the predicted effect would be observed if data collection were continued. Such interim analyses can be performed while controlling the Type 1 error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs, where sample sizes are increased based on the observed effect size. The need for pre-registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and NHST are discussed. Sequential analyses, which are widely used in large-scale medical trials, provide an efficient way to perform high-powered, informative experiments. I hope this introduction will serve as a practical primer that allows researchers to incorporate sequential analyses in their research.
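One standard way to control the Type 1 error rate across interim looks, in the spirit described here, is an alpha-spending function. The sketch below uses the Lan-DeMets approximation to the O'Brien-Fleming boundary; this is a common choice for illustration, not necessarily the specific procedure the paper recommends.

```python
from scipy import stats

def obrien_fleming_spending(t, alpha=0.05):
    """Lan-DeMets alpha-spending approximation to the O'Brien-Fleming
    boundary: cumulative two-sided Type 1 error allowed to be 'spent'
    by information fraction t (0 < t <= 1)."""
    z = stats.norm.ppf(1 - alpha / 2)
    return 2 * (1 - stats.norm.cdf(z / t ** 0.5))

# Cumulative alpha spent at three equally spaced looks: very little is
# spent early, preserving most of the error rate for the final analysis.
for t in (1 / 3, 2 / 3, 1.0):
    print(round(obrien_fleming_spending(t), 5))
```

By construction, the alpha spent at t = 1 equals the overall alpha of .05, so the interim looks come at no cost to the final error rate.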


2021
Author(s):  
Daniel Richard Isbell

Collecting and analyzing data can be arduous, time-consuming labor. Our first instincts might not be to give the data away and reveal the steps behind the ‘magic’ of analyses. Nonetheless, sharing data and analysis steps increases the credibility and utility of our work, and ultimately contributes to a more efficient, cumulative science. Of course, recognizing the value of data and analysis sharing is one thing – actually doing the sharing is another. Sharing data and analyses is fraught with uncertainties (e.g., What should I share? What can I share? Will my data spreadsheet and analysis script even make sense to someone else?) and, at the end of the day, amounts to additional tasks to be completed. This chapter goes beyond persuading readers to share and presents answers to common questions, advice for best practices, and practical steps for sharing that can be integrated into your research workflow. Easy-to-use, free resources like R, RStudio, and the Open Science Framework are introduced for implementing recommended practices.
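The chapter itself introduces R, RStudio, and the Open Science Framework, but one practical sharing step, shipping a checksum manifest so recipients can verify that the shared data arrived intact, is language-agnostic. The Python sketch below is a hypothetical illustration; the directory and file names are invented.

```python
import csv
import hashlib
from pathlib import Path

def build_manifest(data_dir, manifest_path):
    """Write a CSV manifest (file name, size, SHA-256) for every file in
    data_dir, so recipients can verify the shared data arrived intact."""
    rows = []
    for path in sorted(Path(data_dir).iterdir()):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            rows.append({"file": path.name,
                         "bytes": path.stat().st_size,
                         "sha256": digest})
    with open(manifest_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "bytes", "sha256"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

# Example: create a toy data directory and checksum it
demo = Path("shared_data")
demo.mkdir(exist_ok=True)
(demo / "ratings.csv").write_text("id,score\n1,4\n2,5\n")
manifest = build_manifest(demo, demo / "MANIFEST.csv")
print([r["file"] for r in manifest])
```

Publishing the manifest alongside the data (e.g., in an OSF project) answers one of the chapter's uncertainties, "will my data spreadsheet even make sense to someone else?", at least at the level of file integrity.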


2021
Vol 2 (2)
pp. 63
Author(s):  
Akhmad Riandy Agusta ◽  
Ahmad Suriansyah ◽  
Punaji Setyosari

This study examines the effectiveness of the GAWI MANUNTUNG blended learning model for improving higher-order thinking skills. It combines two research methods: the development research method of Borg and Gall and an experimental method. The data were analysed using sequential data analysis to assess the feasibility of the model, and the model's effect on the dependent variables was analysed with a two-sample t-test and an N-gain test. The sample consisted of 40 students of SDN Karang Mekar 1 Banjarmasin. The results show that (1) the steps of the GAWI MANUNTUNG model comprise: Group, Analysis and observation, Wondering observation result, Intensive data collection, Making experiment on outdoor, Analysis the result by Negotiation of solution, Using Technology, Necessity intelligences development, Task Product Creation, Unity on presentation and role play, Network Tournament and Games, with a validation score of 4.43 and a validity percentage of 87.19%, meaning the GAWI MANUNTUNG model meets the criteria of being valid, reliable, and feasible to implement; (2) after six sessions, the GAWI MANUNTUNG model improved critical thinking skills by 100%, creative thinking by 90%, problem solving by 90%, logical thinking by 85%, and analytical thinking by 90%. It can be concluded that the GAWI MANUNTUNG blended learning model is feasible to use and can improve students' higher-order thinking skills.
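The N-gain test mentioned in the abstract is conventionally Hake's normalised gain. Assuming that convention and a maximum score of 100 (the abstract does not state the scale), the computation is simply:

```python
def n_gain(pre, post, max_score=100):
    """Hake's normalised gain: the fraction of the possible
    pre-to-post improvement that was actually achieved."""
    return (post - pre) / (max_score - pre)

# Conventional thresholds: g >= 0.7 high, 0.3 <= g < 0.7 medium, g < 0.3 low
print(n_gain(40, 85))  # -> 0.75, a "high" gain
```

The pre/post scores here are invented; the point is only that the gain is normalised by the room left for improvement, so students starting near the ceiling are not penalised.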


2022
Author(s):  
Margaret Moore

The purpose of this guide is to provide a detailed overview of everything researchers need to think about and do when conducting a lesion symptom mapping (LSM) analysis. The guide includes step-by-step instructions for data collection, lesion delineation, lesion normalisation, LSM, secondary analyses, results interpretation, and write-up. All original scripts and analysis tools referenced in this guide are openly available on the Open Science Framework.
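The guide's own scripts live on the Open Science Framework; as a stand-in, here is a toy mass-univariate LSM sketch in Python: at each voxel, patients with and without a lesion there are compared on the symptom score with an independent-samples t-test. The data, dimensions, and minimum-overlap threshold are all invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: 40 patients x 5 voxels (1 = voxel lesioned), with symptom
# scores driven by damage to voxel 2 only.
lesions = rng.integers(0, 2, size=(40, 5))
scores = 10 - 4 * lesions[:, 2] + rng.normal(0, 1, size=40)

def voxelwise_lsm(lesions, scores, min_n=5):
    """Mass-univariate LSM: at each voxel, compare symptom scores of
    lesioned vs. spared patients with an independent-samples t-test.
    Voxels with too few patients in either group are left as nan."""
    tvals = np.full(lesions.shape[1], np.nan)
    for v in range(lesions.shape[1]):
        damaged = scores[lesions[:, v] == 1]
        spared = scores[lesions[:, v] == 0]
        if len(damaged) >= min_n and len(spared) >= min_n:
            tvals[v] = stats.ttest_ind(damaged, spared).statistic
    return tvals

tvals = voxelwise_lsm(lesions, scores)
print(np.nanargmin(tvals))  # the voxel whose damage most lowers the score
```

A real analysis would additionally correct for multiple comparisons across voxels and control for lesion volume, which is why the guide devotes separate sections to secondary analyses and interpretation.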


2021
Vol 31 (3)
pp. 411-416
Author(s):  
Jana Uher

Given persistent problems (e.g., replicability), psychological research is increasingly scrutinised. Arocha (2021) critically analyses epistemological problems of positivism and of the common population-level statistics, which follow Galtonian rather than Wundtian nomothetic methodologies and therefore cannot explore individual-level structures and processes. Like most critics, however, he focuses only on data analyses. But the challenges of psychological data generation are still hardly explored, especially the necessity of distinguishing the phenomena under study from the means used to explore them (e.g., concepts, terms, methods). Widespread fallacies and insufficient consideration of the epistemological, theoretical, and methodological foundations of data generation, institutionalised in psychological jargon and in the popular rating-scale methods, entail serious problems in data analysis that are still largely overlooked, even in most proposals for improvement.

