Transparent Reporting
Recently Published Documents

Total documents: 104 (last five years: 40)
H-index: 17 (last five years: 3)

2022, pp. 254-274
Author(s): Katie Sievers, Debbie Smith

Learners are increasingly turning to "alternative" education channels to upskill and reskill over the course of their lifetimes. In turn, the credentialing market is expanding quickly to supplement or replace training available through traditional pathways. Yet the relationship between emerging credentials and learners' career outcomes is underexplored. For credentials to pave more inclusive pathways to professions, organizations that issue credentials need to gather data about career outcomes, leverage those data to enhance their programs, and promote the outcomes transparently. This chapter explores three approaches to reporting outcomes and provides actionable recommendations for implementing transparent reporting strategies. If implemented, the suggested approaches could ultimately help enhance understanding of, trust in, and economic support for alternative credentials.


Author(s): Malou E. Gelderblom, Kelly Y.R. Stevens, Saskia Houterman, Steven Weyers, Benedictus C. Schoot

Author(s): Maria Kostromitina, Luke Plonsky

Abstract: Elicited imitation tasks (EITs) have been proposed and examined as a practical measure of second language (L2) proficiency. This study aimed to provide an updated and comprehensive view of the relationship between EITs and other proficiency measures. Toward that end, 46 reports were retrieved, contributing 60 independent effect sizes (Pearson's r) that were weighted and averaged. Several EIT features were also examined as potential moderators. The results portray the EIT as a generally consistent measure of L2 proficiency (r = .66). Among other moderators, longer EIT stimuli were associated with stronger correlations. Overall, the findings support the use of EITs as a means to greater consistency and practicality in measuring L2 proficiency. In our Discussion section, we highlight the need for more transparent reporting and provide empirically grounded recommendations for EIT design and for further research into EIT development.
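As context for how correlation effect sizes are typically "weighted and averaged" in a meta-analysis of this kind, the sketch below aggregates Pearson's r values via Fisher's z transformation with sample-size-based inverse-variance weights. This is a common default procedure, not necessarily the exact method used in the study; the function names and sample data are illustrative.

```python
import math

def fisher_z(r):
    """Fisher's z transformation stabilizes the sampling variance of r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform an average z to the correlation scale."""
    return math.tanh(z)

def weighted_mean_r(effects):
    """Average (r, n) pairs: each Fisher's z is weighted by n - 3,
    the inverse of its sampling variance 1 / (n - 3)."""
    num = sum((n - 3) * fisher_z(r) for r, n in effects)
    den = sum(n - 3 for r, n in effects)
    return inverse_fisher_z(num / den)

# Hypothetical EIT-proficiency correlations with their sample sizes.
effects = [(0.72, 60), (0.58, 120), (0.66, 45)]
print(round(weighted_mean_r(effects), 2))
```

Averaging on the z scale before back-transforming avoids the bias that comes from averaging bounded r values directly.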


2021, Vol. 49 (8), pp. 1361-1362
Author(s): David J. Wallace, Lori Shutter, Naudia Jonassaint

2021

In this podcast we speak to Professor Henrik Larsson, Professor of Psychiatric Epidemiology at Örebro University and the Karolinska Institute in Sweden, and Editor-in-Chief of ACAMH's new journal, JCPP Advances.


2021
Author(s): Timothy John Luke, Karl Ask, Ebba Magnusson, Sofia Calderon, Erik Mac Giolla

Amit et al. (2013) concluded that social distance can influence communication preferences: People prefer communicating with closer others using pictures (which are more concrete) and more distant others using words (which are more abstract). We conducted a high-powered (N = 988) preregistered replication of Amit et al. (2013, Experiment 2) and extended the design by manipulating the presence of a potential confound we detected when examining the original instructions. The original effect successfully replicated using the original instructions but did not replicate after the removal of the confound. Moreover, we demonstrate that the effect obtained with the original instructions likely relies on a different mechanism (comfort with sending personal pictures to close and distant contacts) than that posited in the original study (preference for concrete and abstract communication). These results cast doubt on the original interpretation and highlight the importance of transparent reporting standards in research.


2021, Vol. 118 (17), pp. e2103238118
Author(s): Malcolm Macleod, Andrew M. Collings, Chris Graf, Veronique Kiermer, David Mellor, ...

Author(s): Reza Norouzian

Abstract: There has recently been a surge of interest in improving the replicability of second language (L2) research. However, less attention has been paid to replicability in the context of L2 meta-analyses. I argue that conducting interrater reliability (IRR) analyses is a key step toward improving the replicability of L2 meta-analyses. To that end, I first discuss the foundations of IRR in the context of meta-analytic research. Second, I introduce two IRR measures, the S index and Specific Agreement, which aid in improving the replicability of meta-analytic research. Third, I offer a flexible R program, meta_rate, to facilitate IRR analyses for L2 meta-analyses. Fourth, I apply the program to an actual L2 meta-analytic coding sheet to demonstrate the practical use of the IRR methods discussed. Finally, I provide interpretive guidelines to assist both L2 meta-analysts and journals with the transparent reporting of IRR findings.
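To make the two measures concrete, here is a minimal sketch of how the S index (Bennett et al.'s chance-corrected agreement) and category-specific agreement can be computed for two coders assigning items to nominal categories. This is not the meta_rate program itself, whose interface is not reproduced here; the data and function names are hypothetical.

```python
from collections import Counter

def s_index(coder1, coder2, k):
    """Bennett et al.'s S: observed agreement corrected for the 1/k
    agreement expected if k categories were used uniformly at random."""
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
    return (p_o - 1 / k) / (1 - 1 / k)

def specific_agreement(coder1, coder2, category):
    """Agreement specific to one category:
    2 * (items both coders put in it) / (coder1 uses + coder2 uses)."""
    both = sum(a == b == category for a, b in zip(coder1, coder2))
    uses = Counter(coder1)[category] + Counter(coder2)[category]
    return 2 * both / uses if uses else float("nan")

# Hypothetical codings of 8 primary studies into 3 design categories.
c1 = ["RCT", "quasi", "RCT", "corr", "RCT", "quasi", "corr", "RCT"]
c2 = ["RCT", "quasi", "corr", "corr", "RCT", "RCT", "corr", "RCT"]
print(round(s_index(c1, c2, k=3), 2))               # 0.62, chance-corrected
print(round(specific_agreement(c1, c2, "RCT"), 2))  # 0.75, "RCT" codes only
```

Reporting the category-specific values alongside the overall index shows readers where coders diverged, which is the kind of transparency the abstract argues for.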

