Citation Patterns Following a Strongly Contradictory Replication Result: Four Case Studies From Psychology

2021 ◽  
Vol 4 (3) ◽  
pp. 251524592110408
Author(s):  
Tom E. Hardwicke ◽  
Dénes Szűcs ◽  
Robert T. Thibault ◽  
Sophia Crüwell ◽  
Olmo R. van den Akker ◽  
...  

Replication studies that contradict prior findings may facilitate scientific self-correction by triggering a reappraisal of the original studies; however, the research community’s response to replication results has not been studied systematically. One approach for gauging responses to replication results is to examine how they affect citations to original studies. In this study, we explored postreplication citation patterns in the context of four prominent multilaboratory replication attempts published in the field of psychology that strongly contradicted and outweighed prior findings. Generally, we observed a small postreplication decline in the number of favorable citations and a small increase in unfavorable citations. This indicates only modest corrective effects and implies considerable perpetuation of belief in the original findings. Replication results that strongly contradict an original finding do not necessarily nullify its credibility; however, one might at least expect the replication results to be acknowledged and explicitly debated in subsequent literature. By contrast, we found substantial citation bias: The majority of articles citing the original studies neglected to cite relevant replication results. Of those articles that did cite the replication but continued to cite the original study favorably, approximately half offered an explicit defense of the original study. Our findings suggest that even replication results that strongly contradict original findings do not necessarily prompt a corrective response from the research community.


Author(s):  
Rebecca PRICE ◽  
Christine DE LILLE ◽  
Cara WRIGLEY ◽  
Kees DORST

There is an increasing need for organizations to adapt to rapid changes in society. This need requires organizations, and the leaders within them, to explore, recognize, build, and exploit new capabilities. Researching such capabilities has drawn attention from the design management research community in recent years. Research contributions have predominantly focused on perspectives of innovation and the strategic application of design, with the researcher distanced from context. Descriptive and evaluative case studies of past organizational leadership have been vital in building momentum for the design movement. However, there is now a need to progress toward prescriptive and explorative research perspectives that embrace context through practice and the simultaneous study of design. Therefore, the aim of this track is to lead and progress discussion on research methodologies that support the research community in developing explorative and prescriptive approaches for context-oriented organizational research. This track brings together a diverse group of international researchers and practitioners to fuel discussion on design approaches and the subsequent outcomes of prescriptive and explorative research methodologies.


2021 ◽  
Author(s):  
Neil McLatchie ◽  
Manuela Thomae

Thomae and Viki (2013) reported that increased exposure to sexist humour can increase rape proclivity among males, specifically those who score high on measures of Hostile Sexism. Here we report two pre-registered direct replications (N = 530) of Study 2 from Thomae and Viki (2013) and assess replicability via (i) statistical significance, (ii) Bayes factors, (iii) the small-telescope approach, and (iv) an internal meta-analysis across the original and replication studies. The original results were not supported by any of the approaches. Combining the original study and the replications yielded moderate evidence in support of the null over the alternative hypothesis, with a Bayes factor of B = 0.13. In light of the combined evidence, we encourage researchers to exercise caution before claiming that brief exposure to sexist humour increases males' proclivity toward rape, until further pre-registered and open research demonstrates the effect is reliably reproducible.
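The Bayes-factor comparison described above can be illustrated with a default JZS (Cauchy-prior) Bayes factor for a two-sample t test, following Rouder et al. (2009). This is a generic sketch, not the authors' analysis code, and the t value and sample sizes below are invented for illustration:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n1, n2, r=np.sqrt(2) / 2):
    """JZS Bayes factor BF10 for a two-sample t test (Rouder et al., 2009).
    Values below 1 favour the null hypothesis over the alternative."""
    nu = n1 + n2 - 2              # degrees of freedom
    n_eff = n1 * n2 / (n1 + n2)   # effective sample size

    # Marginal likelihood under H0 (effect size = 0), up to a shared constant.
    null_like = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # Marginal likelihood under H1: integrate over the Cauchy prior on effect
    # size, expressed via its inverse-gamma mixture representation in g.
    def integrand(g):
        scale = 1 + n_eff * g * r**2
        return (scale ** (-0.5)
                * (1 + t**2 / (scale * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** (-0.5) * g ** (-1.5) * np.exp(-1 / (2 * g)))

    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return alt_like / null_like

# A small t value from a large replication yields BF10 < 1, i.e. evidence
# favouring the null (hypothetical numbers, not the study's data):
print(jzs_bf10(t=0.5, n1=265, n2=265))
```

As in the abstract, a Bayes factor well below 1 (such as B = 0.13) is read as evidence for the null over the alternative hypothesis.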


Author(s):  
Jan Bosch ◽  
Helena Holmström Olsson ◽  
Ivica Crnkovic

Artificial intelligence (AI) and machine learning (ML) are increasingly broadly adopted in industry. However, based on well over a dozen case studies, we have learned that deploying industry-strength, production-quality ML models in systems proves to be challenging. Companies experience challenges related to data quality, design methods and processes, performance of models, as well as deployment and compliance. We learned that a new, structured engineering approach is required to construct and evolve systems that contain ML and deep learning (DL) components. In this chapter, the authors provide a conceptualization of the typical evolution patterns that companies experience when employing ML, as well as an overview of the key problems experienced by the companies that they have studied. The main contribution of the chapter is a research agenda for AI engineering that provides an overview of the key engineering challenges surrounding ML solutions and of the open items that need to be addressed by the research community at large.


2000 ◽  
Vol 24 (1) ◽  
pp. 380-420
Author(s):  
F. Rostas ◽  
P. L. Smith ◽  
K. A. Berrington ◽  
N. Feautrier ◽  
N. Grevesse ◽  
...  

In recognition of its special interdisciplinary character, IAU Commission 14 is linked directly to the Executive Committee. The Commission's role is to inform the astronomical community of new developments in the diverse fields of research which involve atoms and molecules. Conversely, it endeavors to sensitize the research community active in those fields to the specific needs of astronomy, especially concerning basic data and modeling tools. More generally, Commission 14 tries to foster long-term relations and collaborations between the two communities and, when necessary, to alert funding authorities to the specific needs of ground- and space-based astronomy for atomic and molecular data. This report is one of the Commission's main contributions to informing the astronomical community. Several meetings concerned, at least in part, with the need for and availability of atomic and molecular data for astrophysics were also sponsored or co-sponsored. In the last triennium, Commission 14 co-sponsored IAU Symposium 194, "Astrochemistry: From Molecular Cloud to Planetary Systems," held in Sogwipo (Korea) from Aug. 23 to 27, 1999 and organized by Commission 34. A Joint Discussion, JD1, on "Atomic and Molecular Data for Astrophysics: New Developments, Case Studies and Future Needs" has been planned for the XXIVth IAU General Assembly in Manchester (Aug. 7-19, 2000), co-sponsored by Commissions 15, 16, 29, 34, 36, 40 and 44. Several other Joint Discussions to be held at the Manchester General Assembly are co-sponsored by this Commission.


2020 ◽  
Vol 3 (3) ◽  
pp. 309-331 ◽  
Author(s):  
Charles R. Ebersole ◽  
Maya B. Mathur ◽  
Erica Baranski ◽  
Diane-Jo Bart-Plange ◽  
Nicholas R. Buttrick ◽  
...  

Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
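The effect sizes above are correlations aggregated across laboratories. One standard way to pool such correlations (a sketch of the generic technique, not the study's preregistered analysis, with invented per-laboratory numbers) is a fixed-effect average on Fisher's z scale:

```python
import numpy as np

def pool_correlations(rs, ns):
    """Fixed-effect pooled correlation via Fisher's z transform:
    z = arctanh(r), weighted by n - 3 (the inverse variance of z),
    then back-transformed with tanh."""
    zs = np.arctanh(np.asarray(rs, dtype=float))
    weights = np.asarray(ns, dtype=float) - 3
    z_bar = np.sum(weights * zs) / np.sum(weights)
    return np.tanh(z_bar)

# Hypothetical per-laboratory correlations for one protocol:
print(pool_correlations(rs=[0.02, 0.08, 0.05], ns=[300, 450, 520]))
```

Because larger laboratories get larger weights, the pooled r lands between the smallest and largest per-laboratory correlations, closer to the estimates from the bigger samples.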


2020 ◽  
Vol 9 (1) ◽  
Author(s):  
Linda Biesty ◽  
Pauline Meskell ◽  
Claire Glenton ◽  
Hannah Delaney ◽  
Mike Smalle ◽  
...  

Background: The COVID-19 pandemic has created a sense of urgency in the research community in its bid to contribute to the evidence required for healthcare policy decisions. With such urgency, researchers experience methodological challenges in maintaining the rigour and transparency of their work. With this in mind, we offer reflections on our recent experience of undertaking a rapid Cochrane qualitative evidence synthesis (QES). Methods: This process paper, using a reflexive approach, describes a rapid QES prepared during, and in response to, the COVID-19 pandemic. Findings: This paper reports the methodological decisions we made and the process we undertook. We place our decisions in the context of guidance offered in relation to rapid reviews and previously conducted QESs. We highlight some of the challenges we encountered in finding the balance between the time needed for thoughtfulness and comprehensiveness whilst providing a rapid response to an urgent request for evidence. Conclusion: The need for more guidance on rapid QES remains, but such guidance needs to be based on actual worked examples and case studies. This paper and the reflections offered may provide a useful framework for others to use and further develop.


2021 ◽  
pp. 173-190
Author(s):  
R. Barker Bausell

But what happens to investigators whose studies fail to replicate? The answer is complicated by the growing use of social media by scientists and the tenor of the original investigators' responses to the replicators. Alternative case studies are presented, including John Bargh's vitriolic outburst following the failure of his classic word-priming study to replicate, Amy Cuddy's unfortunate experience with power posing, and Matthew Vees's low-key response, in which he declined to aggressively disparage his replicators, complimented the replicators' interpretation of their replication, and neither defended his original study nor suggested that its findings might be wrong. In addition to such case studies, surveys on the subject suggest that there are normally no long-term deleterious career or reputational effects on investigators when a study fails to replicate, and that a reasoned (or no) response to a failed replication is the superior professional and affective solution.


2018 ◽  
Vol 47 (9) ◽  
pp. 594-605 ◽  
Author(s):  
Christina S. Chhin ◽  
Katherine A. Taylor ◽  
Wendy S. Wei

Despite the important role that replication studies play in building scientific evidence, recent reports show that few replications have been conducted in education. The goal of the current study was to examine how many efficacy and effectiveness research grants funded by the Institute of Education Sciences (IES) were replications, what types of replications they represented, and whether applicants explicitly stated their intent to conduct a replication. Data showed that IES has not funded any direct replications that duplicate all aspects of the original study, but almost half of the funded grant applications can be considered conceptual replications that vary one or more dimensions of a prior study. The majority of funded grant applications did not explicitly state an intent to conduct a replication.


2019 ◽  
Vol 9 (18) ◽  
pp. 3699
Author(s):  
Guosheng Xu ◽  
Shengwei Xu ◽  
Chuan Gao ◽  
Bo Wang ◽  
Guoai Xu

Permission-related issues in Android apps have been widely studied in our research community, but most previous studies have considered these issues from the perspective of app users. In this paper, we take a different angle and revisit permission-related issues from the perspective of app developers. First, we perform an empirical study investigating how we can help developers make better decisions about permission use during app development. With detailed experimental results, we show that many permission-related issues can be identified and fixed during the application development phase. To help developers identify and fix these issues, we develop PerHelper, an IDE plugin that automatically infers candidate permission sets, which guide developers to set permissions more effectively and accurately. We integrate permission-related bug detection into PerHelper and demonstrate its applicability and flexibility through case studies on a set of open-source Android apps.
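The candidate-permission inference described above can be pictured as a diff between the permissions an app declares and those implied by the APIs it actually calls. The mapping and helper below are a hypothetical illustration of that idea, not PerHelper's actual implementation; real tools derive the API-to-permission map from the Android framework itself:

```python
# Toy API-to-permission mapping; the entries are illustrative only.
API_PERMISSION_MAP = {
    "LocationManager.getLastKnownLocation": {"ACCESS_FINE_LOCATION"},
    "Camera.open": {"CAMERA"},
    "SmsManager.sendTextMessage": {"SEND_SMS"},
}

def audit_permissions(declared, used_apis):
    """Compare declared manifest permissions against the set implied
    by the APIs the app calls, flagging both kinds of mismatch."""
    required = set()
    for api in used_apis:
        required |= API_PERMISSION_MAP.get(api, set())
    return {
        "missing": required - declared,  # calls that would fail at runtime
        "unused": declared - required,   # over-privilege worth removing
    }

report = audit_permissions(
    declared={"CAMERA", "SEND_SMS", "READ_CONTACTS"},
    used_apis=["Camera.open", "LocationManager.getLastKnownLocation"],
)
print(report)  # missing ACCESS_FINE_LOCATION; SEND_SMS, READ_CONTACTS unused
```

The "missing" set corresponds to bugs a developer must fix before release, while the "unused" set is the over-privilege that permission-related studies commonly flag.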

