Young children's sleep difficulties: Are mainstream research findings supported by more recent evidence?

2021 ◽  
Vol 9 (8) ◽  
pp. 334-339
Author(s):  
Carole Sutton

A substantial body of evidence about how to support parents with their child's sleep difficulties has been published, and this can inform the practice of health visitors and others who work with the families concerned. However, does more recently published research in this field support or question the findings of mainstream studies? This article examines how some recent studies support and develop earlier evidence, while others open new fields of research or challenge official guidance in new ways.

2019 ◽  
Vol 18 (1) ◽  
pp. 1
Author(s):  
Antonio Marcos Andrade

In 2005, the Greek researcher John Ioannidis, a professor at Stanford University, published an article in PLOS Medicine entitled "Why most published research findings are false" [1]. One of the pioneers of so-called "meta-science", the discipline that analyses the work of other scientists, he assessed whether researchers are respecting the fundamental rules that define good science. At the time, the paper was received with astonishment and indignation by many researchers, because it called the credibility of science into question.
For many scientists, this has happened because the way knowledge is produced has changed, to the point that it would be almost unrecognisable to the great geniuses of past centuries. In the past, data were analysed in their raw state and authors went to the academies to reproduce their experiments in front of everyone; this has been lost now that studies are based on millions of sheets of data. Another important guarantee of the reliability of findings was that scientists, regardless of their titles or the relevance of their previous discoveries, had to demonstrate new findings before their peers, who in turn replicated them in their own laboratories before the discovery was given credence. Today, however, these guarantees are being forgotten, which calls into question the validity of many studies in the health field.
Concerned with the low quality of current work, a group of researchers met in 2017 and drafted a manifesto that has just been published in the British Medical Journal, "Evidence Based Medicine Manifesto for Better Health Care" [2]. The document is an initiative to improve the quality of health evidence. It discusses the possible causes of poor scientific reliability and presents some alternatives for correcting the current scenario. According to its authors, the problems are present at different stages of research:
Objective-setting stage - Useless objectives. Much of what is produced has neither scientific nor clinical impact, because researchers are more interested in producing a large number of articles than in generating knowledge. Almost 85% of studies generate no direct benefit to humanity.
Study design stage - Studies with undersized samples, which do not protect against random error, and methods that do not prevent systematic error (biased sample selection, lack of proper randomisation, confounding, overly open-ended outcomes). Around 35% of researchers admit to having constructed their methods in a biased way.
Data analysis stage - Thirty-five per cent of researchers admit to inadequate practices when analysing data. Many acknowledge running several analyses simultaneously and turning those that reach statistical significance into the objectives of the paper. Journals also share the blame in this process, since papers with positive results are accepted about twice as often as papers with negative results.
Peer review stage - Many health reviewers have not been trained to recognise potential systematic and random errors in manuscripts.
In short, researchers and scientific journals need to reflect on this. Only then will we have higher-quality evidence, adequate statistical estimates, well-developed critical and analytical thinking, and prevention of the most common cognitive biases.
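The data-analysis problem described above, running many analyses and keeping only those that reach significance, can be illustrated with a short simulation. The sketch below is my illustration rather than anything from the article; the parameter choices (20 outcomes, 30 participants per group, a 5% threshold) are assumptions. It shows how the chance of at least one spuriously "significant" result grows far beyond the nominal 5% when several independent tests are run on data with no real effect.

```python
# Illustrative sketch (not from the article): testing many outcomes on
# null data inflates the chance of at least one "significant" finding.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_simulations = 5_000   # hypothetical number of simulated studies
n_outcomes = 20         # hypothetical number of outcomes tested per study
n_per_group = 30        # hypothetical sample size per group
alpha = 0.05

false_positive_studies = 0
for _ in range(n_simulations):
    # Both groups are drawn from the same distribution: no true effect exists.
    group_a = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    group_b = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue
    if (p_values < alpha).any():        # "pick the significant outcome"
        false_positive_studies += 1

print(f"Studies with >=1 spurious finding: "
      f"{false_positive_studies / n_simulations:.2%}")
# Expected to be close to 1 - 0.95**20, roughly 64%, far above the nominal 5%.
```

Under these assumptions, roughly two studies in three would have at least one reportable "positive" result despite there being no effect at all, which is the mechanism behind the inflated false-positive rates the manifesto warns about.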


Author(s):  
Petah Atkinson ◽  
Marilyn Baird ◽  
Karen Adams

Yarning as a research method is grounded in an Aboriginal culturally specified process. Relationality is significant to the Research Yarn; however, it is a feature missing from published research findings. This article aims to address this gap. The research question was: what can an analysis of Social and Family Yarning tell us about the relationality that underpins a Research Yarn? Participant recruitment occurred using convenience sampling, and data collection involved the Yarning method. Data analysis followed five steps, featuring Collaborative Yarning and Mapping. Commonality existed between the researcher and participants, predominantly through experiences of being part of an Aboriginal community, via Aboriginal organisations and Country. This suggests shared explicit and tacit knowledge and the generation of thick data. Researchers should report on their experience with Yarning, the types of Yarning they are using, and the relationality generated from the Social, Family and Research Yarn.


CHANCE ◽  
2005 ◽  
Vol 18 (4) ◽  
pp. 40-47 ◽  
Author(s):  
John P. A. Ioannidis

2015 ◽  
Vol 12 (4) ◽  
pp. 445-446 ◽  
Author(s):  
Ding Ding ◽  
Klaus Gebel ◽  
Becky Freeman ◽  
Adrian E. Bauman

Media reporting of published research can increase the profile and reach of new scientific findings. Dissemination is an important part of research, and media reporting can catalyze this process. In many areas, including health-related research, policymakers often rely on the media for information and guidance. Furthermore, media reports can influence the scientific community and clinicians.1,2 However, despite this potentially beneficial role as a bridge between scientists and the public, misleading information can cause controversy, confusion, and even harm.3


2020 ◽  
Author(s):  
Mark Rubin

Preregistration entails researchers registering their planned research hypotheses, methods, and analyses in a time-stamped document before they undertake their data collection and analyses. This document is then made available with the published research report to allow readers to identify discrepancies between what the researchers originally planned to do and what they actually ended up doing. This historical transparency is supposed to facilitate judgments about the credibility of the research findings. The present article provides a critical review of 17 of the reasons behind this argument. The article covers issues such as HARKing, multiple testing, p-hacking, forking paths, optional stopping, researchers’ biases, selective reporting, test severity, publication bias, and replication rates. It is concluded that preregistration’s historical transparency does not facilitate judgments about the credibility of research findings when researchers provide contemporary transparency in the form of (a) clear rationales for current hypotheses and analytical approaches, (b) public access to research data, materials, and code, and (c) demonstrations of the robustness of research conclusions to alternative interpretations and analytical approaches.
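One of the practices named above, optional stopping, can be made concrete with a small simulation. The sketch below is illustrative only and is not drawn from the article; the sample sizes, step size, and 5% threshold are assumptions. It repeatedly peeks at a t-test while data accumulate under a true null effect and stops as soon as p < .05, which inflates the false-positive rate well above the nominal level.

```python
# Illustrative sketch of optional stopping under a true null effect:
# peeking at the p-value as data accumulate and stopping at "significance"
# inflates the false-positive rate above the nominal 5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

n_simulations = 2_000      # assumed number of simulated studies
initial_n = 10             # assumed starting sample size per group
max_n = 100                # assumed maximum sample size per group
step = 10                  # assumed participants added per group per "peek"
alpha = 0.05

false_positives = 0
for _ in range(n_simulations):
    group_a = list(rng.normal(0, 1, initial_n))
    group_b = list(rng.normal(0, 1, initial_n))
    while len(group_a) <= max_n:
        p = stats.ttest_ind(group_a, group_b).pvalue
        if p < alpha:                      # stop as soon as the test "works"
            false_positives += 1
            break
        group_a.extend(rng.normal(0, 1, step))
        group_b.extend(rng.normal(0, 1, step))

print(f"False-positive rate with optional stopping: "
      f"{false_positives / n_simulations:.2%}")   # typically well above 5%
```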


2020 ◽  
Vol 44 (1-2) ◽  
pp. 1-2
Author(s):  
Harrison Dekker ◽  
Amy Riegelman

As guest editors, we are excited to publish this special double issue of IASSIST Quarterly. The topics of reproducibility, replicability, and transparency have been addressed in past issues of IASSIST Quarterly and at the IASSIST conference, but this double issue is entirely focused on these issues. In recent years, efforts "to improve the credibility of science by advancing transparency, reproducibility, rigor, and ethics in research" have gained momentum in the social sciences (Center for Effective Global Action, 2020). While few question the spirit of the reproducibility and research transparency movement, it faces significant challenges because it goes against the grain of established practice. We believe the data services community is in a unique position to help advance this movement given our data and technical expertise, training and consulting work, international scope, established role in data management and preservation, and more.
As evidence of the movement, several initiatives exist to support research reproducibility infrastructure and data preservation efforts:
Center for Open Science (COS) / Open Science Framework (OSF)[i]
Berkeley Initiative for Transparency in the Social Sciences (BITSS)[ii]
CUrating for REproducibility (CURE)[iii]
Project Tier[iv]
Data Curation Network[v]
UK Reproducibility Network[vi]
While many new initiatives have launched in recent years, we know that, well before the phrase "reproducibility crisis" came into common use and Ioannidis published the essay "Why Most Published Research Findings Are False" (Ioannidis, 2005), the data services community was already supporting reproducibility in a variety of ways (e.g., data management, data preservation, metadata standards) through well-established consortiums such as the Inter-university Consortium for Political and Social Research (ICPSR).
The articles in this issue address several very important aspects of reproducible research:
Identification of barriers to reproducibility and solutions to such barriers
Evidence synthesis as related to transparent reporting and reproducibility
Reflection on how information professionals, researchers, and librarians perceive the reproducibility crisis and how they can partner to help solve it
The issue begins with "Reproducibility literature analysis", which looks at existing resources and literature to identify barriers to reproducibility and potential solutions. The authors have compiled a comprehensive list of resources with annotations that include definitions of key concepts pertinent to the reproducibility crisis. The next article addresses data reuse from the perspective of a large research university. The authors examine instances of both successful and failed data reuse and identify best practices for librarians interested in conducting research involving the common forms of data collected in an academic library. Systematic reviews are a research approach that involves the quantitative and/or qualitative synthesis of data collected through a comprehensive literature review. "Methods reporting that supports reader confidence for systematic reviews in psychology" looks at the reproducibility of electronic literature searches reported in psychology systematic reviews. A fundamental challenge in reproducing or replicating computational results is the need for researchers to make available the code used in producing those results, but sharing code and having it run correctly for another user can present significant technical challenges. In "Reproducibility, preservation, and access to research with Reprozip, Reproserver" the authors describe open source software that they are developing to address these challenges. Taking a published article and attempting to reproduce the results is an exercise that is sometimes used in academic courses to highlight the inherent difficulty of the process. The final article in this issue, "ReprohackNL 2019: How libraries can promote research reproducibility through community engagement", describes an innovative library-based variation on this exercise.
Harrison Dekker, Data Librarian, University of Rhode Island
Amy Riegelman, Social Sciences Librarian, University of Minnesota
References
Center for Effective Global Action (2020) About the Berkeley Initiative for Transparency in the Social Sciences. Available at: https://www.bitss.org/about (accessed 23 June 2020).
Ioannidis, J.P. (2005) 'Why most published research findings are false', PLoS Medicine, 2(8), p. e124. doi: https://doi.org/10.1371/journal.pmed.0020124
[i] https://osf.io
[ii] https://www.bitss.org/
[iii] http://cure.web.unc.edu
[iv] https://www.projecttier.org/
[v] https://datacurationnetwork.org/
[vi] https://ukrn.org
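The code-sharing challenge mentioned above, getting someone else's analysis code to run correctly, often comes down to capturing the software environment alongside the code. As an illustrative sketch (not drawn from the ReproZip article; the file name, seed, and stand-in analysis are hypothetical), the Python snippet below records the interpreter version, installed package versions, and a fixed random seed next to an analysis result so that a later reader can attempt to reconstruct the run.

```python
# Hedged sketch: record the computational environment alongside a result
# so another researcher can attempt to reproduce it. The output path and
# the stand-in "analysis" are illustrative assumptions, not part of the issue.
import json
import platform
import random
import sys
from importlib import metadata

SEED = 20200623  # fix the seed so the stand-in analysis is deterministic
random.seed(SEED)
result = sum(random.random() for _ in range(1_000)) / 1_000  # stand-in analysis

environment = {
    "python_version": sys.version,
    "platform": platform.platform(),
    "random_seed": SEED,
    "packages": {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
    },
    "result": result,
}

# Write a machine-readable provenance record next to the analysis output.
with open("environment_snapshot.json", "w", encoding="utf-8") as fh:
    json.dump(environment, fh, indent=2, sort_keys=True)

print(f"Mean of 1000 uniform draws (seed {SEED}): {result:.6f}")
```

Tools such as ReproZip automate a much richer version of this idea by tracing a run and packaging the files and dependencies it touches; the snippet above only shows the minimal provenance information a reproduction attempt typically needs.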


