Bayesian inference of population prevalence

2020 ◽  
Author(s):  
Robin A. A. Ince ◽  
Jim W. Kay ◽  
Philippe G. Schyns

Abstract: Within neuroscience, psychology and neuroimaging, the most frequently used statistical approach is null-hypothesis significance testing (NHST) of the population mean. An alternative approach is to perform NHST within individual participants and then infer, from the proportion of participants showing an effect, the prevalence of that effect in the population. We propose a novel Bayesian method to estimate such population prevalence that offers several advantages over population mean NHST. This method provides a population-level inference that is currently missing from study designs with small participant numbers, such as in traditional psychophysics and in precision imaging. It delivers a quantitative estimate with associated uncertainty instead of reducing an experiment to a binary inference on a population mean. Bayesian prevalence is widely applicable to a broad range of studies in neuroscience, psychology and neuroimaging. Its emphasis on detecting effects within individual participants could also help address replicability issues in these fields.

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Robin AA Ince ◽  
Angus T Paton ◽  
Jim W Kay ◽  
Philippe G Schyns

Within neuroscience, psychology, and neuroimaging, the most frequently used statistical approach is null hypothesis significance testing (NHST) of the population mean. An alternative approach is to perform NHST within individual participants and then infer, from the proportion of participants showing an effect, the prevalence of that effect in the population. We propose a novel Bayesian method to estimate such population prevalence that offers several advantages over population mean NHST. This method provides a population-level inference that is currently missing from study designs with small participant numbers, such as in traditional psychophysics and in precision imaging. Bayesian prevalence delivers a quantitative population estimate with associated uncertainty instead of reducing an experiment to a binary inference. Bayesian prevalence is widely applicable to a broad range of studies in neuroscience, psychology, and neuroimaging. Its emphasis on detecting effects within individual participants can also help address replicability issues in these fields.
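The core computation described above — inferring population prevalence from the proportion of participants who show a within-participant significant effect — can be sketched as a simple grid posterior. This is a minimal illustration, not the paper's exact implementation: it assumes a uniform prior on prevalence, a within-participant false-positive rate alpha, and (as a simplifying assumption) perfect within-participant sensitivity beta = 1.

```python
import numpy as np

def prevalence_posterior(k, n, alpha=0.05, beta=1.0, grid_size=1001):
    # Posterior density over population prevalence gamma, given that
    # k of n participants were individually significant at false-positive
    # rate alpha. Sensitivity beta = 1 and the uniform prior on gamma are
    # illustrative assumptions.
    gamma = np.linspace(0.0, 1.0, grid_size)
    # Probability that a single participant tests significant:
    theta = np.clip(gamma * beta + (1.0 - gamma) * alpha, 1e-12, 1 - 1e-12)
    log_lik = k * np.log(theta) + (n - k) * np.log(1.0 - theta)
    post = np.exp(log_lik - log_lik.max())
    post /= post.sum() * (gamma[1] - gamma[0])   # normalise to a density
    return gamma, post

gamma, post = prevalence_posterior(k=18, n=20)
map_prev = gamma[np.argmax(post)]   # maximum a posteriori prevalence
```

With 18 of 20 participants significant, the posterior mode lies near (18/20 - alpha) / (1 - alpha) ≈ 0.89, and the full curve quantifies the uncertainty that a binary population-mean NHST discards.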


2020 ◽  
Vol 7 (6) ◽  
pp. 200231 ◽  
Author(s):  
Scott W. Yanco ◽  
Andrew McDevitt ◽  
Clive N. Trueman ◽  
Laurel Hartley ◽  
Michael B. Wunder

Science provides a method to learn about the relationships between observed patterns and the processes that generate them. However, inference can be confounded when an observed pattern cannot be clearly and wholly attributed to a hypothesized process. Over-reliance on traditional single-hypothesis methods (i.e. null hypothesis significance testing) has resulted in replication crises in several disciplines, and ecology exhibits features common to these fields (e.g. low-power study designs, questionable research practices, etc.). Considering multiple working hypotheses in combination with pre-data collection modelling can be an effective means to mitigate many of these problems. We present a framework for explicitly modelling systems in which relevant processes are commonly omitted, overlooked or not considered and provide a formal workflow for a pre-data collection analysis of multiple candidate hypotheses. We advocate for and suggest ways that pre-data collection modelling can be combined with consideration of multiple working hypotheses to improve the efficiency and accuracy of research in ecology.
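A pre-data-collection analysis of multiple candidate hypotheses, as advocated above, can be sketched as a simulation: generate data under each candidate generative model, apply the planned analysis, and check whether the design can actually discriminate between them. The two hypotheses, effect sizes and the AIC-based comparison below are illustrative assumptions, not the authors' workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(hypothesis, n=30, slope=3.0, sigma=0.5):
    # H1: the pattern responds linearly to a driver x; H2: noise only.
    x = rng.uniform(0.0, 1.0, n)
    mu = slope * x if hypothesis == "H1" else np.zeros(n)
    return x, mu + rng.normal(0.0, sigma, n)

def best_model(x, y):
    # Planned analysis: compare an intercept-only model against a
    # linear model by AIC (fit quality penalised by parameter count).
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)          # intercept-only
    b, a = np.polyfit(x, y, 1)
    rss1 = np.sum((y - (a + b * x)) ** 2)       # linear model
    aic0 = n * np.log(rss0 / n) + 2 * 1
    aic1 = n * np.log(rss1 / n) + 2 * 2
    return "H1" if aic1 < aic0 else "H2"

# Before collecting any data: how often would the planned analysis
# correctly recover H1 at the planned sample size?
recovery = sum(best_model(*simulate("H1")) == "H1" for _ in range(200)) / 200
```

A low recovery rate at this stage signals a low-power design — exactly the kind of problem the abstract argues should be caught before data collection, not after.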


2021 ◽  
Author(s):  
Saki Takahashi ◽  
Michael J Peluso ◽  
Jill Hakim ◽  
Keirstinne Turcios ◽  
Owen Janson ◽  
...  

Serosurveys are a key resource for measuring SARS-CoV-2 cumulative incidence. A growing body of evidence suggests that asymptomatic and mild infections (together making up over 95% of all infections) are associated with lower antibody titers than severe infections. Antibody levels also peak a few weeks after infection and decay gradually. We developed a statistical approach to produce adjusted estimates of seroprevalence from raw serosurvey results that account for these sources of spectrum bias. We incorporate data on antibody responses on multiple assays from a post-infection longitudinal cohort, along with epidemic time series to account for the timing of a serosurvey relative to how recently individuals may have been infected. We applied this method to produce adjusted seroprevalence estimates from five large-scale SARS-CoV-2 serosurveys across different settings and study designs. We identify substantial differences between reported and adjusted estimates of over two-fold in the results of some surveys, and provide a tool for practitioners to generate adjusted estimates with pre-set or custom parameter values. While unprecedented efforts have been launched to generate SARS-CoV-2 seroprevalence estimates over this past year, interpretation of results from these studies requires properly accounting for both population-level epidemiologic context and individual-level immune dynamics.
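The adjustment described above can be illustrated with a Rogan-Gladen-style correction in which the assay's effective sensitivity is the incidence-weighted average of its sensitivity as a function of time since infection. This is a deliberate simplification of the paper's full Bayesian approach, and all numbers below are made up for illustration.

```python
import numpy as np

def adjusted_seroprevalence(raw, sens_by_week, incidence_by_week, spec=0.995):
    # Weight the assay's time-since-infection sensitivity profile by the
    # epidemic curve, so a survey run long after the epidemic peak uses a
    # lower effective sensitivity (antibody waning).
    w = np.asarray(incidence_by_week, dtype=float)
    w /= w.sum()
    sens_eff = float(np.dot(w, np.asarray(sens_by_week, dtype=float)))
    # Standard Rogan-Gladen correction with the effective sensitivity:
    return (raw + spec - 1.0) / (sens_eff + spec - 1.0)

est = adjusted_seroprevalence(
    raw=0.10,                           # raw seropositive fraction
    sens_by_week=[0.95, 0.90, 0.80],    # sensitivity by weeks since infection
    incidence_by_week=[10, 20, 30],     # epidemic time series (illustrative)
)
```

Because waning pushes effective sensitivity below 1, the adjusted estimate exceeds the raw one — the direction of the "substantial differences" the abstract reports.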


Econometrics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 26 ◽  
Author(s):  
David Trafimow

There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them are satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, it is also easy to perform. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.
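For the single-mean case, the APP's core calculation reduces to choosing, before data collection, the sample size at which the sample mean has probability c of falling within f standard deviations of the population mean. A minimal sketch under a normal sampling distribution (the rounding-up is our choice, not necessarily the article's convention):

```python
import math
from statistics import NormalDist

def app_sample_size(f, c):
    # Sample size needed so that, with probability c, the sample mean
    # lies within f population standard deviations of the population
    # mean, assuming a normal sampling distribution of the mean.
    z = NormalDist().inv_cdf((1.0 + c) / 2.0)
    return math.ceil((z / f) ** 2)
```

For example, requiring 95% probability of landing within 0.4 standard deviations gives app_sample_size(0.4, 0.95), i.e. (1.96 / 0.4)^2 ≈ 24.01, rounded up to 25. Note that the decision is made entirely before seeing the data, which is what distinguishes the APP from post hoc significance testing.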


2013 ◽  
Vol 59 (4) ◽  
pp. 485-505 ◽  
Author(s):  
Jon E. Brommer

Abstract: Individual-based studies allow quantification of phenotypic plasticity in behavioural, life-history and other labile traits. The study of phenotypic plasticity in the wild can shed new light on two ultimate objectives: (1) whether plasticity itself can evolve or is constrained by its genetic architecture, and (2) whether plasticity is associated with other traits, including fitness (selection). I describe the main statistical approach for how repeated records of individuals and a description of the environment (E) allow quantification of variation in plasticity across individuals (IxE) and genotypes (GxE) in wild populations. Based on a literature review of life-history and behavioural studies on plasticity in the wild, I discuss the present state of the two objectives listed above. Few studies have quantified GxE of labile traits in wild populations, and it is likely that power to detect statistically significant GxE is lacking. Apart from the issue of whether it is heritable, plasticity tends to correlate with average trait expression (not fully supported by the few genetic estimates available) and may thus be evolutionarily constrained in this way. Individual-specific estimates of plasticity tend to be related to other traits of the individual (including fitness), but these analyses may be anti-conservative because they predominantly concern 'stats-on-stats'. Despite the increased interest in plasticity in wild populations, the putative lack of power to detect GxE in such populations hinders achieving general insights. I discuss possible steps to invigorate the field by moving away from simply testing for presence of GxE to analyses that 'scale up' to population-level processes and by the development of new behavioural theory to identify quantitative genetic parameters that can be estimated.
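The IxE idea above — individuals differing in the slope of their response to the environment E — can be made concrete with a small simulation. Real analyses use random-regression mixed models; the two-stage per-individual regressions below (the 'stats-on-stats' the abstract warns can be anti-conservative) and all variance values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_rep = 200, 10                        # individuals, repeated records

E = rng.uniform(-1.0, 1.0, (n_ind, n_rep))    # environmental covariate
intercepts = rng.normal(0.0, 1.0, n_ind)      # among-individual variance (I)
slopes = rng.normal(0.5, 0.3, n_ind)          # IxE: individuals differ in plasticity
y = (intercepts[:, None] + slopes[:, None] * E
     + rng.normal(0.0, 0.5, (n_ind, n_rep)))  # residual noise

# Per-individual regressions estimate each individual's reaction norm;
# the spread of the estimated slopes reflects IxE plus estimation error,
# which is why the two-stage approach overstates the true slope variance.
est_slopes = np.array([np.polyfit(E[i], y[i], 1)[0] for i in range(n_ind)])
```

With only 10 records per individual, est_slopes.std() noticeably exceeds the true slope standard deviation of 0.3 — a toy version of the power and bias issues the review raises for detecting IxE and GxE in the wild.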


2016 ◽  
Vol 11 (4) ◽  
pp. 551-554 ◽  
Author(s):  
Martin Buchheit

The first sport-science-oriented and comprehensive paper on magnitude-based inferences (MBI) was published 10 y ago in the first issue of this journal. While debate continues, MBI is today well established in sport science and in other fields, particularly clinical medicine, where practical/clinical significance often takes priority over statistical significance. In this commentary, some reasons why both academics and sport scientists should abandon null-hypothesis significance testing and embrace MBI are reviewed. Apparent limitations and future areas of research are also discussed. The following arguments are presented: P values and, in turn, study conclusions are sample-size dependent, irrespective of the size of the effect; significance does not inform on magnitude of effects, yet magnitude is what matters the most; MBI allows authors to be honest with their sample size and better acknowledge trivial effects; the examination of magnitudes per se helps provide better research questions; MBI can be applied to assess changes in individuals; MBI improves data visualization; and MBI is supported by spreadsheets freely available on the Internet. Finally, recommendations to define the smallest important effect and improve the presentation of standardized effects are presented.
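The central MBI computation — turning an effect estimate into chances that the true effect is beneficial, trivial or harmful relative to a smallest worthwhile change — can be sketched as follows. This is a minimal illustration assuming a normal sampling distribution for a standardised effect; the smallest worthwhile change of 0.2 is a conventional default used here as an assumption, not a value from this commentary.

```python
from statistics import NormalDist

def mbi_chances(effect, se, swc=0.2):
    # Chances that the true effect is beneficial (> +swc), trivial
    # (within +/- swc) or harmful (< -swc), given a point estimate
    # `effect` with standard error `se` and a smallest worthwhile
    # change `swc`, under a normal sampling distribution.
    nd = NormalDist(mu=effect, sigma=se)
    p_harm = nd.cdf(-swc)
    p_benefit = 1.0 - nd.cdf(swc)
    return p_benefit, 1.0 - p_benefit - p_harm, p_harm

p_benefit, p_trivial, p_harm = mbi_chances(effect=0.5, se=0.2)
```

Unlike a bare p-value, the three chances stay interpretable at any sample size: a small study simply yields wider, more honest probabilities rather than a categorical verdict.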


2021 ◽  
Author(s):  
Валерій Боснюк

For many years, psychological research has confirmed its results using null hypothesis significance testing (NHST) with specialised statistical criteria. In most cases, the p-value is treated as equivalent to the importance of the findings and the strength of the scientific evidence for a practical or theoretical effect. Such incorrect use and interpretation of the p-value calls the application of statistics into question and threatens the development of psychology as a science. Equating statistical conclusions with scientific conclusions, an exclusive orientation toward novelty in research, researchers' ritual attachment to the 0.05 significance level, and reliance on categorical "yes/no" statistical decisions mean that psychology accumulates only findings about the presence of an effect, without regard to its magnitude or practical value. This paper analyses the limitations of the p-value in interpreting the results of psychological research and the advantages of reporting effect sizes. Using effect sizes enables a shift from dichotomous to evaluative thinking, allows the value of results to be judged independently of the level of statistical significance, and supports more rational and better-grounded decisions. The position argued here is that, when formulating conclusions, the author of a scientific work should not restrict themselves to a single indicator of statistical significance. Meaningful conclusions should rest on a sensible balance between the p-value and other equally important parameters, one of which is effect size. An effect (a difference, relationship or association) can be statistically significant while its practical (clinical) value is negligible or trivial. "Statistically significant" does not mean "useful", "important", "valuable" or "substantial".
Attention to the magnitude of an observed effect size should therefore become obligatory for psychologists when interpreting research results.
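The effect-size reporting that this abstract calls for is, in the simplest two-group case, a standardised mean difference. A minimal sketch of Cohen's d with a pooled standard deviation (the data below are invented for illustration):

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    # Standardised mean difference with pooled SD (Cohen's d): an
    # effect-size measure that complements the p-value by quantifying
    # the magnitude of a difference, not just its detectability.
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * stdev(a) ** 2 +
                           (nb - 1) * stdev(b) ** 2) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

d = cohens_d([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

With a large enough sample, even a d near zero can reach p < 0.05 — the "statistically significant but practically trivial" situation the abstract warns against, which is why the magnitude should be reported alongside the p-value.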

