Bayesian Evidence
Recently Published Documents

TOTAL DOCUMENTS: 165 (five years: 55)
H-INDEX: 22 (five years: 6)

2021 ◽ Vol. 923 (1) ◽ pp. 120
Author(s): Fu-Heng Liang, Cheng Li, Niu Li, Shuang Zhou, Renbin Yan, ...

Abstract As hosts of living high-mass stars, Wolf-Rayet (WR) regions and WR galaxies are ideal objects for constraining the high-mass end of the stellar initial mass function (IMF). We construct a large sample of 910 WR galaxies/regions covering a wide range of stellar metallicity (from Z ∼ 0.001 to 0.03) by combining three catalogs of WR galaxies/regions previously selected from the SDSS and SDSS-IV/MaNGA surveys. We measure the equivalent width of the WR blue bump at ∼4650 Å for each spectrum. These are compared with predictions from the stellar evolutionary models Starburst99 and BPASS under different IMF assumptions (high-mass slope α of the IMF ranging from 1.0 to 3.3). Both single-star and binary evolution are considered. We also use a Bayesian inference code to perform full spectral fitting of the WR spectra, with stellar population spectra from BPASS as fitting templates, and then perform model selection among the different α assumptions based on Bayesian evidence. These analyses consistently lead to a positive correlation of the IMF high-mass slope α with stellar metallicity Z, i.e., the IMF is steeper (more bottom-heavy) at higher metallicities. Specifically, an IMF with α = 1.00 is preferred at the lowest metallicity (Z ∼ 0.001), while a Salpeter or even steeper IMF is preferred at the highest metallicity (Z ∼ 0.03). These conclusions hold even when binary population models are adopted.
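The evidence-based model-selection step can be sketched in a few lines: given a log-evidence ln Z for each candidate IMF slope α (e.g., from a nested-sampling run of the spectral fit), the preferred slope maximizes ln Z, and log-Bayes factors quantify the preference. A minimal sketch with placeholder values, not the paper's results:

```python
# Hypothetical log-evidences ln Z for spectral fits with different IMF
# high-mass slopes alpha (placeholder values, not the paper's results).
log_evidence = {1.00: -1423.7, 1.70: -1425.2, 2.35: -1430.9, 3.30: -1441.5}

# The preferred slope maximizes the evidence; the log-Bayes factor
# ln B = ln Z(best) - ln Z(alpha) quantifies how strongly it is preferred.
best_alpha = max(log_evidence, key=log_evidence.get)
for alpha in sorted(log_evidence):
    ln_bf = log_evidence[best_alpha] - log_evidence[alpha]
    print(f"alpha={alpha:4.2f}  lnZ={log_evidence[alpha]:8.1f}  ln B={ln_bf:5.2f}")
```

On the common reading of the Jeffreys scale, ln B above roughly 5 counts as strong evidence for the better-fitting slope.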


2021
Author(s): Jeroen P Jansen

Abstract Background: Distributional cost-effectiveness analysis (DCEA) has been introduced as an extension of conventional cost-effectiveness analysis to quantify health equity impacts. Although health disparities are recognized as an important concern, the typical analyses conducted to inform health technology assessment of a new intervention do not include a formal health equity impact evaluation or DCEA. One reason is that the clinical trials for new interventions frequently are not designed or powered to estimate treatment effects for the sub-populations across which equity impacts are to be analyzed. The objective of this paper is to discuss how gaps in evidence regarding equity-relevant subgroup effects for new and existing interventions can potentially be overcome with advanced Bayesian evidence synthesis methods to facilitate a credible model-based DCEA.
Methods: First, the evidence needs and challenges for a model-based DCEA are outlined. Next, alternative evidence synthesis methods are summarized, followed by an illustrative example of their implementation. The paper concludes with practical recommendations.
Results: The key evidence challenges for a DCEA relate to estimating relative treatment effects, owing to the lack of inclusion of relevant subgroups in the randomized controlled trials (RCTs), lack of access to individual patient data (IPD) for all trials, small subgroups resulting in uncertain effects, and reporting gaps. Advanced Bayesian evidence synthesis methods can help overcome these gaps by considering all relevant direct, indirect, and external evidence simultaneously. Methods discussed include (network) meta-analysis with shrinkage estimation, conventional (network) meta-regression analysis, multi-level (network) meta-regression analysis, and generalized evidence synthesis. For a new intervention for which only RCT evidence is available and no real-world data, estimates can be improved if the assumption of exchangeable subgroup effects, or the shared or exchangeable effect-modifier assumption among competing interventions, can be defended. Furthermore, formal expert elicitation is worthwhile to improve estimates.
Conclusion: This paper provides an overview of advanced evidence synthesis methods that may help overcome typical gaps in the evidence base needed to perform a model-based DCEA, along with practical recommendations. Future simulation studies are needed to assess the pros and cons of different methods under different data-gap scenarios.
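The shrinkage-estimation idea for sparse subgroups can be illustrated with a normal-normal hierarchical model: exchangeable subgroup effects are pulled toward the pooled mean in proportion to their imprecision. Below is a minimal empirical-Bayes sketch with hypothetical subgroup estimates and an assumed between-subgroup standard deviation; a full Bayesian analysis would also place a prior on τ rather than fixing it.

```python
import numpy as np

# Hypothetical subgroup treatment-effect estimates (log odds ratios) and
# standard errors from a single RCT with sparsely populated subgroups.
y = np.array([-0.55, -0.10, -0.85, -0.30])   # observed subgroup effects
s = np.array([0.40, 0.45, 0.60, 0.35])       # their standard errors
tau = 0.20                                    # assumed between-subgroup SD

# Exchangeable subgroup effects: theta_i ~ N(mu, tau^2), y_i ~ N(theta_i, s_i^2).
# Empirical-Bayes approximation with a flat prior on mu.
w = 1.0 / (s**2 + tau**2)
mu_hat = np.sum(w * y) / np.sum(w)            # pooled mean effect

shrink = s**2 / (s**2 + tau**2)               # shrinkage factor per subgroup
theta_hat = shrink * mu_hat + (1 - shrink) * y
print("pooled effect:", round(float(mu_hat), 3))
print("shrunken subgroup effects:", np.round(theta_hat, 3))
```

The noisiest subgroups are shrunk the most, which is what makes otherwise unusably uncertain subgroup effects workable inputs for a model-based DCEA.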


Author(s): Riko Kelter

Abstract Hypothesis testing is a central statistical method in psychology and the cognitive sciences. The problems of null hypothesis significance testing (NHST) and p values have been debated widely, yet few attractive alternatives exist. This article introduces an R package that implements the Full Bayesian Significance Test (FBST) for testing a sharp null hypothesis against its alternative via the e value. The statistical theory of the FBST was introduced more than two decades ago, and the FBST has since been shown to be a Bayesian alternative to NHST and p values with highly appealing theoretical and practical properties. The algorithm provided in the package is applicable to any Bayesian model as long as the posterior distribution can be obtained at least numerically. The core function of the package computes the Bayesian evidence against the null hypothesis, the e value. Additionally, p values based on asymptotic arguments can be computed, and rich visualizations for communicating and interpreting the results can be produced. Three examples of statistical procedures frequently used in the cognitive sciences demonstrate how to apply the FBST in practice using the package. Based on the success of the FBST in statistical science, the package should be of interest to a broad range of researchers and will hopefully encourage researchers to consider the FBST as a possible alternative when testing a sharp null hypothesis.
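The quantity at the heart of the FBST is straightforward to approximate from posterior draws: the evidence against a sharp null H0: θ = θ0 is the posterior mass of the "tangential set" where the posterior density exceeds its value at θ0, and the e value in favor of H0 is its complement. A minimal sketch (not the package's implementation), using a kernel density surrogate for the posterior:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal FBST sketch: evidence against the sharp null H0: theta = theta0
# is the posterior probability of the set where the posterior density
# exceeds its value at theta0 (the tangential set).
rng = np.random.default_rng(1)
posterior_draws = rng.normal(0.4, 0.2, size=20_000)  # stand-in posterior
theta0 = 0.0

kde = gaussian_kde(posterior_draws)            # surrogate posterior density
dens_at_draws = kde(posterior_draws)
dens_at_null = kde([theta0])[0]

ev_against = np.mean(dens_at_draws > dens_at_null)   # \bar{ev}
print("evidence against H0:", round(float(ev_against), 3))
print("e value in favor of H0:", round(float(1 - ev_against), 3))
```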


Author(s): Riko Kelter

Abstract The Full Bayesian Significance Test (FBST) and the Bayesian evidence value have recently received increasing attention across a variety of sciences, including psychology. Ly and Wagenmakers (2021) provided a critical evaluation of the method and concluded that it suffers from four problems, which they attribute mostly to the asymptotic relationship of the Bayesian evidence value to the frequentist p value. While Ly and Wagenmakers (2021) tackle an important question about the best way of conducting statistical hypothesis tests in the cognitive sciences, this paper shows that their arguments rest on a specific measure-theoretic premise. The identified problems hold only under a specific class of prior distributions, which are required only when adopting a Bayes factor test. The FBST explicitly avoids this premise, which resolves the problems in practical data analysis. In summary, the analysis leads to the more important question of whether precise point null hypotheses are realistic for scientific research; a shift towards the Hodges-Lehmann paradigm may be an appealing solution when there is doubt about the appropriateness of a precise hypothesis.
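The measure-theoretic premise can be made concrete: a Bayes factor for a sharp null requires a prior that assigns positive mass to the single point θ0 (a spike-and-slab prior), whereas the FBST is computed under one continuous prior throughout. A small illustration for a normal mean with known unit variance, all numbers hypothetical:

```python
import numpy as np
from scipy.stats import norm

# Illustrative contrast: BF01 for H0: theta = 0 needs a point mass at 0
# plus a continuous "slab" under H1; the FBST uses the slab prior alone.
n, ybar, sigma0 = 50, 0.25, 1.0        # sample size, sample mean, slab SD
se = 1.0 / np.sqrt(n)                  # ybar ~ N(theta, 1/n), unit variance

# Bayes factor BF01: point mass at 0 vs slab theta ~ N(0, sigma0^2).
bf01 = norm.pdf(ybar, 0, se) / norm.pdf(ybar, 0, np.sqrt(sigma0**2 + se**2))

# FBST under the continuous prior alone: posterior is N(m, v); evidence
# against H0 is the posterior mass where the density exceeds its value at 0.
v = 1.0 / (n + 1.0 / sigma0**2)
m = v * n * ybar
ev_against = 2 * norm.cdf(abs(m) / np.sqrt(v)) - 1

print(f"BF01 (needs point-mass prior): {bf01:.3f}")
print(f"FBST evidence against H0:      {ev_against:.3f}")
```

With these inputs the Bayes factor mildly favors the point null while the FBST reports substantial evidence against it, which is exactly the kind of divergence the debate turns on.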


2021 ◽ Vol. 103 (12)
Author(s): L. T. Hergt, W. J. Handley, M. P. Hobson, A. N. Lasenby

2021 ◽ Vol. 2021 (6)
Author(s): Junjie Cao, Demin Li, Jingwei Lian, Yuanfang Yue, Haijing Zhou

Abstract The general Next-to-Minimal Supersymmetric Standard Model (NMSSM) describes the properties of singlino-dominated dark matter (DM) with four independent parameters: the singlet-doublet Higgs coupling coefficient λ, the Higgsino mass μtot, the DM mass $m_{\tilde{\chi}_1^0}$, and the singlet Higgs self-coupling coefficient κ. The first three parameters strongly influence the DM-nucleon scattering rate, while κ usually affects the scattering only slightly. This characteristic implies that singlet-dominated particles may form a secluded DM sector. Under such a theoretical structure, the DM achieves the correct abundance by annihilating into a pair of singlet-dominated Higgs bosons, with κ adjusted accordingly. Its scattering with nucleons is suppressed when λv/μtot is small. This speculation is verified by a sophisticated scan of the theory's parameter space with various experimental constraints considered. In addition, the Bayesian evidence of the general NMSSM and that of the Z3-NMSSM are computed. It is found that, at the cost of introducing one additional parameter, the former is approximately 3.3 × 10³ times the latter. This result corresponds to 8.05 on the Jeffreys scale and implies that the considered experiments strongly prefer the general NMSSM over the Z3-NMSSM.
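As a quick check on the quoted numbers: the Jeffreys scale here is the natural log of the evidence ratio, so the Bayes factor converts as follows.

```python
import math

# Evidence ratio Z(general NMSSM) / Z(Z3-NMSSM) quoted in the abstract,
# converted to the natural-log Jeffreys scale.
bayes_factor = 3.3e3
ln_bf = math.log(bayes_factor)
print(f"ln B = {ln_bf:.2f}")  # ~8.10; the quoted 8.05 reflects the unrounded ratio
```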

