A tradition going back to Karl R. Popper assesses the value of a statistical test primarily by its severity: was it an honest and stringent attempt to prove the theory wrong? For "error statisticians" such as Deborah Mayo (1996, 2018), and for frequentists more generally, severity is a key virtue of hypothesis tests. Conversely, failure to incorporate severity into statistical inference, as allegedly happens in Bayesian inference, counts as a major methodological shortcoming. Our paper pursues a twofold goal. First, we argue that the error-statistical explication of severity has substantive drawbacks: it neglects research context, lacks a connection to the specificity of predictions, and yields degrees of severity that are problematically similar to one-sided p-values. Second, we argue that severity matters for Bayesian inference via the value of specific, risky predictions: severity boosts the expected evidential value of a Bayesian hypothesis test. We illustrate severity-based reasoning in Bayesian statistics by means of a practical example and discuss its advantages and potential drawbacks.
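The similarity to one-sided p-values mentioned above can be made concrete with a minimal sketch (our own illustration, not taken from the paper). For a one-sided z-test of H0: mu <= mu0, the Mayo-style severity of the post-rejection claim "mu > mu1", evaluated at mu1 = mu0, coincides numerically with one minus the one-sided p-value; the data values below are purely illustrative.

```python
# Illustrative sketch: severity vs. one-sided p-value in a z-test.
# All numbers are hypothetical; only the standard library is used.
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity(xbar, mu1, sigma, n):
    """SEV(mu > mu1): probability of a result no larger than the
    observed mean xbar if the true mean were mu1."""
    return phi((xbar - mu1) / (sigma / sqrt(n)))

def one_sided_p(xbar, mu0, sigma, n):
    """One-sided p-value for H0: mu <= mu0 against H1: mu > mu0."""
    return 1.0 - phi((xbar - mu0) / (sigma / sqrt(n)))

# Hypothetical sample: observed mean 0.4, known sd 1, n = 25.
xbar, mu0, sigma, n = 0.4, 0.0, 1.0, 25
sev = severity(xbar, mu0, sigma, n)  # severity of "mu > mu0"
p = one_sided_p(xbar, mu0, sigma, n)
print(round(sev + p, 10))  # -> 1.0, since SEV(mu > mu0) = 1 - p
```

The identity holds because both quantities are tail probabilities of the same sampling distribution evaluated at the same point, which is exactly the structural overlap the paper flags as problematic.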