optional stopping
Recently Published Documents

TOTAL DOCUMENTS: 63 (FIVE YEARS: 18)
H-INDEX: 12 (FIVE YEARS: 2)

2021
Author(s): Hannah Tickle, Konstantinos Tsetsos, Maarten Speekenbrink, Christopher Summerfield

Author(s): Jorge N. Tendeiro, Henk A. L. Kiers, Don van Ravenzwaaij

2021
Vol 4
Author(s): Markus Loecher

The connection between optimal stopping times of American options and multi-armed bandits is the subject of active research. This article investigates the effects of optional stopping in a particular class of multi-armed bandit experiments, which randomly allocate observations to arms in proportion to the Bayesian posterior probability that each arm is optimal (Thompson sampling). The interplay between optional stopping and prior mismatch is examined. We propose a novel partitioning of regret into peri- and post-testing components, and further show a strong dependence of the parameters of interest on the assumed prior probability density.
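As an illustration of the allocation rule named in the abstract, the sketch below runs Thompson sampling on two hypothetical Bernoulli arms (the arm means, prior, and horizon are my choices, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.5, 0.6])   # hypothetical arm success rates
alpha = np.ones(2)                  # Beta(1, 1) prior on each arm
beta = np.ones(2)

for t in range(1000):
    # Thompson sampling: draw one sample from each arm's Beta posterior
    # and allocate the next observation to the arm with the largest draw,
    # i.e. in proportion to its posterior probability of being optimal.
    draws = rng.beta(alpha, beta)
    arm = int(np.argmax(draws))
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

pulls = alpha + beta - 2            # observations allocated per arm
```

Because allocation tracks the posterior, the better arm accumulates most of the observations over time; a prior that mismatches the true arm means (the paper's concern) skews this allocation early on.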


2021
Author(s): James Elsey

Producing compelling and trustworthy results relies upon performing well-powered studies with low rates of misleading evidence. Yet resources are limited, and the sample sizes required to achieve acceptable power in typical fixed-N designs may be disconcertingly large. 'Sequential', 'optional stopping', or 'interim' designs – in which results may be checked at interim points and a decision made as to whether or not to continue data collection – provide one means by which researchers may achieve high power and low false-positive rates with less of a resource burden. Sequential analyses have received considerable attention in both frequentist and Bayesian hypothesis-testing approaches, but fewer approachable resources are available for those wishing to use Bayesian estimation. In this tutorial, we cover a general process for performing power analyses of fixed and sequential designs using Bayesian estimation: simulating data, running regressions in parallel to reduce time requirements, choosing different stopping criteria and data-collection sequences, and calculating observed power and rates of misleading evidence. We conclude with a discussion of some limitations and possible extensions of the presented approach.
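The simulate-then-check workflow described in the abstract can be sketched without the regression machinery. The toy below (my construction, not the tutorial's code) uses a conjugate normal model with known sigma and a flat prior, checks the posterior at a few interim sample sizes, and stops once the posterior probability of a positive effect exceeds a threshold; power is the fraction of simulated studies that stop with that decision:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def sequential_power(effect, sigma=1.0, checks=(50, 100, 150, 200),
                     n_sims=500, threshold=0.95):
    """Simulate a sequential design: at each interim sample size in
    `checks`, stop if the posterior probability of a positive effect
    exceeds `threshold`. With a flat prior and known sigma, the
    posterior for the mean is N(sample mean, sigma^2 / n).
    Returns (power, average sample size at stopping)."""
    hits, stop_ns = 0, []
    for _ in range(n_sims):
        data = rng.normal(effect, sigma, size=max(checks))
        for n in checks:
            post_prob = phi(data[:n].mean() / (sigma / math.sqrt(n)))
            if post_prob > threshold:
                hits += 1
                stop_ns.append(n)
                break
        else:
            stop_ns.append(max(checks))
    return hits / n_sims, sum(stop_ns) / len(stop_ns)

power, avg_n = sequential_power(effect=0.3)   # well-powered case
false_pos, _ = sequential_power(effect=0.0)   # rate of misleading evidence
```

The average stopping sample size falls well below the fixed-N maximum, which is the resource saving the tutorial targets; running `effect=0.0` shows how repeated looks inflate the misleading-evidence rate unless the stopping criterion is chosen to control it.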


2021
Author(s): Zoltan Dienes

Bayes factors are a useful tool for researchers in the behavioural and social sciences, partly because they can provide evidence for no effect relative to the sort of effect expected. By contrast, a non-significant result does not provide evidence for the H0 tested. So, if non-significance does not in itself count against a theory predicting an effect, how could a theory fail a test? Bayes factors provide a measure of evidence from first principles. A severe test is one that is likely to obtain evidence against a theory if it were false – that is, to obtain an extreme Bayes factor against the theory. Bayes factors show why hacking and cherry-picking degrade evidence, how to deal with multiple-testing situations, and how optional stopping is consistent with severe testing. Further, informed Bayes factors can be used to link a theory tightly to how that theory is tested, so that the measured evidence genuinely relates to the theory.
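A minimal numeric sketch of how a Bayes factor can quantify evidence for no effect (a simplification of my own, not Dienes's code: normal data with known sigma, a point null versus a normal "informed" prior on the effect):

```python
import math

def bf10(mean, n, sigma=1.0, tau=0.5):
    """Bayes factor for H1: mu ~ N(0, tau^2) versus H0: mu = 0, given a
    sample mean of n observations with known sigma. Under each
    hypothesis the sample mean is normally distributed, so the Bayes
    factor is a ratio of two normal densities evaluated at the data."""
    def normal_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    se2 = sigma ** 2 / n
    return normal_pdf(mean, tau ** 2 + se2) / normal_pdf(mean, se2)

# A sample mean near zero yields BF10 < 1 (here about 0.20): positive
# evidence FOR the null, which a non-significant p-value cannot provide.
evidence_for_null = bf10(0.0, 100)
evidence_for_effect = bf10(0.5, 100)
```

The choice of `tau` encodes "the sort of effect expected", which is exactly the link between theory and test that the abstract emphasises for informed Bayes factors.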


2021
Vol 6 (4)
pp. 369
Author(s): Mingshang Hu, Shige Peng

In this paper, we extend the definition of conditional $G$-expectation to a larger space on which the dynamical consistency still holds. We can consistently define, by taking the limit, the conditional $G$-expectation for each random variable $X$ which is the downward limit (respectively, upward limit) of a monotone sequence $\{X_{i}\}$ in $L_{G}^{1}(\Omega)$. To accomplish this procedure, some careful analysis is needed. Moreover, we present a suitable definition of stopping times and obtain the optional stopping theorem. We also provide some basic and interesting properties of the extended conditional $G$-expectation.
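In standard notation (illustrative only; the symbol $\hat{\mathbb{E}}_t$ for the conditional $G$-expectation is my choice, and this is not the paper's exact statement), the limiting construction described in the abstract reads:

```latex
% For X the downward limit of a monotone sequence {X_i} in L_G^1(Omega),
% define the extended conditional G-expectation by the limit
\[
  \hat{\mathbb{E}}_t[X] \;:=\; \lim_{i \to \infty} \hat{\mathbb{E}}_t[X_i],
  \qquad X_i \downarrow X, \quad X_i \in L_G^1(\Omega),
\]
% with dynamic (tower) consistency preserved on the larger space:
\[
  \hat{\mathbb{E}}_s\bigl[\hat{\mathbb{E}}_t[X]\bigr] = \hat{\mathbb{E}}_s[X],
  \qquad 0 \le s \le t .
\]
```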


2021
pp. 135-149
Author(s): Rabi Bhattacharya, Edward C. Waymire

Author(s): Rianne de Heide, Peter D. Grünwald

Abstract: Recently, optional stopping has been a subject of debate in the Bayesian psychology community. Rouder (Psychonomic Bulletin & Review, 21(2), 301–308, 2014) argues that optional stopping is no problem for Bayesians, and even recommends the use of optional stopping in practice, as do Wagenmakers, Wetzels, Borsboom, van der Maas and Kievit (Perspectives on Psychological Science, 7, 627–633, 2012). This article addresses the question of whether optional stopping is problematic for Bayesian methods, and specifies under which circumstances, and in which sense, it is and is not. By slightly varying and extending Rouder's (2014) experiments, we illustrate that, as soon as the parameters of interest are equipped with default or pragmatic priors – which is the case in most practical applications of Bayes factor hypothesis testing – resilience to optional stopping can break down. We distinguish between three types of default priors, each with its own specific issues under optional stopping, ranging from no problem at all (type 0 priors) to quite severe (type II priors).
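A small simulation (my construction, not de Heide and Grünwald's code) of the benign end of this spectrum: under a point null with a proper prior on the alternative, the Bayes factor is a nonnegative martingale with expectation 1 under H0, so even when monitoring after every single observation, the probability of ever reaching BF10 ≥ 10 is bounded by 1/10:

```python
import numpy as np

rng = np.random.default_rng(2)

def misleading_rate(n_sims=2000, n_max=500, cut=10.0, tau=1.0):
    """Sample under the point null mu = 0 (sigma = 1), monitor the
    Bayes factor BF10 for H1: mu ~ N(0, tau^2) after every observation,
    and stop as soon as BF10 >= cut. Returns the fraction of runs in
    which optional stopping produced this misleading evidence."""
    ns = np.arange(1, n_max + 1)
    se2 = 1.0 / ns                 # variance of the sample mean under H0
    v1 = tau ** 2 + se2            # marginal variance of the mean under H1
    hits = 0
    for _ in range(n_sims):
        m = np.cumsum(rng.normal(0.0, 1.0, size=n_max)) / ns
        # BF10 = N(m; 0, v1) / N(m; 0, se2), written in closed form
        bf = np.sqrt(se2 / v1) * np.exp(m * m / 2 * (1 / se2 - 1 / v1))
        if (bf >= cut).any():
            hits += 1
    return hits / n_sims

rate = misleading_rate()
```

By Ville's inequality this rate stays below 1/cut = 0.1 regardless of the stopping rule; the article's point is that the guarantee is tied to the prior being a genuine, calibrated prior on the parameters, and can fail for the default or pragmatic priors common in practice.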

