sequential hypothesis
Recently Published Documents


TOTAL DOCUMENTS: 143 (last five years: 34)

H-INDEX: 24 (last five years: 2)

SLEEP ◽ 2022 ◽ Author(s): Mélanie Strauss, Lucie Griffon, Pascal Van Beers, Maxime Elbaz, Jason Bouziotis, ...

Abstract: Sleep is known to benefit memory consolidation, but little is known about the contribution of individual sleep stages within the sleep cycle. The sequential hypothesis proposes that memories are first replayed during non-rapid-eye-movement (NREM or N) sleep and then integrated into existing networks during rapid-eye-movement (REM or R) sleep, two successive steps held to be critical for memory consolidation. The hypothesis has lacked direct experimental evidence, however, because N always precedes R sleep under physiological conditions. We tested it in patients with central hypersomnolence disorder, including patients with narcolepsy, who present the unique, anti-physiological peculiarity of frequently falling asleep in R sleep before entering N sleep. Patients performed a visual perceptual learning task before and after daytime naps that were stopped after one sleep cycle beginning in either N or R sleep and followed by the other stage (i.e., an N-R vs. R-N sleep sequence). We compared over-nap changes in performance, reflecting memory consolidation, as a function of the sleep sequence during the nap. Thirty-six patients, who slept a total of 67 naps, were included in the analysis. Results show that sleep spindles are associated with memory consolidation only when N is followed by R sleep, that is, in physiologically ordered N-R naps, thus providing support for the sequential hypothesis in humans. In addition, we found a negative effect of rapid eye movements during R sleep on perceptual consolidation, highlighting the complex role of sleep stages in the balance between remembering and forgetting.


2021 ◽ Author(s): Jue Wang

In multiclass classification, one faces greater uncertainty when the data fall near the decision boundary. To reduce this uncertainty, one can wait and collect more data, but doing so invariably delays the decision. How can one make an accurate classification as quickly as possible? The solution requires a multiclass generalization of Wald's sequential hypothesis testing, but the standard formulation is intractable because of the curse of dimensionality in dynamic programming. In "Optimal Sequential Multiclass Diagnosis," Wang shows that, in a broad class of practical problems, the reachable state space is often restricted to, or near, a set of low-dimensional, time-dependent manifolds. After identifying the key drivers of this sparsity, the author develops a new solution framework that uses a low-dimensional statistic to reconstruct the high-dimensional state. This framework circumvents the curse of dimensionality, allowing efficient computation of optimal or near-optimal policies for quickest classification with large numbers of classes.
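The abstract does not detail Wang's construction, but the classic multiclass analogue of Wald's test keeps a posterior over the K classes and stops once one class is sufficiently likely. Below is a minimal sketch of that baseline, not Wang's manifold-based method; the Gaussian example, the fixed posterior threshold, and all names (`msprt`, `likelihoods`) are illustrative assumptions.

```python
# A minimal sketch of the classic multi-hypothesis sequential test
# (a multiclass analogue of Wald's SPRT), NOT Wang's manifold-based
# method. The observation model and threshold are assumptions.
import math
import numpy as np

def msprt(sample, likelihoods, prior, threshold=0.99):
    """Sample sequentially until one class's posterior exceeds threshold.

    sample      -- callable returning the next observation
    likelihoods -- list of K densities f_k(x), one per class
    prior       -- length-K iterable of prior class probabilities
    """
    posterior = np.asarray(prior, dtype=float)
    n = 0
    while posterior.max() < threshold:
        x = sample()
        n += 1
        # Bayes update: reweight by each class's likelihood, renormalize.
        posterior *= np.array([f(x) for f in likelihoods])
        posterior /= posterior.sum()
    return int(posterior.argmax()), n  # decided class, samples consumed

# Example: three Gaussian classes with unit variance; true mean is 1.0.
rng = np.random.default_rng(0)
gauss = lambda m: lambda x: math.exp(-(x - m) ** 2 / 2) / math.sqrt(2 * math.pi)
cls, n = msprt(lambda: rng.normal(1.0, 1.0),
               [gauss(m) for m in (0.0, 1.0, 2.0)], prior=[1 / 3] * 3)
```

The length-K posterior vector is precisely the state whose dimensionality makes exact dynamic programming intractable for large K; the fixed threshold above is a simple, generally suboptimal stand-in for the optimal stopping boundaries that Wang's framework computes efficiently.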


2021 ◽ Author(s): Ramesh Johari, Pete Koomen, Leonid Pekelis, David Walsh

A/B tests are typically analyzed via frequentist p-values and confidence intervals, but these inferences are wholly unreliable if users endogenously choose sample sizes by continuously monitoring their tests. We define always valid p-values and confidence intervals that let users take advantage of data as quickly as it becomes available, providing valid statistical inference whenever they make their decision. Always valid inference can be interpreted as a natural interface for a sequential hypothesis test, which empowers users to implement a modified test tailored to them. In particular, we show, in an appropriate sense, that the measures we develop trade off sample size and power efficiently, despite a lack of prior knowledge of the user's relative preference between these two goals. We also use always valid p-values to obtain multiple hypothesis testing control in the sequential context. Our methodology has been implemented in a large-scale commercial A/B testing platform to analyze hundreds of thousands of experiments to date.
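The abstract does not spell out how the always valid p-values are built; one standard construction in this line of work is the mixture sequential probability ratio test (mSPRT), where the running p-value is the running minimum of the inverse mixture likelihood ratio. The sketch below assumes i.i.d. normal observations with known variance and a normal mixing distribution over alternatives; it is illustrative, not necessarily the authors' exact recipe.

```python
# A hedged sketch of an always valid p-value built from a mixture
# sequential probability ratio test (mSPRT); the abstract does not
# confirm this is the authors' exact construction. Model assumptions:
# i.i.d. normal data with known variance sigma2, H0: mean == mu0,
# alternatives mixed under N(mu0, tau2). Names are illustrative.
import math

def always_valid_pvalues(xs, mu0, sigma2, tau2):
    """Yield the running always valid p-value after each observation."""
    n, total, p = 0, 0.0, 1.0
    for x in xs:
        n += 1
        total += x
        xbar = total / n
        # Closed-form mSPRT mixture likelihood ratio for normal data:
        # Lambda_n = sqrt(sigma2/v) * exp(n^2 tau2 (xbar - mu0)^2 / (2 sigma2 v)),
        # where v = sigma2 + n * tau2.
        v = sigma2 + n * tau2
        lam = math.sqrt(sigma2 / v) * math.exp(
            n * n * tau2 * (xbar - mu0) ** 2 / (2.0 * sigma2 * v))
        # The always valid p-value is the running minimum of 1 / Lambda_n.
        p = min(p, 1.0 / lam)
        yield p
```

The defining guarantee is that, under H0, the probability that this running p-value ever falls below α is at most α, so continuous monitoring with the rule "stop when p ≤ α" controls the type I error regardless of when the user chooses to look.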


2021 ◽ Author(s): Björn Haddenhorst, Viktor Bengs, Eyke Hüllermeier

Abstract: The efficiency of state-of-the-art algorithms for the dueling bandits problem is essentially due to a clever exploitation of (stochastic) transitivity properties of pairwise comparisons: if one arm is likely to beat a second one, which in turn is likely to beat a third one, then the first is also likely to beat the third. So far, however, there has been no way to test the validity of such assumptions, although this would be a key prerequisite for guaranteeing the meaningfulness of the results produced by an algorithm. In this paper, we investigate the problem of testing different forms of stochastic transitivity in an online manner. We derive lower bounds on the expected sample complexity of any sequential hypothesis testing algorithm for various forms of stochastic transitivity, thereby providing additional motivation to focus on weak stochastic transitivity. To this end, we introduce an algorithmic framework for the dueling bandits problem in which the statistical validity of weak stochastic transitivity can be tested, either actively or passively, based on a multiple binomial hypothesis test. Moreover, by exploiting a connection between weak stochastic transitivity and graph theory, we suggest an enhancement that further improves the efficiency of the testing algorithm. In the active setting, both variants achieve an expected sample complexity that is optimal up to a logarithmic factor.
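As a concrete illustration of the passive variant's ingredients, the sketch below runs a Bonferroni-corrected binomial test of H0: p_ij = 1/2 for each pair of arms and then uses the graph-theoretic fact that weak stochastic transitivity (WST) forces the confidently oriented "beats" relation to be acyclic. The paper's exact test, correction, and thresholds may differ; all names here are illustrative.

```python
# Passive sketch: multiple binomial hypothesis test over arm pairs,
# then a cycle check on the confidently oriented "beats" digraph.
# Not the paper's exact procedure; names are illustrative.
from scipy.stats import binomtest

def wst_violation_detected(wins, alpha=0.05):
    """wins[i][j] = number of duels arm i won against arm j.

    Returns True if the confidently oriented beats-relation contains a
    directed cycle, which is incompatible with WST at level alpha.
    """
    k = len(wins)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    level = alpha / max(len(pairs), 1)  # Bonferroni correction
    beats = {i: set() for i in range(k)}
    for i, j in pairs:
        n = wins[i][j] + wins[j][i]
        # Orient an edge only if H0: p_ij = 1/2 is rejected.
        if n and binomtest(wins[i][j], n, 0.5).pvalue < level:
            if wins[i][j] > wins[j][i]:
                beats[i].add(j)
            else:
                beats[j].add(i)
    return _has_cycle(beats)

def _has_cycle(adj):
    """Depth-first search for a directed cycle."""
    state = {}  # 0 = on current path, 1 = fully explored
    def dfs(u):
        state[u] = 0
        for v in adj[u]:
            if state.get(v) == 0 or (v not in state and dfs(v)):
                return True
        state[u] = 1
        return False
    return any(u not in state and dfs(u) for u in list(adj))
```

A detected cycle among confidently oriented edges certifies, at the chosen significance level, that no total order of the arms is consistent with all pairwise preferences, i.e., that weak stochastic transitivity fails.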

