rejection method
Recently Published Documents

TOTAL DOCUMENTS: 118 (five years: 28)
H-INDEX: 8 (five years: 2)

Author(s): Yun Rao, Yuren Li, Bo Liang, Hongyu Zhang, Yufan Zhang, ...

Author(s): Federico D’Ambrosio, Hans L. Bodlaender, Gerard T. Barkema

Abstract: In this paper, we consider several efficient data structures for the problem of sampling from a dynamically changing discrete probability distribution, where some prior information is known on the distribution of the rates, in particular the maximum and minimum rate, and where the number of possible outcomes N is large. We consider three basic data structures: the Acceptance–Rejection method, the Complete Binary Tree, and the Alias method. These can be used as building blocks in a multi-level data structure, where at each level one of the basic data structures can be used, with the top level selecting a group of events and the bottom level selecting an element from a group. Depending on assumptions on the distribution of the rates of outcomes, different combinations of the basic structures can be used. We prove that for particular data structures the expected time of sampling and update is constant when the rate distribution satisfies certain conditions. We show that for any distribution, by combining a tree structure with the Acceptance–Rejection method, an expected time of sampling and update of $$O\left( \log \log (r_{max}/r_{min})\right)$$ is possible, where $$r_{max}$$ is the maximum rate and $$r_{min}$$ the minimum rate. We also discuss an implementation of a two-level Acceptance–Rejection data structure that allows expected constant time for sampling and amortized constant time for updates, assuming that $$r_{max}$$ and $$r_{min}$$ are known and the number of events is sufficiently large. We also present an experimental verification, highlighting the limits imposed by the constraints of a real-life setting.
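The basic Acceptance–Rejection building block described in the abstract is easy to sketch. The following is a minimal illustration, not the authors' implementation; the class name and the assumption that an upper bound $$r_{max}$$ on every rate is known in advance are ours. Sampling proposes an outcome uniformly at random and accepts it with probability proportional to its rate, which is what makes single-element rate updates O(1):

```python
import random

class AcceptanceRejectionSampler:
    """Sample an index i with probability rates[i] / sum(rates),
    given a known upper bound r_max on every rate. A sketch of the
    basic Acceptance-Rejection building block; not the authors' code."""

    def __init__(self, rates, r_max):
        self.rates = list(rates)
        self.r_max = r_max

    def sample(self):
        n = len(self.rates)
        while True:
            i = random.randrange(n)  # propose an outcome uniformly
            # accept with probability rates[i] / r_max
            if random.random() * self.r_max < self.rates[i]:
                return i

    def update(self, i, new_rate):
        # O(1) update: overwrite the rate (assumed to stay in (0, r_max])
        self.rates[i] = new_rate

# Usage:
# sampler = AcceptanceRejectionSampler([2.0, 0.5, 1.5], r_max=2.0)
# outcome = sampler.sample()
```

The expected number of proposals per sample is $$r_{max}$$ divided by the average rate, so this structure is efficient only when the rates lie within a bounded ratio; grouping events of similar rate at a top level, as the multi-level structures in the paper do, restores that bound within each group.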


2021
Author(s): Benjamin Denham, Russel Pears, M. Asif Naeem

Abstract: Datasets containing class noise present significant challenges to accurate classification, thus requiring classifiers that can refuse to classify noisy instances. We demonstrate the inability of the popular confidence-thresholding rejection method to learn from relationships between input features and not-at-random class noise. To take advantage of these relationships, we propose a novel null-labelling scheme based on iterative re-training with relabelled datasets that uses a classifier to learn to reject instances that are likely to be misclassified. We demonstrate the ability of null-labelling to achieve a significantly better trade-off between classification error and coverage than the confidence-thresholding method. Models generated by the null-labelling scheme have the added advantage of interpretability, in that they are able to identify features correlated with class noise. We also unify prior theories for combining and evaluating sets of rejecting classifiers.
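For context, the confidence-thresholding baseline that the abstract argues against can be stated in a few lines. A minimal sketch follows; the function name and threshold value are illustrative, and it assumes an (n_samples, n_classes) array of predicted class probabilities such as the output of scikit-learn's predict_proba:

```python
import numpy as np

def predict_with_rejection(proba, threshold=0.8):
    """Confidence-thresholding rejection: keep the top predicted class
    only if its probability clears `threshold`; otherwise reject.

    proba: (n_samples, n_classes) array of class probabilities.
    Returns predicted labels and a boolean mask of rejected instances.
    """
    confidence = proba.max(axis=1)   # probability of the most likely class
    labels = proba.argmax(axis=1)    # index of the most likely class
    rejected = confidence < threshold
    return labels, rejected

# Coverage is the fraction of instances the classifier commits to:
# labels, rejected = predict_with_rejection(model.predict_proba(X), 0.9)
# coverage = 1.0 - rejected.mean()
```

Because the threshold looks only at the model's own confidence, it cannot exploit feature patterns that predict noisy labels, which is precisely the gap the null-labelling scheme targets.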


2021, Vol. 21 (2), pp. 1686-1693
Author(s): Zongqi Ning, Yao Mao, Yongmei Huang, Zhou Xi, Chao Zhang
