Probabilistic Thresholds
Recently Published Documents


TOTAL DOCUMENTS: 8 (last five years: 3)
H-INDEX: 3 (last five years: 1)

2021 · Vol 70 · pp. 1373-1411
Author(s): Curtis Northcutt, Lu Jiang, Isaac Chuang

Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 missile images are mislabeled as their parent class projectile), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.
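The counting step the abstract describes can be illustrated directly: a per-class probabilistic threshold is the mean self-confidence of examples bearing that given label, and the confident joint counts how many examples with given label $i$ confidently belong to class $j$. Below is a minimal NumPy sketch of that idea, not the authors' implementation; the array names, the argmax tie-breaking, and the assumption that every class appears among the given labels are ours.

```python
import numpy as np

def confident_joint(labels, pred_probs):
    """Count the joint between given (noisy) labels and latent true labels
    using per-class probabilistic thresholds.

    labels:     (n,) int array of given labels.
    pred_probs: (n, k) out-of-sample predicted class probabilities.
    """
    n, k = pred_probs.shape
    # Threshold t_j: mean predicted probability of class j among examples
    # whose given label is j (assumes each class occurs at least once).
    thresholds = np.array(
        [pred_probs[labels == j, j].mean() for j in range(k)]
    )
    C = np.zeros((k, k), dtype=int)
    for i in range(n):
        # Classes whose predicted probability clears their own threshold.
        above = np.flatnonzero(pred_probs[i] >= thresholds)
        if above.size == 0:
            continue  # example is counted toward no confident bin
        # Break ties in favour of the most probable qualifying class.
        j = above[np.argmax(pred_probs[i, above])]
        C[labels[i], j] += 1
    return C  # off-diagonal mass flags likely label errors
```

Off-diagonal entries $C[i][j]$, $i \neq j$, count examples given label $i$ but confidently belonging to class $j$; ranking those examples by predicted probability recovers the "ranking examples to train with confidence" step. The open-source cleanlab release mentioned in the abstract packages this pipeline (its `find_label_issues` function consumes the same `labels` and `pred_probs` arrays).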


2019 · Vol 574 · pp. 276-287
Author(s): Binru Zhao, Qiang Dai, Dawei Han, Huichao Dai, Jingqiao Mao, ...

2018 · Vol 29 (4) · pp. 748-757
Author(s): Alexandre Brandwajn, Thomas Begin, Hind Castel-Taleb, Tulin Atmaca

2014 · Vol 90 (3) · pp. 363-375
Author(s): Hossein Soltani, Manouchehr Zaker

Abstract: Let $G$ be a graph and $\tau$ an assignment of nonnegative thresholds to the vertices of $G$. A subset of vertices, $D$, is an irreversible dynamic monopoly of $(G, \tau)$ if the vertices of $G$ can be partitioned into subsets $D_0, D_1, \ldots, D_k$ such that $D_0 = D$ and, for all $i$ with $0 \leq i \leq k-1$, each vertex $v$ in $D_{i+1}$ has at least $\tau(v)$ neighbours in $D_0 \cup D_1 \cup \cdots \cup D_i$. Dynamic monopolies model the spread of influence or the propagation of opinion in social networks, where the graph $G$ represents the underlying network. The smallest cardinality of any dynamic monopoly of $(G, \tau)$ is denoted by $\mathrm{dyn}_{\tau}(G)$. In this paper we assume that the threshold of each vertex $v$ of the network is a random variable $X_v$ with $0 \leq X_v \leq \deg_G(v) + 1$. We obtain sharp bounds on the expectation and the concentration of $\mathrm{dyn}_{\tau}(G)$ around its mean value. We also obtain lower bounds for the size of dynamic monopolies in terms of the order of the graph and the expectation of the thresholds.
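The definition above is constructive enough to check directly: starting from $D$, repeatedly activate any vertex whose number of already-active neighbours meets its threshold, and test whether every vertex ends up active. A minimal Python sketch under assumed inputs (an adjacency dict for $G$ and a threshold dict for $\tau$; both names are ours, not the paper's notation):

```python
def is_dynamic_monopoly(adj, tau, seed):
    """Check whether `seed` is an irreversible dynamic monopoly of (G, tau).

    adj:  dict mapping each vertex to the set of its neighbours.
    tau:  dict mapping each vertex to its nonnegative threshold.
    seed: iterable of initially activated vertices (the set D = D_0).
    """
    active = set(seed)
    changed = True
    while changed:  # each sweep activates the next wave of vertices
        changed = False
        for v in adj:
            if v in active:
                continue
            # v becomes active once >= tau(v) of its neighbours are active;
            # activation is irreversible, so active vertices never revert.
            if sum(1 for u in adj[v] if u in active) >= tau[v]:
                active.add(v)
                changed = True
    return active == set(adj)

# Example: a path a-b-c with unit thresholds; {a} alone activates everything.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
tau = {"a": 1, "b": 1, "c": 1}
assert is_dynamic_monopoly(adj, tau, {"a"})
```

Because activation is monotone, greedily activating within a sweep reaches the same final set as the layered partition $D_0, D_1, \ldots, D_k$ in the definition, so the membership test is unaffected. A vertex with $\tau(v) = \deg_G(v) + 1$ can never be activated by its neighbours, which is why thresholds are allowed to exceed the degree.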


2004 · Vol 23 (1) · pp. 109-119
Author(s): Demetrios Vakratsas, Fred M. Feinberg, Frank M. Bass, Gurumurthy Kalyanaram
