A Version Space Perspective on Differentially Private Pool-Based Active Learning

Author(s):  
Shantanu Rane ◽  
Alejandro E. Brito
2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Tuozhong Yao ◽  
Wenfeng Wang ◽  
Yuhong Gu

Multiview active learning (MAL) is a technique that can achieve a larger decrease in the size of the version space than traditional active learning and has great potential for large-scale data analysis. In this paper, we present a new deep multiview active learning (DMAL) framework, which is the first to combine multiview active learning and deep learning for annotation-effort reduction. Our approach advances existing active learning methods in two respects. First, we incorporate two different deep convolutional neural networks into active learning, using complementary multiview information to improve feature learning. Second, through the properly designed framework, the feature representation and the classifier can be updated simultaneously with progressively annotated informative samples. Experiments on two challenging image datasets demonstrate that the proposed DMAL algorithm achieves better results than several state-of-the-art active learning algorithms.
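As a minimal sketch of the multiview query idea described above (not the authors' DMAL implementation): two independent "views" each predict class probabilities for the unlabeled pool, and the samples on which the views disagree most are sent for annotation, since such contention points shrink the version space fastest. The function name and the L1 disagreement score here are assumptions for illustration only, standing in for the framework's two CNN-based views.

```python
# Hypothetical sketch: rank unlabeled samples by cross-view disagreement
# and query the top few. probs_view1 / probs_view2 are per-sample class
# probability vectors produced by two independent views of the data.

def multiview_query(probs_view1, probs_view2, n_queries):
    """Return indices of the n_queries samples whose two views
    disagree most (L1 distance between probability vectors)."""
    disagreement = [
        sum(abs(p1 - p2) for p1, p2 in zip(row1, row2))
        for row1, row2 in zip(probs_view1, probs_view2)
    ]
    ranked = sorted(range(len(disagreement)),
                    key=lambda i: disagreement[i], reverse=True)
    return ranked[:n_queries]

# Toy example: three unlabeled samples, binary classification.
p1 = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]]
p2 = [[0.1, 0.9], [0.5, 0.5], [0.3, 0.7]]
print(multiview_query(p1, p2, 2))  # → [0, 2]
```

In practice the selected samples would be labeled by an annotator and added to the training set, after which both views (and the classifier) are retrained before the next query round.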


Author(s):  
JUN LONG ◽  
JIANPING YIN ◽  
EN ZHU ◽  
WENTAO ZHAO

Active learning is an important approach to reducing data-collection costs in inductive learning problems by sampling only the most informative instances for labeling. We focus here on the sampling criterion: how to select these most informative instances. This paper makes three contributions. First, in contrast to the leading strategy of halving the volume of the version space, we present a sampling strategy that reduces the volume of the version space by more than half, under the assumption that the target function is drawn from a nonuniform distribution over the version space. Second, we propose sampling the instances that are most likely to be misclassified. Third, we develop a sampling method named CBMPMS (Committee-Based Most Possible Misclassification Sampling), which samples the instances that the current classifier is most likely to misclassify. Compared with existing active learning methods, CBMPMS requires fewer sampling rounds to reach the same classifier accuracy. Experiments show that the proposed method outperforms traditional sampling methods on most of the selected datasets.
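The committee-based selection step described above can be sketched as follows. This is a hedged illustration, not the paper's CBMPMS code: the misclassification probability of each unlabeled instance is estimated here as the fraction of committee members whose predicted label disagrees with the current classifier's prediction, and the instance with the largest estimate is queried. The function name and this particular disagreement estimate are assumptions for the sketch.

```python
# Hypothetical sketch of committee-based most-possible-misclassification
# sampling: query the instance the current classifier is most likely to
# get wrong, as judged by disagreement with a committee of classifiers.

def cbmpms_query(current_preds, committee_preds):
    """current_preds: labels from the current classifier, one per instance.
    committee_preds: one label list per committee member (members x samples).
    Returns the index of the instance with the highest estimated
    misclassification probability."""
    n_members = len(committee_preds)
    miscls_prob = []
    for i, pred in enumerate(current_preds):
        disagree = sum(1 for member in committee_preds if member[i] != pred)
        miscls_prob.append(disagree / n_members)
    return max(range(len(current_preds)), key=lambda i: miscls_prob[i])

# Toy example: 4 unlabeled instances, committee of 3 classifiers.
current = [0, 1, 1, 0]
committee = [[0, 1, 0, 0],
             [0, 0, 0, 0],
             [1, 1, 0, 1]]
print(cbmpms_query(current, committee))  # → 2 (all members disagree there)
```

Labeling the queried instance and retraining both the classifier and the committee then repeats until the labeling budget is exhausted.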


2017 ◽  
Vol 85 (8) ◽  
pp. 814-825 ◽  
Author(s):  
Ajeng J. Puspitasari ◽  
Jonathan W. Kanter ◽  
Andrew M. Busch ◽  
Rachel Leonard ◽  
Shira Dunsiger ◽  
...  

2008 ◽  
Author(s):  
Lisa Wagner ◽  
Chandra M. Mehrotra
