Generalizing Labeled and Unlabeled Sample Compression to Multi-label Concept Classes

Author(s): Rahim Samei, Boting Yang, Sandra Zilles

2019, Vol 11 (16), pp. 1933
Author(s): Yangyang Li, Ruoting Xing, Licheng Jiao, Yanqiao Chen, Yingte Chai, ...

Polarimetric synthetic aperture radar (PolSAR) image classification is a relatively recent technology with great practical value in remote sensing. However, because data collection and labeling are time-consuming and labor-intensive, few labeled datasets are available. Furthermore, most state-of-the-art classification methods suffer heavily from speckle noise. To address these problems, this paper proposes a novel semi-supervised algorithm based on self-training and superpixels. First, the Pauli-RGB image is over-segmented into superpixels to obtain a large number of homogeneous regions. Then, features that mitigate the effect of speckle noise are obtained by spatially weighting pixels within the same superpixel. Next, the training set is expanded iteratively using an unlabeled-sample selection strategy that exploits the spatial relations provided by the superpixels. Finally, a stacked sparse auto-encoder is self-trained on the expanded training set to produce the classification results. Experiments on two typical PolSAR datasets verify the method's ability to suppress speckle noise and show excellent classification performance with limited labeled data.
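A minimal sketch of this pipeline is given below. The function names, the SLIC segmentation, the plain within-superpixel averaging, the sklearn MLP stand-in for the stacked sparse auto-encoder, and the confidence threshold are all illustrative assumptions, not the authors' exact implementation:

```python
# Hedged sketch of the self-training pipeline described above. Assumptions
# (not from the paper): SLIC for the over-segmentation, a plain within-
# superpixel mean as the spatial weighting, and an sklearn MLP as a
# stand-in for the stacked sparse auto-encoder.
import numpy as np
from skimage.segmentation import slic
from sklearn.neural_network import MLPClassifier


def superpixel_features(pauli_rgb, feats, n_segments=2000):
    """Average per-pixel features over each superpixel to suppress speckle."""
    segments = slic(pauli_rgb, n_segments=n_segments, compactness=10)
    smoothed = np.zeros_like(feats, dtype=float)
    for sp in np.unique(segments):
        mask = segments == sp
        smoothed[mask] = feats[mask].mean(axis=0)  # within-superpixel weighting
    return smoothed.reshape(-1, feats.shape[-1]), segments


def self_train(X, labeled_idx, labels, n_rounds=5, conf_thresh=0.95):
    """Iteratively expand the training set with confidently predicted samples."""
    train_idx, train_y = list(labeled_idx), list(labels)
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)
    for _ in range(n_rounds):
        clf.fit(X[train_idx], train_y)
        proba = clf.predict_proba(X)
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        taken = set(train_idx)
        # The paper additionally filters candidates using superpixel spatial
        # relations; here only the confidence threshold is applied.
        new_idx = [i for i in np.where(conf > conf_thresh)[0] if i not in taken]
        if not new_idx:
            break
        train_idx += new_idx
        train_y += pred[new_idx].tolist()
    return clf
```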


Computers, 2020, Vol 9 (4), pp. 79
Author(s): Graham Spinks, Marie-Francine Moens

This paper proposes a novel technique for representing templates and instances of concept classes. A template representation is a generic representation that captures the characteristics of an entire class. The proposed technique uses end-to-end deep learning to learn structured, composable representations from input images and discrete labels. The obtained representations are based on distance estimates between the distributions induced by the class labels and those induced by contextual information, which is modeled as environments. We prove that the representations have a clear structure that allows them to be decomposed into factors representing classes and environments. We evaluate the technique on classification and retrieval tasks involving different modalities (visual and language data). In various experiments, we show how the representations can be compressed and how different hyperparameters affect performance.
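As a toy illustration of the class/environment factorization (not the authors' architecture), per-class mean embeddings can stand in for the learned templates and Euclidean distance for the distribution-level distance estimates described above:

```python
# Toy numpy illustration of a decomposable representation (not the authors'
# architecture): each instance embedding splits into a class factor shared by
# the whole class (the "template") and an environment factor carrying
# instance- and context-specific information.
import numpy as np


def decompose(embeddings, labels):
    """Split instance embeddings into per-class templates and environment residuals."""
    templates = {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}
    residuals = np.stack([e - templates[c] for e, c in zip(embeddings, labels)])
    return templates, residuals


def classify(embedding, templates):
    """Assign the class whose template is closest under a simple distance estimate."""
    return min(templates, key=lambda c: np.linalg.norm(embedding - templates[c]))
```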


1995, Vol 21 (3), pp. 269-304
Author(s): Sally Floyd, Manfred Warmuth

2006, Vol 18 (03), pp. 124-127
Author(s): Hsiao-Hsuan Chou, Yu-Chien Shiau, Te-Son Kuo

We previously proposed a novel, fast electrocardiogram (ECG) signal compression algorithm based on non-uniform sampling in the time domain [1]. It meets the real-time requirements of clinical applications, and its compression performance is stable and uniform even for abnormal ECG signals. A criterion called the sum square difference (SSD) is defined as the error test. The algorithm, which uses the SSD to compute the error tolerance, is applied to records from the MIT-BIH database (11-bit resolution, 360 Hz sampling rate). It belongs to the class of threshold-limited algorithms, which [1] discusses only briefly. In this paper we compare SSD with the Fan, scan-along polygonal approximation (SAPA), maximum enclosed area (MEA), and optimization (OPT) algorithms using two measures, the sample compression ratio (SCR) and the percent root mean squared difference (PRD) with a proper mean offset, which [1] does not adopt. The results show that SSD outperforms these algorithms at the same computational complexity, O(n). Moreover, a comparison with the best but most time-consuming coder, OPT (O(n³)), shows how much room remains for improvement.
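For illustration, a hedged sketch of a threshold-limited compressor of this general kind, together with the SCR and mean-offset PRD measures, is given below; the zero-order-hold reconstruction and the exact SSD accumulation rule are simplifying assumptions, and the precise algorithm is the one defined in [1]:

```python
# Hedged sketch of a threshold-limited, non-uniformly sampled compressor and
# of the two measures (SCR, PRD with mean offset). The zero-order-hold
# reconstruction and the SSD accumulation rule below are simplifying
# assumptions for illustration; the exact algorithm is defined in [1].
import numpy as np


def ssd_compress(x, tol):
    """Retain a new sample whenever the accumulated squared difference
    to the last retained sample exceeds the tolerance tol."""
    kept, ssd = [0], 0.0
    for i in range(1, len(x)):
        ssd += (x[i] - x[kept[-1]]) ** 2
        if ssd > tol:
            kept.append(i)
            ssd = 0.0
    return np.array(kept)


def reconstruct(x, kept):
    """Zero-order-hold reconstruction from the retained samples."""
    xr = np.empty(len(x), dtype=float)
    bounds = np.append(kept, len(x)).astype(int)
    for a, b in zip(bounds[:-1], bounds[1:]):
        xr[a:b] = x[a]
    return xr


def scr(x, kept):
    """Sample compression ratio: original samples / retained samples."""
    return len(x) / len(kept)


def prd(x, xr):
    """Percent root mean squared difference with the mean offset removed."""
    x, xr = np.asarray(x, dtype=float), np.asarray(xr, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - xr) ** 2) / np.sum((x - x.mean()) ** 2))
```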


1995, Vol 18 (2-3), pp. 131-148
Author(s): Paul W. Goldberg, Mark R. Jerrum
