A rotation based regularization method for semi-supervised learning

Author(s): Prashant Shukla, Abhishek, Shekhar Verma, Manish Kumar
2019, Vol 10 (1), pp. 64

Author(s): Yi Lin, Honggang Zhang

In the era of Big Data, multi-instance learning, as a weakly supervised learning framework, has found wide application because it reduces the cost of the data-labeling process. Under this weakly supervised setting, however, learning effective instance representations/embeddings is challenging. To address this issue, we propose an instance-embedding regularizer that can boost the performance of both instance- and bag-embedding learning in a unified fashion. Specifically, the crux of the instance-embedding regularizer is to maximize the correlation between instance-embedding similarities and the underlying instance-label similarities. The embedding-learning framework was implemented using a neural network and optimized in an end-to-end manner using stochastic gradient descent. In experiments on various applications, the results show that the proposed instance-embedding-regularization method is highly effective, achieving state-of-the-art performance.
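A minimal sketch of such a regularizer, assuming a PyTorch setup; the function name, the cosine-similarity measure, and the Pearson-correlation objective are illustrative choices rather than the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def instance_embedding_regularizer(embeddings, instance_labels):
    """Encourage pairwise embedding similarities to correlate with
    pairwise label similarities.

    embeddings:      (n, d) instance embeddings from the encoder
    instance_labels: (n,)   instance-level (pseudo-)labels
    """
    # Pairwise cosine similarity between instance embeddings.
    z = F.normalize(embeddings, dim=1)
    emb_sim = z @ z.t()                                   # (n, n)

    # Pairwise label similarity: 1 if two instances share a label, else 0.
    lab_sim = (instance_labels[:, None] == instance_labels[None, :]).float()

    # Pearson correlation over the off-diagonal entries; negated so that
    # minimizing this loss maximizes the correlation.
    mask = ~torch.eye(len(z), dtype=torch.bool)
    a = emb_sim[mask] - emb_sim[mask].mean()
    b = lab_sim[mask] - lab_sim[mask].mean()
    return -(a * b).sum() / (a.norm() * b.norm() + 1e-8)
```

In training, this term would be added to the bag-level loss with a weighting coefficient and optimized end-to-end with SGD, as the abstract describes.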


Author(s): Yalong Song, Hong Li, Jianzhong Wang, Kit Ian Kou

In this paper, we present a novel multiple 1D-embedding based clustering (M1DEBC) scheme for hyperspectral image (HSI) classification. This clustering scheme is an iterative algorithm built on 1D-embedding based regularization, which was first proposed by J. Wang [Semi-supervised learning using ensembles of multiple 1D-embedding-based label boosting, Int. J. Wavelets Multiresolut. Inf. Process. 14(2) (2016) 33 pp.; Semi-supervised learning using multiple one-dimensional embedding-based adaptive interpolation, Int. J. Wavelets Multiresolut. Inf. Process. 14(2) (2016) 11 pp.]. At each iteration, the algorithm performs three steps. First, we construct a 1D multi-embedding that contains m different versions of a 1D embedding. Each version is realized by an isometric mapping that maps all the pixels in an HSI onto a line such that the sum of the distances between adjacent pixels in the original space is minimized. Second, for each 1D embedding, we use the regularization method to obtain a pre-classifier that gives each unlabeled sample a preliminary label. If all m versions of the regularization vote for the same preliminary label, we call the sample a feasible confident sample. The feasible confident samples and their corresponding labels constitute the auxiliary set, from which we randomly select a subset to form the newborn labeled set. Finally, we add the newborn labeled set to the labeled sample set, so the labeled set is gradually enlarged over the iterations. The iteration terminates when the updated labeled set reaches a certain size. Our experimental results on real hyperspectral datasets confirm the effectiveness of the proposed scheme.
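A schematic NumPy sketch of one iteration of this scheme. Here `build_1d_embedding` and `pre_classify` are deliberately simplified stand-ins (a random-projection ordering and a nearest-labeled-neighbor rule on the line) for the paper's isometric mapping and regularization-based pre-classifier:

```python
import numpy as np

def build_1d_embedding(X, rng):
    # Stand-in for the isometric mapping: order samples by their
    # projection onto a random direction (a crude proxy for a path that
    # keeps original-space neighbors adjacent on the line).
    direction = rng.normal(size=X.shape[1])
    return np.argsort(X @ direction)              # sample index per line position

def pre_classify(order, labeled_idx, labels):
    # Stand-in pre-classifier: each sample takes the label of the
    # nearest labeled sample along the 1D ordering.
    pos = np.empty(len(order), dtype=int)
    pos[order] = np.arange(len(order))            # line position of each sample
    nearest = np.abs(pos[:, None] - pos[labeled_idx][None, :]).argmin(axis=1)
    return labels[labeled_idx][nearest]

def m1debc_iteration(X, labeled_idx, labels, m=5, batch=20, rng=None):
    # labels has length len(X); entries outside labeled_idx may be arbitrary.
    rng = rng or np.random.default_rng(0)
    votes = np.stack([pre_classify(build_1d_embedding(X, rng), labeled_idx, labels)
                      for _ in range(m)])         # (m, n) preliminary labels
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled_idx)
    agree = (votes == votes[0]).all(axis=0)       # all m embeddings vote alike
    confident = unlabeled[agree[unlabeled]]       # the auxiliary set
    if len(confident) == 0:
        return labeled_idx, labels
    newborn = rng.choice(confident, size=min(batch, len(confident)), replace=False)
    labels = labels.copy()
    labels[newborn] = votes[0, newborn]           # newborn labeled set
    return np.union1d(labeled_idx, newborn), labels
```

The iteration would be repeated, feeding the enlarged labeled set back in, until the labeled set reaches the target size.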


2020, Vol 34 (04), pp. 6380-6387
Author(s): Hanwei Wu, Markus Flierl

Autoencoders and their variants provide unsupervised models for learning low-dimensional representations for downstream tasks. Without proper regularization, autoencoder models are susceptible to overfitting and to the so-called posterior-collapse phenomenon. In this paper, we introduce a quantization-based regularizer in the bottleneck stage of autoencoder models to learn meaningful latent representations. We combine the perspectives of Vector Quantized-Variational AutoEncoders (VQ-VAE) and classical denoising regularization of neural networks. We interpret quantizers as regularizers that constrain latent representations while fostering a similarity-preserving mapping at the encoder. Before quantization, we impose noise on the latent codes and use a Bayesian estimator to optimize the quantizer-based representation. The introduced bottleneck Bayesian estimator outputs the posterior mean of the centroids to the decoder and thus performs a soft quantization of the noisy latent codes. We show that our proposed regularization method yields improved latent representations for both supervised learning and clustering downstream tasks, compared to autoencoders with other bottleneck structures.
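A minimal sketch of the soft-quantization bottleneck, assuming isotropic Gaussian noise and a uniform prior over codebook centroids; under these assumptions the posterior over centroids reduces to a softmax over negative scaled squared distances (the names and the noise model are assumptions, not the paper's exact estimator):

```python
import torch
import torch.nn.functional as F

def soft_quantize(z, centroids, noise_std=0.1):
    """Bayesian soft quantization of latent codes.

    z:         (batch, d) encoder outputs (latent codes)
    centroids: (K, d)     codebook vectors
    """
    # Impose noise on the latent codes before quantization.
    z_noisy = z + noise_std * torch.randn_like(z)

    # Posterior responsibility of each centroid for each noisy code
    # (softmax over negative squared distances, scaled by the noise variance).
    d2 = torch.cdist(z_noisy, centroids).pow(2)           # (batch, K)
    posterior = F.softmax(-d2 / (2 * noise_std ** 2), dim=1)

    # Posterior mean of the centroids: a soft, differentiable quantization
    # that is passed on to the decoder.
    return posterior @ centroids
```

Because the posterior mean is differentiable in both the codes and the centroids, this bottleneck needs no straight-through gradient estimator, in contrast to the hard assignment in a standard VQ-VAE.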


2018, Vol 2018 (15), pp. 132-1-132-3
Author(s): Shijie Zhang, Zhengtian Song, G. M. Dilshan P. Godaliyadda, Dong Hye Ye, Atanu Sengupta, ...
