An unsupervised learning method with convolutional auto-encoder for vessel trajectory similarity computation

2021, Vol. 225, pp. 108803
Author(s): Maohan Liang, Ryan Wen Liu, Shichen Li, Zhe Xiao, Xin Liu, ...
2011, Vol. 16 (1), pp. 31-38
Author(s): Sang-Moo Park, Seong-Jin Kim, Dong-Hyung Lee, Soo-Dong Lee, Cheol-Young Ock

Author(s): Jan Žižka, František Dařena

The automated categorization of unstructured textual documents according to their semantic content plays an important role, particularly given the ever-growing volume of such data originating from the Internet. Given a sufficient number of labeled examples, a suitable supervised machine-learning classifier can be trained. When no labels are available, an unsupervised learning method can be applied; however, the missing label information often leads to worse classification results. This chapter demonstrates a semi-supervised learning method in which a small set of manually labeled examples improves the categorization process compared with clustering, yielding results comparable with the supervised learning output. For illustration, a real-world dataset collected from the Internet is used as the input to the supervised, unsupervised, and semi-supervised learning. Results are shown for different numbers of starting labeled samples used as "seeds" to automatically label the remaining volume of unlabeled items.
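As a rough illustration of the seed-based setup described above, the following Python sketch runs scikit-learn's SelfTrainingClassifier on a tiny, hypothetical document collection in which only two items carry manual labels. This is not the chapter's actual pipeline; the documents, labels, and confidence threshold are placeholder assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Hypothetical collection: two manually labeled "seed" documents, the rest unlabeled.
docs = [
    "great product, fast delivery",      # seed, category 1
    "broken on arrival, very poor",      # seed, category 0
    "works as described, recommended",   # unlabeled
    "terrible support, waste of money",  # unlabeled
]
labels = [1, 0, -1, -1]                  # -1 marks unlabeled documents

X = TfidfVectorizer().fit_transform(docs)

# Self-training: fit the base classifier on the seeds, then iteratively
# pseudo-label high-confidence unlabeled documents and retrain on them.
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
model.fit(X, labels)
print(model.predict(X))                  # predicted categories for all documents

In a realistic run, the seed set would contain tens or hundreds of documents per category, and the remaining unlabeled pool would be much larger; the principle of propagating labels from a small seed set is the same.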


Author(s): Deepak Babu Sam, Neeraj N Sajjan, Himanshu Maurya, R. Venkatesh Babu

We present an unsupervised learning method for dense crowd count estimation. Owing to the large variability in the appearance of people and the extreme overlap in crowds, counting people is a difficult task even for humans. Consequently, creating large-scale annotated crowd data is expensive, which directly limits the performance of existing CNN-based counting models trained on small datasets. Motivated by these challenges, we develop a Grid Winner-Take-All (GWTA) autoencoder to learn several layers of useful filters from unlabeled crowd images. Our GWTA approach divides a convolution layer spatially into a grid of cells. Within each cell, only the maximally activated neuron is allowed to update the filter. Almost 99.9% of the parameters of the proposed model are trained without any labeled data, while the remaining 0.1% are tuned with supervision. The model achieves superior results compared to other unsupervised methods and stays reasonably close to the accuracy of the supervised baseline. Furthermore, we present comparisons and analyses regarding the quality of the learned features across various models.
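The grid-wise winner-take-all operation lends itself to a short sketch. The PyTorch snippet below is not the authors' released code: it simply masks a feature map so that only the per-cell spatial maximum survives, which is one way to realize the idea that only the winning neuron in each cell updates the filters; the cell size, layer shapes, and dummy input are illustrative assumptions.

import torch
import torch.nn.functional as F

def gwta(features: torch.Tensor, cell: int = 8) -> torch.Tensor:
    """Keep only the maximally activated value in each cell x cell spatial block."""
    # Per-cell maximum, broadcast back to the full spatial resolution.
    cell_max = F.max_pool2d(features, kernel_size=cell, stride=cell)
    cell_max = F.interpolate(cell_max, scale_factor=cell, mode="nearest")
    winner_mask = (features == cell_max).float()
    # Non-winning activations are zeroed, so they contribute no gradient
    # to the convolution filters during reconstruction training.
    return features * winner_mask

# Illustrative usage on a dummy image; sizes are assumptions, not the paper's setup.
conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(1, 3, 64, 64)
sparse_features = gwta(F.relu(conv(x)), cell=8)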

