A deep survey on supervised learning based human detection and activity classification methods

Author(s):  
Muhammad Attique Khan ◽  
Mamta Mittal ◽  
Lalit Mohan Goyal ◽  
Sudipta Roy


Algorithms ◽  
2018 ◽  
Vol 11 (9) ◽  
pp. 139 ◽  
Author(s):  
Ioannis Livieris ◽  
Andreas Kanavos ◽  
Vassilis Tampakas ◽  
Panagiotis Pintelas

Semi-supervised learning algorithms have become a topic of significant research as an alternative to traditional classification methods, which perform well on labeled data but cannot exploit large amounts of unlabeled data. In this work, we propose a new semi-supervised learning algorithm that dynamically selects the most promising learner for a classification problem from a pool of classifiers, following a self-training philosophy. Our experimental results illustrate that the proposed algorithm outperforms its component semi-supervised learning algorithms in terms of accuracy, leading to more efficient, stable and robust predictive models.
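The abstract's core idea, ranking a pool of learners and self-training the most promising one, can be sketched with scikit-learn. This is an illustrative reading of the abstract, not the authors' implementation; the pool members and the 70% label-hiding rate are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
rng = np.random.RandomState(0)
unlabeled = rng.rand(len(y)) < 0.7      # hide 70% of the labels
y_semi = y.copy()
y_semi[unlabeled] = -1                   # -1 marks "unlabeled" for sklearn

# Hypothetical pool of base learners (the paper's actual pool may differ).
pool = {
    "logreg": LogisticRegression(max_iter=1000),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

# Rank learners by cross-validated accuracy on the labeled subset only,
# then self-train the winner on labeled + unlabeled data.
X_lab, y_lab = X[~unlabeled], y[~unlabeled]
best_name = max(pool, key=lambda n: cross_val_score(pool[n], X_lab, y_lab, cv=3).mean())
model = SelfTrainingClassifier(pool[best_name]).fit(X, y_semi)
print(best_name, model.score(X, y))
```

A fuller version would repeat the selection step as pseudo-labels accumulate, which is closer to a "dynamic" selection than this one-shot ranking.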


2017 ◽  
Vol 26 (02) ◽  
pp. 1750001 ◽  
Author(s):  
Stamatis Karlos ◽  
Nikos Fazakis ◽  
Sotiris Kotsiantis ◽  
Kyriakos Sgarbas

The defining characteristic of semi-supervised learning methods is that they combine the available unlabeled data with a much smaller set of labeled examples to increase learning accuracy over the default supervised procedure, which uses only the labeled data during the training phase. In this work, we have implemented a hybrid self-trained system that combines a Support Vector Machine, a Decision Tree, a lazy learner and a Bayesian algorithm using a Stacking variant methodology. We performed an in-depth comparison with other well-known semi-supervised classification methods on standard benchmark datasets and found that the presented technique achieved better accuracy in most cases.
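A minimal sketch of the described design, assuming kNN as the "lazy learner" and naive Bayes as the "Bayesian algorithm", with one hand-rolled self-training round over a stacked ensemble (the paper's stacking variant and confidence rule are not specified, so those details are assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
rng = np.random.RandomState(1)
mask = rng.rand(len(y)) < 0.7            # 70% of labels hidden
X_lab, y_lab = X[~mask], y[~mask]
X_unl = X[mask]

# Stacked ensemble of the four learner families named in the abstract.
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=1)),
                ("tree", DecisionTreeClassifier(random_state=1)),
                ("knn", KNeighborsClassifier()),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(max_iter=1000),
)

# One self-training round: pseudo-label the unlabeled points the ensemble
# is most confident about, then refit on the enlarged training set.
stack.fit(X_lab, y_lab)
proba = stack.predict_proba(X_unl)
confident = proba.max(axis=1) > 0.9      # assumed confidence threshold
X_aug = np.vstack([X_lab, X_unl[confident]])
y_aug = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
stack.fit(X_aug, y_aug)
print("accuracy on all data:", stack.score(X, y))
```

In practice the pseudo-labeling round is iterated until no new points clear the confidence threshold.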


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Hina Anwar ◽  
Usman Qamar ◽  
Abdul Wahab Muzaffar Qureshi

Supervised learning is the data-mining process of deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Several basic issues affect the accuracy of a classifier in a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. These problems limit classifier accuracy and are the reason there is no globally optimal classification method, nor any generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending on the algorithm's complexity.
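The abstract does not detail GMC's construction, but the general principle it relies on, that combining heterogeneous classifiers can beat each member, can be demonstrated with a plain soft-voting ensemble (this is a generic illustration, not the authors' GMC model):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Three deliberately different learner families, so their errors decorrelate.
members = [("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
           ("tree", DecisionTreeClassifier(random_state=0)),
           ("nb", GaussianNB())]
ensemble = VotingClassifier(members, voting="soft")

# Compare each member's cross-validated accuracy against the ensemble's.
for name, clf in members + [("ensemble", ensemble)]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```

The gain from voting varies with how correlated the members' mistakes are, which echoes the abstract's observation that the improvement depends on the underlying algorithm.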


2017 ◽  
Vol 25 (5) ◽  
pp. 1078-1089 ◽  
Author(s):  
Juan Antonio Morente-Molinera ◽  
Jozsef Mezei ◽  
Christer Carlsson ◽  
Enrique Herrera-Viedma

2021 ◽  
pp. 1-13
Author(s):  
Zhi Yang ◽  
Haitao Gan ◽  
Xuan Li ◽  
Cong Wu

Since label noise can hurt the performance of supervised learning (SL), training a good classifier in the presence of label noise is an emerging and meaningful topic in the machine learning field. Although many related methods have been proposed and have achieved promising performance, they have the following drawbacks: (1) removing mislabeled instances wastes data and can even degrade performance; and (2) the negative effect of extremely mislabeled instances cannot be completely eliminated. To address these problems, we propose a novel method based on the capped ℓ1 norm and a graph-based regularizer to deal with label noise. The proposed algorithm uses the capped ℓ1 norm in place of the ℓ1 norm: it inherits the ℓ1 norm's robustness to label noise while adaptively identifying extremely mislabeled instances and eliminating their negative influence. Additionally, the proposed algorithm makes full use of the mislabeled instances under the graph-based framework, avoiding the waste of collected instance information. The solution is obtained through an iterative optimization approach. We report experimental results on several UCI datasets covering both binary and multi-class problems; the results verify the effectiveness of the proposed algorithm in comparison with existing state-of-the-art classification methods.
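The key ingredient, the capped ℓ1 penalty, is simple to state in isolation: it grows like |r| for small residuals but flattens at a cap ε, so extremely mislabeled points stop pulling on the fit. A minimal sketch of that function alone (the full graph-regularized objective and its iterative solver are beyond this snippet):

```python
import numpy as np

def capped_l1(residuals, eps):
    """Capped l1 penalty: |r| below the cap, constant eps above it.

    Points whose residual exceeds eps contribute a fixed penalty, so
    extreme (likely mislabeled) instances cannot dominate the loss.
    """
    return np.minimum(np.abs(residuals), eps)

r = np.array([0.1, 0.5, 3.0, -8.0])
print(capped_l1(r, 1.0))   # -> [0.1 0.5 1.  1. ]
# The residuals hitting the cap (3.0 and -8.0) flag likely mislabeled points.
print(np.abs(r) >= 1.0)    # -> [False False  True  True]
```

In the paper's setting this penalty is minimized jointly with a graph-based regularizer via an iterative (reweighting-style) scheme, since the capped norm is non-convex.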

