Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Shehzad Khalid ◽  
Sannia Arshad ◽  
Sohail Jabbar ◽  
Seungmin Rho

We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes.
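The abstract does not spell out the genetic-algorithm encoding; as a rough illustration, a per-classifier, per-class weight matrix can be evolved against validation accuracy of the weighted ensemble. Everything below (population size, mutation scale, the `ga_weight_search` name) is a hypothetical sketch, not the authors' implementation:

```python
import random

def ensemble_predict(weights, probs):
    # weights[k][c]: weight of classifier k on class c
    # probs[k][c]: class-c probability from classifier k for one sample
    n_classes = len(probs[0])
    scores = [sum(w[c] * p[c] for w, p in zip(weights, probs))
              for c in range(n_classes)]
    return scores.index(max(scores))

def fitness(weights, val_probs, val_labels):
    # validation accuracy of the weighted ensemble
    hits = sum(ensemble_predict(weights, p) == y
               for p, y in zip(val_probs, val_labels))
    return hits / len(val_labels)

def ga_weight_search(val_probs, val_labels, n_clf, n_classes,
                     pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    rand = lambda: [[rng.random() for _ in range(n_classes)]
                    for _ in range(n_clf)]
    pop = [rand() for _ in range(pop_size)]
    for _ in range(generations):
        # keep the fitter half, refill with blended + mutated children
        pop.sort(key=lambda w: fitness(w, val_probs, val_labels), reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([[max(0.0, (x + y) / 2 + rng.gauss(0, 0.1))
                              for x, y in zip(ra, rb)]
                             for ra, rb in zip(a, b)])
        pop = elite + children
    return max(pop, key=lambda w: fitness(w, val_probs, val_labels))
```

A real run would use held-out validation probabilities from the noise-filtered training stage; the fitness function is exactly the ensemble accuracy the paper optimizes.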

2020 ◽  
pp. 004051752093957
Author(s):  
Jingan Wang ◽  
Meng Shuo ◽  
Lei Wang ◽  
Fengxin Sun ◽  
Ruru Pan ◽  
...  

Objective fabric smoothness appearance evaluation plays an important role in the textile and apparel industry. In most previous studies, objective fabric smoothness appearance evaluation is defined as a general pattern classification problem. However, the labels in this problem exhibit a natural ordering. Nominal classification ignores this ordinal information, which may cause overfitting in model training. In addition, owing to subjective errors, measurement errors, manual errors, etc., the labels in the data might be noisy, which has rarely been discussed previously. This paper proposes an ordinal classification framework based on label noise estimation (OCF-LNE) to objectively evaluate the fabric smoothness appearance degree, taking both the ordinal information and the noise of the labels in the training data into consideration. The OCF-LNE uses the pre-trained basic classifier as a label noise estimator, and uses the estimated label noise to adjust the labels in further training. The adjusted labels introduce the ordinal constraint implicitly and reduce the negative impact of label noise in model training. Within a 10 × 10 nested cross-validation, the proposed OCF-LNE achieves 82.86%, 94.29%, and 100% average accuracies under errors of 0, 0.5, and 1 degree, respectively. Experiments on different fabric image features and basic classification models verify the effectiveness of the OCF-LNE. In addition, the proposed method outperforms the state-of-the-art methods for fabric smoothness evaluation and ordinal classification. Promisingly, the OCF-LNE can provide novel ideas for image-based fabric smoothness evaluation.
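The label-adjustment step can be pictured as blending each noisy label with the ordinal expectation of the pre-trained classifier's posterior, so that adjusted labels drift toward neighbouring degrees rather than jumping arbitrarily. The blending rule and the `alpha` trust parameter below are illustrative assumptions, not the paper's exact formula:

```python
def adjust_labels(labels, posteriors, alpha=0.5):
    # labels: noisy integer smoothness degrees (class index per sample)
    # posteriors: per-sample class-probability lists from the pre-trained classifier
    # alpha: trust placed in the noise estimate (hypothetical parameter)
    adjusted = []
    for y, p in zip(labels, posteriors):
        expected = sum(c * pc for c, pc in enumerate(p))  # ordinal expectation
        adjusted.append((1 - alpha) * y + alpha * expected)
    return adjusted
```

Because the expectation is an average over adjacent degrees, a label that disagrees with a confident posterior is pulled part of the way toward it, which is one way the ordinal constraint can enter training implicitly.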


2021 ◽  
Vol 13 (9) ◽  
pp. 1713
Author(s):  
Songwei Gu ◽  
Rui Zhang ◽  
Hongxia Luo ◽  
Mengyao Li ◽  
Huamei Feng ◽  
...  

Deep learning is an important research method in the remote sensing field. However, samples of remote sensing images are relatively few in real life, and those with markers are scarce. Many neural networks, represented by Generative Adversarial Networks (GANs), can learn from real samples to generate pseudosamples, unlike traditional methods, which often require more time and manpower to obtain samples. However, the generated pseudosamples often have poor realism and cannot be reliably used as the basis for various analyses and applications in the field of remote sensing. To address the abovementioned problems, a pseudolabeled sample generation method is proposed in this work and applied to scene classification of remote sensing images. The improved unconditional generative model that can be learned from a single natural image (Improved SinGAN), equipped with an attention mechanism, can effectively generate enough pseudolabeled samples from a single remote sensing scene image. Pseudosamples generated by the improved SinGAN model have stronger realism and require relatively little training time, and the extracted features are easily recognized in the classification network. The improved SinGAN can better identify subjects in images with complex ground scenes compared with the original network. This mechanism solves the problem of geographic errors in generated pseudosamples. This study incorporated the generated pseudosamples into the training data for the classification experiment. The results showed that the SinGAN model with the integrated attention mechanism can better guarantee feature extraction from the training data. Thus, the quality of the generated samples is improved, and the classification accuracy and stability of the classification network are also enhanced.
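The abstract does not specify which attention design is used; a common choice for re-weighting feature channels is a gate in the squeeze-and-excitation style. The sketch below omits the learned fully-connected layers of a real SE block and simply gates each channel by a sigmoid of its global average activation, purely to illustrate the mechanism (not the paper's architecture):

```python
import math

def channel_attention(feature_maps):
    # feature_maps: list of 2-D channel maps (lists of rows of floats)
    # returns each map rescaled by a sigmoid gate on its global average
    gated = []
    for ch in feature_maps:
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))  # squeeze
        gate = 1.0 / (1.0 + math.exp(-avg))                         # excite
        gated.append([[v * gate for v in row] for row in ch])       # rescale
    return gated
```

In a real SinGAN-style generator the gate would be produced by small learned layers rather than the raw channel mean, but the effect is the same: informative channels pass through, weak ones are suppressed.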


2021 ◽  
Vol 7 (4) ◽  
pp. 64
Author(s):  
Tanguy Ophoff ◽  
Cédric Gullentops ◽  
Kristof Van Beeck ◽  
Toon Goedemé

Object detection models are usually trained and evaluated on highly complicated, challenging academic datasets, which results in deep networks requiring lots of computations. However, many operational use-cases consist of more constrained situations: a limited number of classes to be detected, less intra-class variance, less lighting and background variance, constrained or even fixed camera viewpoints, etc. In these cases, we hypothesize that smaller networks could be used without deteriorating the accuracy. However, there are multiple reasons why this does not happen in practice: firstly, overparameterized networks tend to learn better, and secondly, transfer learning is usually used to reduce the necessary amount of training data. In this paper, we investigate how much we can reduce the computational complexity of a standard object detection network in such constrained object detection problems. As a case study, we focus on a well-known single-shot object detector, YoloV2, and combine three different techniques to reduce the computational complexity of the model without reducing its accuracy on our target dataset. To investigate the influence of the problem complexity, we compare two datasets: a prototypical academic one (Pascal VOC) and a real-life operational one (LWIR person detection). The three optimization steps we exploited are swapping all convolutions for depth-wise separable convolutions, pruning, and weight quantization. The results of our case study indeed substantiate our hypothesis that the more constrained a problem is, the more the network can be optimized. On the constrained operational dataset, combining these optimization techniques allowed us to reduce the computational complexity by a factor of 349, compared to only a factor of 9.8 on the academic dataset.
When running a benchmark on an Nvidia Jetson AGX Xavier, our fastest model runs more than 15 times faster than the original YoloV2 model, whilst increasing the accuracy by 5% Average Precision (AP).
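The gain from the first optimization step is easy to quantify: replacing a standard k × k convolution with a depthwise separable one cuts multiply-accumulates by roughly a factor of 1/(1/c_out + 1/k²). A quick check with hypothetical layer sizes (not taken from the paper):

```python
def conv_macs(c_in, c_out, k, h, w):
    # multiply-accumulates of a standard k x k convolution over an h x w output
    return c_in * c_out * k * k * h * w

def dws_conv_macs(c_in, c_out, k, h, w):
    # depthwise k x k (one filter per channel) followed by a 1 x 1 pointwise conv
    return c_in * k * k * h * w + c_in * c_out * h * w
```

For a 3 × 3 layer with 256 input and 512 output channels this gives a reduction of roughly 8.8×, which shows why this step alone cannot reach a factor of 349; pruning and quantization must contribute the rest.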


2021 ◽  
Author(s):  
Wenjun Yang

This thesis explores features characterizing temporal dynamics and the use of ensemble techniques to improve the performance of environmental sound recognition (ESR) systems. Firstly, for acoustic scene classification (ASC), the local binary pattern (LBP) technique is applied to extract the temporal evolution of Mel-frequency cepstral coefficient (MFCC) features, and the D3C ensemble classifier is adopted to optimize system performance. The results show that the proposed method achieved a classification improvement of 8% compared to the baseline system. Secondly, a new approach for sound event detection (SED) using Nonnegative Matrix Factor 2-D Deconvolution (NMF2D) and RUSBoost techniques is presented. The idea is to capture the two-dimensional joint spectral and temporal information from the time-frequency representation (TFR) while possibly separating the sound mixture into several sources. In addition, the RUSBoost ensemble technique is utilized in the event detection process to alleviate class imbalance in the training data. This method reduced the total error rate by 5% compared to the baseline method.
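Applying LBP to an MFCC matrix amounts to encoding, for each interior time-frequency cell, which of its eight neighbours meet or exceed its value, yielding an 8-bit texture code that captures local temporal evolution. A minimal sketch (the neighbourhood ordering and the >= comparison are arbitrary conventions here, not necessarily the thesis's):

```python
def lbp_codes(mat):
    # mat: 2-D feature matrix, e.g. MFCC coefficients (rows) x frames (columns)
    # returns the 8-neighbour LBP code for every interior cell
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for i in range(1, len(mat) - 1):
        row = []
        for j in range(1, len(mat[0]) - 1):
            center = mat[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if mat[i + di][j + dj] >= center:
                    code |= 1 << bit  # set one bit per neighbour comparison
            row.append(code)
        codes.append(row)
    return codes
```

In an ASC pipeline the histogram of these codes, rather than the raw code map, would typically be fed to the ensemble classifier.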


2020 ◽  
Vol 45 (2) ◽  
pp. 184-200
Author(s):  
David Van Bulck ◽  
Dries Goossens ◽  
Jörn Schönberger ◽  
Mario Guajardo

The sports timetabling problem is a combinatorial optimization problem that consists of creating a timetable defining against whom, when and where teams play games. This is a complex matter, since real-life sports timetabling applications are typically highly constrained. The vast number and variety of constraints, and the lack of generally accepted benchmark problem instances, mean that timetabling algorithms proposed in the literature are often tested on just one or two specific seasons of the competition under consideration. This is problematic, since only a few algorithmic insights are gained. To mitigate this issue, this article provides a problem instance repository containing over 40 different types of instances covering artificial and real-life problem instances. The construction of such a repository is not trivial, since there are dozens of constraints that need to be expressed in a standardized format. For this, our repository relies on RobinX, an XML-supported classification framework. The resulting repository provides a (non-exhaustive) overview of most real-life sports timetabling applications published over the last five decades. For every problem, a short description highlights its most distinguishing characteristics. The repository is publicly available and will be continuously updated as new instances or better solutions become available.
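As a flavour of the kind of object such instances describe: the simplest compact single round-robin timetable, where each team meets every other exactly once, can be generated with the classical circle method. This generator is illustrative only and is not part of RobinX or the repository:

```python
def single_round_robin(teams):
    # Circle method: fix the first team, rotate the rest one slot per round.
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)  # dummy team = bye
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)
                 if teams[i] is not None and teams[n - 1 - i] is not None]
        rounds.append(pairs)
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]  # rotate all but the first
    return rounds
```

Real repository instances add exactly the layers this toy ignores: home/away assignment, venue and broadcasting constraints, and fairness requirements, which is why a standardized XML format is needed.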


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 63 ◽  
Author(s):  
Benjamin Guedj ◽  
Bhargav Srinivasa Desikan

We propose a new supervised learning algorithm for classification and regression problems where two or more preliminary predictors are available. We introduce KernelCobra, a non-linear learning strategy for combining an arbitrary number of initial predictors. KernelCobra builds on the COBRA algorithm introduced by Biau et al. (2016), which combined estimators based on a notion of proximity of predictions on the training data. While the COBRA algorithm used a binary threshold to declare which training data were close and to be used, we generalise this idea by using a kernel to better encapsulate the proximity information. Such a smoothing kernel provides more representative weights to each of the training points which are used to build the aggregate and final predictor, and KernelCobra systematically outperforms the COBRA algorithm. While COBRA is intended for regression, KernelCobra deals with classification and regression. KernelCobra is included as part of the open source Python package Pycobra (0.2.4 and onward), introduced by Srinivasa Desikan (2018). Numerical experiments were undertaken to assess the performance (in terms of pure prediction and computational complexity) of KernelCobra on real-life and synthetic datasets.
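The kernel idea can be sketched in a few lines for regression: each training response is weighted by a Gaussian kernel on the distance between the machines' predictions at the query point and their predictions at that training point, replacing COBRA's hard threshold with smooth weights. The bandwidth and the Gaussian choice below are assumptions for illustration; this is not Pycobra's API:

```python
import math

def kernel_cobra_predict(machine_train_preds, train_y,
                         machine_query_preds, bandwidth=0.5):
    # machine_train_preds[i]: predictions of all machines at training point i
    # train_y[i]: observed response at training point i
    # machine_query_preds: predictions of the same machines at the query point
    weights, total = [], 0.0
    for preds_i in machine_train_preds:
        d2 = sum((p - q) ** 2 for p, q in zip(preds_i, machine_query_preds))
        w = math.exp(-d2 / (2 * bandwidth ** 2))  # smooth proximity weight
        weights.append(w)
        total += w
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

Training points where the machines "agree" with their behaviour at the query dominate the aggregate, which is exactly the proximity-of-predictions notion the COBRA line of work is built on.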


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1292 ◽  
Author(s):  
Xingdong Li ◽  
Hewei Gao ◽  
Fusheng Zha ◽  
Jian Li ◽  
Yangwei Wang ◽  
...  

This paper focuses on designing a cost function for selecting a foothold for a physical quadruped robot walking on rough terrain. The quadruped robot is modeled with Denavit–Hartenberg (DH) parameters, and a default foothold is then defined based on the model. A Time-of-Flight (TOF) camera is used to perceive terrain information and construct a 2.5D elevation map, on which terrain features are detected. The cost function is defined as the weighted sum of several elements, including terrain features and features of the relative pose between the default foothold and other candidates. It is nearly impossible to hand-code the weight vector of the function, so the weights are learned using Support Vector Machine (SVM) techniques, and the training data set is generated from the 2.5D elevation map of a real terrain under the guidance of experts. Four candidate footholds around the default foothold are randomly sampled, and the expert ranks the four candidates, rotating and scaling the view to see them clearly. Lastly, the learned cost function is used to select a suitable foothold and drive the quadruped robot to walk autonomously across rough terrain with wooden steps. Compared to the approach with the original standard static gait, the proposed cost function shows better performance.


2018 ◽  
Vol 275 ◽  
pp. 2374-2383 ◽  
Author(s):  
Maryam Sabzevari ◽  
Gonzalo Martínez-Muñoz ◽  
Alberto Suárez
