One-Class Classification Using lp-Norm Multiple Kernel Fisher Null-Space

2021 ◽  
Author(s):  
Shervin Rahimzadeh Arashloo

The paper addresses the one-class classification (OCC) problem and advocates a one-class multiple kernel learning (MKL) approach for this purpose. To this end, based on the Fisher null-space one-class classification principle, we present a multiple kernel learning algorithm in which an $\ell_p$-norm constraint ($p\geq1$) is imposed on the kernel weights. We cast the proposed one-class MKL task as a min-max saddle-point Lagrangian optimisation problem and propose an efficient method to solve it. An extension of the proposed one-class MKL approach is also considered, in which several related one-class MKL tasks are learned concurrently by constraining them to share common kernel weights. An extensive assessment of the proposed method on a range of data sets from different application domains confirms its merits against the baseline and several other algorithms.
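The abstract's central object is the $\ell_p$-norm constraint on the kernel weights. As a minimal Python illustration of how such weights combine base Gram matrices — a generic sketch, not the authors' Fisher null-space saddle-point solver — the snippet below uses the standard closed-form $\ell_p$-norm MKL weight update, with hypothetical per-kernel scores standing in for the objective terms a full solver would compute:

```python
import numpy as np

def combine_kernels(kernels, beta):
    """Weighted combination of base Gram matrices: K = sum_m beta_m * K_m."""
    return sum(b * K for b, K in zip(beta, kernels))

def lp_weight_update(scores, p):
    """Standard closed-form lp-norm MKL update: beta_m is proportional to
    scores_m^(2/(p+1)), rescaled so that ||beta||_p = 1.
    `scores` are hypothetical per-kernel objective contributions."""
    beta = np.asarray(scores, dtype=float) ** (2.0 / (p + 1.0))
    return beta / np.linalg.norm(beta, ord=p)

# Toy usage with three random PSD base kernels.
rng = np.random.default_rng(0)
kernels = []
for _ in range(3):
    A = rng.normal(size=(20, 5))
    kernels.append(A @ A.T)                      # PSD Gram matrix
beta = lp_weight_update(rng.uniform(0.5, 2.0, size=3), p=1.5)
K = combine_kernels(kernels, beta)               # combined kernel, ||beta||_p = 1
```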


2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Jinshan Qi ◽  
Xun Liang ◽  
Rui Xu

By utilizing kernel functions, support vector machines (SVMs) successfully solve linearly inseparable problems, which has greatly extended their range of applications. Using multiple kernels (MKs) to improve SVM classification accuracy has been a hot topic in the SVM research community for several years. However, most MK learning (MKL) methods impose an L1-norm constraint on the kernel combination weights, which yields a sparse yet nonsmooth solution for the kernel weights. Alternatively, an Lp-norm constraint on the kernel weights keeps all information in the base kernels; nonetheless, the resulting solution is nonsparse and sensitive to noise. Recently, some scholars presented an efficient sparse generalized MKL (L1- and L2-norms based GMKL) method, in which the combined L1- and L2-norms establish an elastic constraint on the kernel weights. In this paper, we further extend the GMKL to a more general MKL method by joining the L1- and Lp-norms. Consequently, the L1- and L2-norms based GMKL is a special case of our method when p=2. Experiments demonstrate that our L1- and Lp-norms based MKL offers higher classification accuracy than the L1- and L2-norms based GMKL, while retaining the properties of the L1- and L2-norms based GMKL.
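A minimal Python sketch of one plausible form of the elastic constraint joining the L1- and Lp-norms on the kernel weights (the paper's exact formulation, e.g. whether it uses the p-th power of the norm, may differ); with p = 2 it reduces to the penalty of the L1- and L2-norms based GMKL, matching the special case noted above:

```python
import numpy as np

def elastic_lp_penalty(beta, lam, p):
    """Mixed regularizer joining the L1- and Lp-norms on kernel weights:
    r(beta) = lam * ||beta||_1 + (1 - lam) * ||beta||_p.
    lam in [0, 1] trades sparsity (L1) against smoothness (Lp)."""
    beta = np.asarray(beta, dtype=float)
    return lam * np.abs(beta).sum() + (1.0 - lam) * np.linalg.norm(beta, ord=p)

beta = np.array([0.6, 0.3, 0.0, 0.1])
print(elastic_lp_penalty(beta, lam=0.5, p=2.0))   # GMKL special case (p = 2)
print(elastic_lp_penalty(beta, lam=0.5, p=1.5))   # generalized p-norm variant
```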


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Wenjia Niu ◽  
Kewen Xia ◽  
Baokai Zu ◽  
Jianchuan Bai

Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) allows a dataset to select the useful kernels according to its distribution characteristics rather than committing to a single pre-specified kernel. It has been shown in the literature that MKL achieves superior recognition accuracy compared with SVM, however at the expense of time-consuming computations. This creates analytical and computational difficulties in solving MKL algorithms. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm using the Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimensionality while retaining the data features under a global low-rank constraint. Furthermore, we extend the binary-class MKL to multiclass MKL via a pairwise strategy. Finally, the recognition accuracy and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm is an efficient kernel weight allocation method and substantially boosts the performance of MKL.
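The paper's approximation is built on LRR; as a minimal, generic illustration of the low-rank kernel-approximation idea (not the authors' LRR algorithm), the Python sketch below replaces a Gram matrix with its best rank-r approximation via truncated eigendecomposition:

```python
import numpy as np

def low_rank_kernel(K, rank):
    """Best rank-r approximation (Eckart-Young) of a symmetric PSD Gram
    matrix K via truncated eigendecomposition; clips tiny negative
    eigenvalues caused by numerical error."""
    vals, vecs = np.linalg.eigh(K)                 # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:rank]            # indices of the top-r eigenvalues
    V = vecs[:, idx]
    L = np.clip(vals[idx], 0.0, None)
    return (V * L) @ V.T

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 100))
K = A @ A.T                                          # a full-rank PSD matrix
K10 = low_rank_kernel(K, rank=10)
print(np.linalg.norm(K - K10) / np.linalg.norm(K))   # relative approximation error
```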


2018 ◽  
Vol 112 ◽  
pp. 111-117 ◽  
Author(s):  
Qingchao Wang ◽  
Guangyuan Fu ◽  
Linlin Li ◽  
Hongqiao Wang ◽  
Yongqiang Li

Author(s):  
Peiyan Wang ◽  
Dongfeng Cai

Multiple kernel learning (MKL) aims at learning an optimal combination of base kernels with which an appropriate hypothesis is determined on the training data. MKL owes its flexibility to automated kernel learning, and also reflects the fact that typical learning problems often involve multiple and heterogeneous data sources. The target kernel is one of the most important components of many MKL methods, which find the kernel weights by maximizing the similarity or alignment between the weighted kernel and the target kernel. Existing target kernels are defined in a global manner, which (1) assigns the same target value to closer and farther sample pairs, inappropriately neglecting the variation among samples; and (2) is independent of the training data and hence hard to approximate with the base kernels. As a result, maximizing the similarity to a global target kernel can leave the pre-specified kernels less effectively utilized, further reducing classification performance. In this paper, instead of defining a global target kernel, a localized target kernel is calculated for each sample pair from the training data, which is flexible and handles sample variation well. A new target kernel, named the empirical target kernel, is proposed to implement this idea, and three corresponding algorithms are designed to efficiently utilize it. Experiments conducted on four challenging MKL problems show that our algorithms outperform other methods, verifying the effectiveness and superiority of the proposed approach.
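The alignment score such methods maximize is standard. Below is a minimal Python sketch of centered kernel alignment between a candidate kernel and a target kernel, together with the classical global target $yy^{\top}$ that the paper argues against; the localized empirical target kernel is the paper's contribution and is not reproduced here:

```python
import numpy as np

def center(K):
    """Center a Gram matrix: Kc = H K H with H = I - (1/n) * ones."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K1, K2):
    """Centered kernel alignment:
    A(K1, K2) = <K1c, K2c>_F / (||K1c||_F * ||K2c||_F)."""
    K1c, K2c = center(K1), center(K2)
    return (K1c * K2c).sum() / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

# Classical *global* target kernel for labels y in {-1, +1}: T = y y^T.
rng = np.random.default_rng(2)
y = rng.choice([-1.0, 1.0], size=30)
T = np.outer(y, y)
X = rng.normal(size=(30, 4))
K = X @ X.T                                   # a stand-in weighted kernel
print(alignment(K, T))                        # score a kernel-weight search would maximize
```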


2018 ◽  
Vol 23 (11) ◽  
pp. 3697-3706
Author(s):  
Qingchao Wang ◽  
Guangyuan Fu ◽  
Hongqiao Wang ◽  
Linlin Li ◽  
Shuai Huang

2019 ◽  
Vol 23 (5) ◽  
pp. 1990-2001 ◽  
Author(s):  
Vangelis P. Oikonomou ◽  
Spiros Nikolopoulos ◽  
Ioannis Kompatsiaris
