kernel trick
Recently Published Documents

TOTAL DOCUMENTS: 46 (five years: 15)
H-INDEX: 8 (five years: 2)

2021 · Vol 2021 · pp. 1-11
Author(s): Xi Liu, Zengrong Zhan, Guo Niu

Image recognition tasks involve a growing volume of symmetric positive definite (SPD) matrix data. SPD manifolds exhibit nonlinear geometry, so Euclidean machine learning methods cannot be applied to them directly. The kernel trick on SPD manifolds is based on projecting data onto a reproducing kernel Hilbert space. Unfortunately, existing kernel methods do not consider the connection between SPD matrices and linear projections. This paper therefore proposes a framework that models the kernel map through the correlation between SPD matrices and projections. To this end, it formulates a Hilbert–Schmidt independence criterion (HSIC) regularization framework based on the kernel trick, where HSIC is commonly used to express the dependence between two datasets. The framework makes it possible to extend existing kernel methods to new HSIC-regularized kernel methods. Building on this framework, the paper also proposes an algorithm for SPD manifolds called HSIC-regularized graph discriminant analysis (HRGDA). Experimental results on several classification tasks show that both the HSIC regularization framework and HRGDA are accurate and effective.
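
To make the HSIC regularizer concrete, the following is a minimal sketch of the standard empirical HSIC estimator computed from two kernel matrices; the RBF kernels, the bandwidth, and the toy data are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the empirical HSIC estimator used as a dependence
# measure between two datasets, computed from their Gram matrices alone.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def hsic(K, L):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2 with H = I - 11^T / n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Y = X @ rng.normal(size=(5, 3))   # dependent on X
Z = rng.normal(size=(50, 3))      # independent of X

print(hsic(rbf_kernel(X), rbf_kernel(Y)))  # comparatively large
print(hsic(rbf_kernel(X), rbf_kernel(Z)))  # close to zero
```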


2021 · Vol 31 (5)
Author(s): Joachim Schreurs, Iwein Vranckx, Mia Hubert, Johan A. K. Suykens, Peter J. Rousseeuw

The minimum regularized covariance determinant method (MRCD) is a robust estimator for multivariate location and scatter, which detects outliers by fitting a robust covariance matrix to the data. Its regularization ensures that the covariance matrix is well-conditioned in any dimension. The MRCD assumes that the non-outlying observations are roughly elliptically distributed, but many datasets are not of that form. Moreover, the computation time of MRCD increases substantially when the number of variables goes up, and nowadays datasets with many variables are common. The proposed kernel minimum regularized covariance determinant (KMRCD) estimator addresses both issues. It is not restricted to elliptical data because it implicitly computes the MRCD estimates in a kernel-induced feature space. A fast algorithm is constructed that starts from kernel-based initial estimates and exploits the kernel trick to speed up the subsequent computations. Based on the KMRCD estimates, a rule is proposed to flag outliers. The KMRCD algorithm performs well in simulations, and is illustrated on real-life data.
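
The speed-up mentioned above rests on the classical kernel-trick identity for feature-space distances. Below is a simplified sketch of that identity alone (distances to the mean of a subset, computed from the Gram matrix without forming the feature map), not the full KMRCD algorithm with its initial estimates and iterations; the RBF kernel and the toy outlier data are assumptions for illustration.

```python
# Squared feature-space distances ||phi(x_i) - mean_{j in H} phi(x_j)||^2
# can be computed from kernel evaluations only:
#   K_ii - (2/h) sum_{j in H} K_ij + (1/h^2) sum_{j,l in H} K_jl
import numpy as np

def rbf_gram(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def dist2_to_subset_mean(K, h_idx):
    """Distance of every point to the feature-space mean of subset H."""
    h = len(h_idx)
    K_iH = K[:, h_idx]                 # k(x_i, x_j) for j in H
    K_HH = K[np.ix_(h_idx, h_idx)]     # k(x_j, x_l) for j, l in H
    return np.diag(K) - 2.0 * K_iH.sum(axis=1) / h + K_HH.sum() / h**2

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(45, 2)),           # bulk of the data
               rng.normal(loc=6.0, size=(5, 2))])  # a few outliers
K = rbf_gram(X)
d2 = dist2_to_subset_mean(K, np.arange(45))
print(np.argsort(d2)[-5:])  # the outliers get the largest distances
```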


2021 · Vol 7 (1)
Author(s): Takeru Kusumoto, Kosuke Mitarai, Keisuke Fujii, Masahiro Kitagawa, Makoto Negoro

The kernel trick allows us to employ a high-dimensional feature space for a machine learning task without explicitly storing the features. Recently, the idea of utilizing quantum systems to compute kernel functions through interference has been demonstrated experimentally. However, the dimension of the feature space in those experiments was smaller than the number of data points, which negates the computational advantage over explicit methods. Here we show the first experimental demonstration of a quantum kernel machine in a regime where the dimension of the feature space greatly exceeds the number of data points, using ¹H nuclear spins in a solid. The use of NMR allows us to obtain the kernel values in a single-shot experiment. We employ engineered dynamics correlating 25 spins, which is equivalent to using a feature space with a dimension over 10^15. This work demonstrates quantum machine learning on one of the largest quantum systems to date.
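
As a rough illustration of how state overlaps define a kernel, the following toy simulation evaluates k(x, x') = |⟨φ(x)|φ(x')⟩|² for a simple product-state feature map. The feature map is an assumption chosen for clarity; it is not the engineered 25-spin NMR dynamics used in the experiment.

```python
# Toy classical simulation of a quantum kernel k(x, x') = |<phi(x)|phi(x')>|^2,
# where |phi(x)> is a product state of single-qubit rotations. Note how the
# feature-space dimension (2^n) can vastly exceed the number of data points.
import numpy as np

def feature_state(x):
    """Map a real vector x to an n-qubit product state of dimension 2^n."""
    state = np.array([1.0 + 0j])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2) * 1j])
        state = np.kron(state, qubit)   # tensor product grows the space
    return state

def quantum_kernel(X):
    states = [feature_state(x) for x in X]
    n = len(states)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = abs(np.vdot(states[i], states[j])) ** 2
    return K

rng = np.random.default_rng(2)
X = rng.uniform(0, np.pi, size=(6, 10))  # 10 "qubits": a 1024-dim feature space
print(quantum_kernel(X).round(3))        # symmetric with unit diagonal
```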


2021
Author(s): Hessam Ahmadi, Emad Fatemizadeh, Ali Motie Nasrabadi

Neuroimaging data analysis reveals the underlying interactions in the brain. Choosing a proper tool to characterize brain functional connectivity is essential yet controversial: researchers have not reached a definitive conclusion between linear and nonlinear approaches, as both have pros and cons. To examine this question, this study investigates functional Magnetic Resonance Imaging (fMRI) data from different stages of Alzheimer's disease (AD). For the linear approach, the Pearson Correlation Coefficient (PCC) is employed as a common technique to generate brain functional graphs. For the nonlinear approach, two methods are utilized: Distance Correlation (DC) and the kernel trick. Using these three techniques together with graph theory, functional brain networks for all stages of AD are constructed and then sparsified. Global graph measures are then calculated over the networks, and a non-parametric permutation test is conducted. The results reveal that the nonlinear approaches have more power to discriminate groups at all stages of AD, and that the kernel trick method is more powerful than the DC technique. AD degrades the brain functional graphs most in the early stages of the disease: in the first phase, both functional integration and segregation decline, and as AD progresses, functional segregation declines further. The most discriminative feature at all stages is the clustering coefficient, which reflects brain functional segregation.
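
For intuition, here is a small sketch that builds connectivity matrices from ROI time series using a linear (Pearson) and a kernel-based similarity, sparsifies them, and computes a global clustering coefficient. The RBF bandwidth, the sparsity level, and the synthetic time series are illustrative assumptions, not the paper's exact pipeline.

```python
# Contrast a linear and a kernel-based similarity for building functional
# connectivity graphs, then compute one global graph measure on each.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
ts = rng.normal(size=(20, 150))          # 20 ROIs x 150 time points

# Linear connectivity: Pearson correlation between ROI time series.
pcc = np.corrcoef(ts)

# Nonlinear connectivity: RBF kernel similarity between ROI time series.
sq = np.sum(ts**2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * ts @ ts.T, 0.0)
rbf = np.exp(-d2 / (2.0 * np.median(d2)))    # median-heuristic bandwidth

def global_clustering(sim, density=0.2):
    """Keep the top `density` fraction of edges, then average clustering."""
    A = np.abs(sim.copy())
    np.fill_diagonal(A, 0.0)
    thresh = np.quantile(A[np.triu_indices_from(A, 1)], 1 - density)
    G = nx.from_numpy_array((A >= thresh).astype(float))
    return nx.average_clustering(G)

print(global_clustering(pcc), global_clustering(rbf))
```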


Electronics · 2019 · Vol 8 (10) · pp. 1195
Author(s): Qing Ai, Anna Wang, Aihua Zhang, Wenhui Wang, Yang Wang

Twin-KSVC (Twin Support Vector Classification for K classes) is a novel and efficient multiclass twin support vector machine. However, Twin-KSVC has two disadvantages: (1) each pair of binary sub-classifiers has to compute inverse matrices, and (2) for nonlinear problems, a pair of additional primal problems must be constructed in each pair of binary sub-classifiers. To address these disadvantages, this paper proposes a new multiclass twin hypersphere support vector machine, named Twin Hypersphere-KSVC. Like Twin-KSVC, Twin Hypersphere-KSVC evaluates each sample within a 1-versus-1-versus-rest structure. However, instead of seeking two nonparallel hyperplanes in each pair of binary sub-classifiers as Twin-KSVC does, it seeks a pair of hyperspheres. Compared with Twin-KSVC, Twin Hypersphere-KSVC avoids computing inverse matrices and, for nonlinear problems, can apply the kernel trick to the linear case directly. Extensive comparisons of Twin Hypersphere-KSVC with Twin-KSVC on benchmark datasets from the UCI repository and on several real engineering applications show that the proposed algorithm achieves higher training speed and better generalization performance.
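
To illustrate the hypersphere idea with the kernel trick, the sketch below assigns a point to the class whose feature-space sphere boundary it is nearest to, with each sphere centered at the class kernel mean. This simplified centroid sphere is an assumption standing in for the optimized spheres of Twin Hypersphere-KSVC, which the paper derives from dual problems; names like CentroidHypersphere are hypothetical.

```python
# Hypersphere-style multiclass decisions via the kernel trick: distances to
# each class's feature-space centroid are computed from kernel values only.
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

class CentroidHypersphere:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_, self.r_ = [], []
        for c in self.classes_:
            Xc = X[y == c]
            Kc = rbf(Xc, Xc)
            # squared distances of training points to their class kernel mean
            d2 = np.diag(Kc) - 2.0 * Kc.mean(1) + Kc.mean()
            self.X_.append(Xc)
            self.r_.append(np.sqrt(np.quantile(d2, 0.9)))  # 90% radius
        return self

    def predict(self, X):
        scores = []
        for Xc, r in zip(self.X_, self.r_):
            Kc = rbf(Xc, Xc)
            d2 = 1.0 - 2.0 * rbf(X, Xc).mean(1) + Kc.mean()  # k(x, x) = 1
            scores.append(np.sqrt(d2) - r)   # signed distance to boundary
        return self.classes_[np.argmin(np.vstack(scores), axis=0)]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print((CentroidHypersphere().fit(X, y).predict(X) == y).mean())
```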

