Hilbert–Schmidt Independence Criterion Regularization Kernel Framework on Symmetric Positive Definite Manifolds

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xi Liu ◽  
Zengrong Zhan ◽  
Guo Niu

Image recognition tasks increasingly involve data in the form of symmetric positive definite (SPD) matrices. Because SPD manifolds have a nonlinear geometry, Euclidean machine learning methods cannot be applied to them directly. The kernel trick on SPD manifolds rests on projecting the data into a reproducing kernel Hilbert space. Unfortunately, existing kernel methods do not consider the connection between SPD matrices and linear projections. This paper therefore proposes a framework that models the kernel map using the correlation between SPD matrices and their projections. To this end, it formulates a Hilbert–Schmidt independence criterion (HSIC) regularization framework based on the kernel trick, where HSIC, a measure commonly used to quantify the statistical dependence between two datasets, serves as the regularizer. The framework allows existing kernel methods to be extended to new HSIC-regularized kernel methods. Building on it, the paper also proposes an algorithm for SPD manifolds called HSIC-regularized graph discriminant analysis (HRGDA). Experimental results on several classification tasks show that both the HSIC regularization framework and HRGDA are accurate and effective.
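
For readers unfamiliar with HSIC itself, the following is a minimal sketch of the standard (biased) empirical HSIC estimator between two datasets, not the paper's SPD-manifold framework; the Gaussian kernels and bandwidths are illustrative choices.

```python
# Biased empirical HSIC: HSIC(X, Y) = trace(K H L H) / (n - 1)^2,
# where K, L are Gram matrices and H is the centering matrix.
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gaussian RBF Gram matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Larger values indicate stronger statistical dependence."""
    n = X.shape[0]
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Dependent pair scores much higher than an independent pair:
print(hsic(X, X**2), hsic(X, rng.normal(size=(100, 5))))
```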

Author(s):  
Bhanu Chander

The basic idea of artificial intelligence and machine learning is that machines can learn from data and previous experience and then perform tasks in new situations. In today's digitalized, big-data world, established machine learning methods succeed on numerous practical tasks, provided high-quality computational resources are available. Quantum machine learning methods, by contrast, can work exponentially faster than their classical counterparts by exploiting quantum mechanics. By taking advantage of quantum effects such as interference and entanglement, quantum computers can efficiently solve selected problems that are believed to be hard for classical machines. Quantum computing is, perhaps surprisingly, closely related to kernel methods in machine learning. This chapter therefore covers quantum computation, advances in QML techniques, QML kernel spaces and optimization, and future directions for QML.
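
The kernel connection mentioned above can be made concrete: a quantum kernel evaluates k(x, y) = |⟨φ(x)|φ(y)⟩|², where |φ(x)⟩ is the state prepared by a data-encoding circuit. Below is a minimal classical simulation using per-feature angle encoding; the encoding is a common textbook choice, not a construction from this chapter.

```python
# Fidelity kernel k(x, y) = |<phi(x)|phi(y)>|^2 for product states
# obtained by encoding each feature as a single-qubit RY rotation.
import numpy as np

def feature_state(x):
    """Product state: qubit j is RY(x_j)|0> = [cos(x_j/2), sin(x_j/2)]."""
    state = np.array([1.0])
    for angle in x:
        qubit = np.array([np.cos(angle / 2), np.sin(angle / 2)])
        state = np.kron(state, qubit)
    return state

def quantum_kernel(x, y):
    """Squared overlap of the encoded states (states are real here)."""
    return np.abs(feature_state(x) @ feature_state(y)) ** 2

x, y = np.array([0.3, 1.2]), np.array([0.5, 0.9])
print(quantum_kernel(x, y))  # a positive definite kernel value in [0, 1]
```

Since k is an inner product of (outer-product) feature maps, it is positive definite, so any classical kernel method can consume it unchanged.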


2021 ◽  
Vol 15 (5) ◽  
Author(s):  
Monika Drewnik ◽  
Tomasz Miller ◽  
Zbigniew Pasternak-Winiarski

The aim of the paper is to create a link between the theory of reproducing kernel Hilbert spaces (RKHS) and the notion of a unitary representation of a group or of a groupoid. More specifically, it is demonstrated on one hand how to construct a positive definite kernel and an RKHS for a given unitary representation of a group(oid), and on the other hand how to retrieve the unitary representation of a group or a groupoid from a positive definite kernel defined on that group(oid) with the help of the Moore–Aronszajn theorem. The kernel constructed from the group(oid) representation is inspired by the kernel defined in terms of the convolution of functions on a locally compact group. Several illustrative examples of reproducing kernels related to unitary representations of groupoids are discussed in detail. The paper concludes with a brief overview of possible applications of the proposed constructions.
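
As a minimal worked version of the group case (the groupoid case is analogous, but its details are in the paper): given a unitary representation π of a group G on a Hilbert space H and a fixed vector v ∈ H, the associated kernel is positive definite, so the Moore–Aronszajn theorem yields a unique RKHS on G.

```latex
% Kernel built from a unitary representation \pi and a vector v:
\[
  K(g,h) \;=\; \langle \pi(g)v,\, \pi(h)v \rangle_{H}
         \;=\; \langle \pi(h^{-1}g)v,\, v \rangle_{H},
\]
% Positive definiteness follows directly, since for any g_1,\dots,g_n \in G
% and c_1,\dots,c_n \in \mathbb{C}:
\[
  \sum_{i,j=1}^{n} c_i \overline{c_j}\, K(g_i,g_j)
  \;=\; \Bigl\| \sum_{i=1}^{n} c_i\, \pi(g_i)\, v \Bigr\|_{H}^{2} \;\ge\; 0.
\]
```

Note that K depends only on h⁻¹g, which is what links this construction to convolution-type kernels on locally compact groups.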


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Xi Liu ◽  
Peng Yang ◽  
Zengrong Zhan ◽  
Zhengming Ma

The region covariance descriptor (RCD), a symmetric positive definite (SPD) matrix, is commonly used in image representation. Because SPD manifolds have a non-Euclidean geometry, Euclidean machine learning methods are not directly applicable to them. This work proposes an improved covariance descriptor, the hybrid region covariance descriptor (HRCD), which incorporates mean feature information into the RCD to improve the latter's discriminative performance. To address the non-Euclidean properties of SPD manifolds, the study also proposes an algorithm called Hilbert-Schmidt independence criterion subspace learning (HSIC-SL) for SPD manifolds, aimed at improving classification accuracy. The algorithm uses a kernel function to embed SPD matrices into a reproducing kernel Hilbert space and then maps them to a linear space. To make this mapping account for the correlation between SPD matrices and the linear projection, the method introduces global HSIC maximization into the model. Classification experiments with the HRCD and HSIC-SL on the COIL-20, ETH-80, QMUL, FERET face, and Brodatz datasets show the proposed method to be more accurate and effective than existing methods.
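
To illustrate why RCDs are SPD data, here is a minimal sketch of a plain region covariance descriptor; the per-pixel feature set (position, intensity, gradient magnitudes) is a common choice from the RCD literature, not necessarily the one behind the paper's hybrid descriptor.

```python
# Region covariance descriptor: pool per-pixel feature vectors over a
# patch into one covariance matrix, which lives on the SPD manifold.
import numpy as np

def region_covariance(patch, eps=1e-6):
    """SPD covariance descriptor of a 2-D grayscale patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()])  # 5 x (h*w)
    C = np.cov(F)                         # 5 x 5 feature covariance
    return C + eps * np.eye(C.shape[0])   # jitter keeps it strictly SPD

patch = np.random.default_rng(1).random((32, 32))
C = region_covariance(patch)
print(C.shape, np.all(np.linalg.eigvalsh(C) > 0))  # (5, 5) True
```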


Author(s):  
Vahid Jalali ◽  
David Leake ◽  
Najmeh Forouzandehmehr

The ability of case-based reasoning (CBR) systems to solve novel problems depends on their capability to adapt past solutions to new circumstances. However, acquiring the knowledge required for case adaptation is a classic challenge for CBR, which motivates the use of machine learning methods to generate adaptation knowledge. A popular approach uses the case difference heuristic (CDH) to generate adaptation rules from pairs of cases in the case base. Its premise is that observed differences in case solutions result from differences in the problems they solve, and so can form the basis of rules for adapting cases with similar problem differences. Extensive research has successfully applied the CDH approach to adaptation rule learning for case-based regression (numerical prediction) tasks, but classification tasks have remained outside its scope. The work presented in this paper addresses that gap by extending CDH-based learning of adaptation rules to cases with categorical features and solutions. It presents the generalized case value heuristic for assessing case and solution differences and applies it in an ensemble-based case-based classification method, ensembles of adaptations for classification (EAC), built on the authors' previous work on ensembles of adaptations for regression (EAR). Experimental results support the effectiveness of EAC.
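
The following is a minimal sketch of the CDH idea for the numeric (regression) setting described above; the nearest-rule retrieval and toy data are illustrative, not the authors' EAC/EAR method.

```python
# Case difference heuristic: each pair of cases yields one adaptation
# rule mapping a problem difference to a solution difference.
import numpy as np

def build_rules(problems, solutions):
    """One rule per ordered pair of cases: (problem diff, solution diff)."""
    rules = []
    n = len(problems)
    for i in range(n):
        for j in range(n):
            if i != j:
                rules.append((problems[i] - problems[j],
                              solutions[i] - solutions[j]))
    return rules

def adapt(query, base_problem, base_solution, rules):
    """Apply the rule whose problem difference best matches query - base."""
    target_diff = query - base_problem
    best = min(rules, key=lambda r: np.linalg.norm(r[0] - target_diff))
    return base_solution + best[1]

P = np.array([[1.0], [2.0], [3.0]])
S = np.array([10.0, 20.0, 30.0])   # toy linear relation
rules = build_rules(P, S)
# 30.0: the single nearest rule overshoots the true 25; EAR/EAC instead
# average over an ensemble of such rule applications.
print(adapt(np.array([2.5]), P[1], S[1], rules))
```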


2016 ◽  
Vol 14 (06) ◽  
pp. 795-808 ◽  
Author(s):  
Andreas Christmann ◽  
Florian Dumpert ◽  
Dao-Hong Xiang

Statistical machine learning plays an important role in modern statistics and computer science. One main goal of statistical machine learning is to provide universally consistent algorithms, i.e., algorithms whose estimators converge in probability, or in some stronger sense, to the Bayes risk or to the Bayes decision function. Kernel methods based on minimizing the regularized risk over a reproducing kernel Hilbert space (RKHS) belong to these statistical machine learning methods. It is in general unknown which kernel yields optimal results for a particular data set or for the unknown probability measure. Hence, various kernel learning methods have been proposed to choose the kernel, and therefore also its RKHS, in a data-adaptive manner. Nevertheless, many practitioners use the classical Gaussian RBF kernel or certain Sobolev kernels with good success. The goal of this paper is to offer one possible theoretical explanation for this empirical fact.
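
For concreteness, here is a minimal sketch of regularized risk minimization over an RKHS with the classical Gaussian RBF kernel: kernel ridge regression, whose representer-theorem solution reduces to a linear solve. The bandwidth, regularization strength, and toy data are illustrative.

```python
# Kernel ridge regression: minimize (1/n)||f(X) - y||^2 + lam * ||f||_RKHS^2;
# the minimizer is f = sum_i alpha_i k(x_i, .), alpha = (K + n*lam*I)^{-1} y.
import numpy as np

def rbf(A, B, sigma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

lam = 1e-2
K = rbf(X, X)
alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf(X_test, X) @ alpha)   # smooth estimate of sin on the test grid
```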


2018 ◽  
Vol 16 (06) ◽  
pp. 1850026
Author(s):  
Qiangrong Jiang ◽  
Jiajia Ma

Considering the classification of compounds as a nonlinear problem, kernel methods are a good choice. Graph kernels provide an elegant framework combining machine learning methods with graph theory. The essence of a graph kernel is to compare the substructures of two graphs, but how to extract those substructures remains an open question. In this paper, we propose a novel matrix-based graph kernel named the local block kernel, which can compare the similarity of partial substructures containing any number of vertices. The paper finally tests the efficacy of this novel graph kernel against a number of published mainstream methods on two datasets, NCI1 and NCI109, chosen for ease of comparison.
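
To make the substructure-comparison idea concrete, here is a generic R-convolution-style sketch; it is not the paper's local block kernel, whose definition is not reproduced in this abstract. Each vertex contributes the adjacency block of its closed neighborhood, summarized by its padded eigenvalue spectrum, and blocks from the two graphs are compared pairwise.

```python
# A generic block-based graph kernel: sum of RBF similarities between
# spectral summaries of local adjacency blocks of the two graphs.
import numpy as np

def block_summaries(A, k=5):
    """Sorted eigenvalues of each vertex's closed-neighborhood block."""
    summaries = []
    for v in range(A.shape[0]):
        nb = np.flatnonzero(A[v]).tolist() + [v]
        block = A[np.ix_(nb, nb)]
        ev = np.sort(np.linalg.eigvalsh(block))[::-1][:k]
        summaries.append(np.pad(ev, (0, k - len(ev))))
    return np.array(summaries)

def block_kernel(A1, A2, gamma=1.0, k=5):
    S1, S2 = block_summaries(A1, k), block_summaries(A2, k)
    d2 = np.sum(S1**2, 1)[:, None] + np.sum(S2**2, 1)[None, :] - 2 * S1 @ S2.T
    return np.exp(-gamma * d2).sum()   # sum over all block pairs

path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)      # path P3
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # triangle K3
print(block_kernel(path, path), block_kernel(path, triangle))
```

Because it sums a positive definite base kernel over parts, the result is itself a valid positive definite graph kernel.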


2021 ◽  
Vol 8 (4) ◽  
pp. 726-735
Author(s):  
S. Lyaqini ◽  
M. Nachaoui
This paper deals with a machine-learning model arising from the healthcare sector, namely diabetes progression. The model is reformulated as a regularized optimization problem in which the fidelity term is the L1 norm and the minimization takes place over a reproducing kernel Hilbert space (RKHS). The model is solved numerically with the Adam method, which proves successful in the numerical experiments when compared to the stochastic gradient descent (SGD) algorithm.
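
Here is a minimal sketch of the kind of setup described above: an L1 (absolute-error) fidelity term over a kernel expansion f = Kα with an RKHS-norm penalty, minimized by a hand-rolled Adam loop. The data, kernel, and hyperparameters are toy stand-ins, not the paper's diabetes-progression model.

```python
# Minimize (1/n)||K a - y||_1 + lam * a^T K a with Adam (subgradient of L1).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 1))
y = np.abs(X[:, 0]) + 0.1 * rng.normal(size=60)  # toy targets

K = np.exp(-(X - X.T) ** 2)                      # Gaussian RBF Gram matrix
lam, n = 1e-3, len(X)

alpha = np.zeros(n)
m = v = np.zeros(n)
lr, b1, b2, eps = 0.01, 0.9, 0.999, 1e-8
for t in range(1, 2001):
    r = K @ alpha - y
    g = K @ np.sign(r) / n + 2 * lam * K @ alpha  # subgradient of the loss
    m = b1 * m + (1 - b1) * g                     # first-moment estimate
    v = b2 * v + (1 - b2) * g**2                  # second-moment estimate
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    alpha -= lr * m_hat / (np.sqrt(v_hat) + eps)  # Adam update

print(np.mean(np.abs(K @ alpha - y)))  # mean absolute training error
```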

