Instance-Based Representation Using Multiple Kernel Learning for Predicting Conversion to Alzheimer Disease

2019 ◽  
Vol 29 (02) ◽  
pp. 1850042 ◽  
Author(s):  
D. Collazos-Huertas ◽  
D. Cárdenas-Peña ◽  
G. Castellanos-Dominguez

The early detection of Alzheimer’s disease (AD) and the quantification of its progression pose multiple difficulties for machine learning algorithms. Two of the most relevant issues are missing data and the interpretability of results. To address both, we introduce a methodology for predicting the conversion of mild cognitive impairment patients to Alzheimer’s disease from structural brain MRI volumes. First, we use morphological measures of each brain structure to build an instance-based feature mapping that copes with missing follow-up visits. Then, the extracted feature mappings are combined into a single representation through a convex combination of reproducing kernels, with the per-structure weighting parameters tuned by maximizing the centered-kernel alignment criterion. We evaluate the proposed methodology with two well-known classifiers on the ADNI database, which is devoted to assessing the combined prognostic value of several AD biomarkers. The experimental results show that our proposed method of instance-based representation using multiple kernel learning detects mild cognitive impairment and predicts conversion to Alzheimer’s disease within three years of the initial screening. Moreover, the brain structures receiving the largest combination weights are directly related to memory and cognitive functions.
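The weighting step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it scores each per-structure kernel by centered-kernel alignment with an ideal label kernel and normalizes the scores into convex combination weights. The RBF kernels over random matrices stand in for the morphological feature mappings, which are an assumption for this sketch.

```python
import numpy as np

def center_kernel(K):
    # double-center the Gram matrix: Kc = H K H with H = I - 11^T/n
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K1, K2):
    # centered-kernel alignment: <Kc1, Kc2>_F / (||Kc1||_F ||Kc2||_F)
    Kc1, Kc2 = center_kernel(K1), center_kernel(K2)
    return np.sum(Kc1 * Kc2) / (np.linalg.norm(Kc1) * np.linalg.norm(Kc2))

def rbf(X, gamma=0.5):
    # Gaussian (RBF) kernel over the rows of X
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * D)

rng = np.random.default_rng(0)
y = np.array([1] * 5 + [-1] * 5)
Ky = np.outer(y, y).astype(float)  # ideal (label) kernel

# hypothetical stand-in: one feature matrix per brain structure
kernels = [rbf(rng.normal(size=(10, 4))) for _ in range(3)]

a = np.array([alignment(K, Ky) for K in kernels])
w = np.maximum(a, 1e-12)  # clip negative alignments
w /= w.sum()              # convex combination weights
K_comb = sum(wi * Ki for wi, Ki in zip(w, kernels))
```

The combined Gram matrix `K_comb` can then be fed to any kernel classifier; the weights `w` provide the per-structure interpretability the abstract mentions.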

2018 ◽  
Vol 30 (3) ◽  
pp. 820-855 ◽  
Author(s):  
Wei Wang ◽  
Hao Wang ◽  
Chen Zhang ◽  
Yang Gao

Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., a source domain and a target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing discriminating power to transfer and standard learning machines to propagate across the two domains. In the first, a cross-domain Mahalanobis distance is learned by combining three goals: reducing the distribution difference between the domains, preserving the geometry of the target domain data, and aligning the geometry of the source domain data with its label information. Furthermore, to handle more complex domain adaptation problems, we go beyond linear cross-domain metric learning by extending the first method to a multiple kernel learning framework. A convex combination of multiple kernels and a linear transformation are learned jointly in a single optimization, which greatly benefits the exploitation of prior knowledge and the description of data characteristics. Comprehensive experiments on three real-world applications (face recognition, text classification, and object categorization) verify that the proposed methods outperform state-of-the-art metric learning and domain adaptation methods.
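To make the Mahalanobis-distance idea concrete, here is a minimal sketch, not the letter's information-theoretic objective: the metric matrix `M` is taken as the inverse of the covariance pooled over both domains, which jointly whitens them and is a crude surrogate for reducing the cross-domain distribution difference. The two simulated domains and their shift are assumptions for illustration.

```python
import numpy as np

def mahalanobis(x, z, M):
    # Mahalanobis distance sqrt((x - z)^T M (x - z)) under metric M
    d = x - z
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(1)
Xs = rng.normal(0.0, 1.0, size=(50, 3))  # source domain samples
Xt = rng.normal(0.5, 1.5, size=(50, 3))  # shifted, rescaled target domain

# crude cross-domain metric: inverse of the pooled covariance,
# regularized for numerical stability (a surrogate, not the paper's method)
pooled = np.cov(np.vstack([Xs, Xt]).T)
M = np.linalg.inv(pooled + 1e-6 * np.eye(3))

d_cross = mahalanobis(Xs[0], Xt[0], M)
```

With `M = I` the distance reduces to the ordinary Euclidean distance; learning `M` (or, in the kernelized extension, a combination of kernels plus a linear map) is what adapts the geometry across domains.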


2012 ◽  
Vol 24 (7) ◽  
pp. 1853-1881 ◽  
Author(s):  
Hideitsu Hino ◽  
Nima Reyhani ◽  
Noboru Murata

Kernel methods are known to be effective for nonlinear multivariate analysis. One of the main issues in their practical use is the selection of the kernel, and many studies have addressed kernel selection and kernel learning. Multiple kernel learning (MKL) is one of the most promising kernel optimization approaches. Kernel methods apply to various classifiers, including Fisher discriminant analysis (FDA). FDA yields the Bayes-optimal classification axis if the data distribution of each class in the feature space is Gaussian with a shared covariance structure. Based on this fact, an MKL framework built on the notion of Gaussianity is proposed. As a concrete implementation, an empirical characteristic function is adopted to measure Gaussianity in the feature space associated with a convex combination of kernel functions, and two MKL algorithms are derived. Experimental results on several data sets show that the proposed kernel learning followed by FDA offers strong classification power.
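The empirical-characteristic-function idea can be illustrated in one dimension. The sketch below, an assumption-laden stand-in rather than the paper's feature-space criterion, compares the empirical characteristic function of a sample against the characteristic function of a Gaussian fitted to that sample; a smaller discrepancy indicates a more Gaussian sample.

```python
import numpy as np

def ecf(x, t):
    # empirical characteristic function of sample x at frequencies t
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def gaussianity_score(x, t):
    # mean squared discrepancy between the ECF and the characteristic
    # function exp(i*t*mu - var*t^2/2) of a Gaussian fitted to x
    mu, var = x.mean(), x.var()
    phi_gauss = np.exp(1j * t * mu - 0.5 * var * t**2)
    return float(np.mean(np.abs(ecf(x, t) - phi_gauss) ** 2))

rng = np.random.default_rng(2)
t = np.linspace(-3.0, 3.0, 61)

s_gauss = gaussianity_score(rng.normal(size=2000), t)       # Gaussian sample
s_unif = gaussianity_score(rng.uniform(-2, 2, size=2000), t)  # non-Gaussian sample
```

In the paper's setting this kind of discrepancy would be evaluated on the (implicitly mapped) feature-space representation induced by the convex kernel combination, and the kernel weights would be chosen to make the per-class distributions as Gaussian as possible before running FDA.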


Author(s):  
Guo ◽  
Xiaoqian Zhang ◽  
Zhigui Liu ◽  
Xuqian Xue ◽  
Qian Wang ◽  
...  
