kernel mapping
Recently Published Documents


TOTAL DOCUMENTS: 54 (five years: 13)
H-INDEX: 8 (five years: 1)

2020 ◽ Vol 206 ◽ pp. 106359
Author(s): Kaixin Yuan ◽ Jing Liu ◽ Shanchao Yang ◽ Kai Wu ◽ Fang Shen

2020 ◽ Vol 6 (4) ◽ pp. 467-476
Author(s): Xinxin Liu ◽ Yunfeng Zhang ◽ Fangxun Bao ◽ Kai Shao ◽ Ziyi Sun ◽ ...

Abstract: This paper proposes a kernel-blending connection approximated by a neural network (KBNN) for image classification. A kernel mapping connection structure, guaranteed by the function approximation theorem, is devised to blend feature extraction and feature classification through neural network learning. First, a feature extractor learns features from the raw images. Next, an automatically constructed kernel mapping connection maps the feature vectors into a feature space. Finally, a linear classifier is used as an output layer of the neural network to provide classification results. Furthermore, a novel loss function involving a cross-entropy loss and a hinge loss is proposed to improve the generalizability of the neural network. Experimental results on three well-known image datasets illustrate that the proposed method has good classification accuracy and generalizability.
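The blended objective can be sketched as a weighted sum of a cross-entropy term and a multiclass hinge term (a minimal numpy illustration; the weight `lam` and the Crammer-Singer hinge form are assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def blended_loss(scores, y, lam=0.5):
    """Cross-entropy plus a multiclass hinge term on raw class scores.

    scores: (n, k) outputs of the linear classifier layer
    y:      (n,) integer class labels
    lam:    blending weight (assumed hyperparameter, not from the paper)
    """
    n = scores.shape[0]
    p = softmax(scores)
    ce = -np.log(p[np.arange(n), y] + 1e-12).mean()
    # Crammer-Singer style hinge: demand a margin of 1 over the true class
    margins = scores - scores[np.arange(n), y][:, None] + 1.0
    margins[np.arange(n), y] = 0.0
    hinge = np.maximum(0.0, margins).max(axis=1).mean()
    return ce + lam * hinge
```

The hinge term penalizes small margins even when the cross-entropy is already low, which is one plausible route to the improved generalizability the abstract claims.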


Author(s): Hui Xue ◽ Zheng-Fan Wu

Recently, deep spectral kernel networks (DSKNs) have attracted wide attention. They consist of periodic computational elements that can be activated across the whole feature space. In theory, DSKNs have the potential to reveal input-dependent and long-range characteristics, and are thus expected to outperform prevailing networks. In practice, however, they have not yet achieved the desired effect. The structural superiority of DSKNs comes at the cost of difficult optimization: the periodicity of the computational elements produces many poor, densely distributed local minima in the loss landscape, in which DSKNs tend to get stuck, so they perform worse than expected. Hence, in this paper, we propose novel Bayesian random Kernel mapping Networks (BaKer-Nets), whose learning process escapes most local minima through randomness. Specifically, BaKer-Nets consist of two core components: 1) a prior-posterior bridge that models the uncertainty of the computational elements in a principled way; 2) a Bayesian learning paradigm that optimizes the prior-posterior bridge efficiently. With well-tuned uncertainty, BaKer-Nets can not only explore more potential solutions to avoid local minima, but also exploit these ensemble solutions to strengthen their robustness. Systematic experiments demonstrate that BaKer-Nets significantly improve the learning process while preserving the structural superiority.
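The underlying idea of periodic computational elements with randomly sampled spectral parameters can be illustrated with plain random Fourier features (a sketch of that idea only; BaKer-Nets' prior-posterior bridge and Bayesian updates are not reproduced here):

```python
import numpy as np

def make_random_fourier_map(d, n_features=2000, gamma=0.5, seed=0):
    """Sample random frequencies once; return a periodic feature map phi
    such that phi(x) @ phi(y).T approximates the RBF kernel
    exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Sampling `W` rather than optimizing it directly is what sidesteps the periodic, minima-riddled loss landscape; BaKer-Nets go further by learning a posterior distribution over such frequencies.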


2020 ◽ Vol 10 (7) ◽ pp. 2348
Author(s): Zhaoying Liu ◽ Haipeng Kan ◽ Ting Zhang ◽ Yujian Li

This paper mainly deals with the problem of short text classification and makes two main contributions. First, we introduce a framework of deep uniform kernel mapping support vector machine (DUKMSVM). Its key merit is that the kernel mapping function is expressed explicitly by a deep neural network, so it is in essence an explicit kernel mapping rather than a traditional kernel function, and different neural network structures can be plugged in to suit different applications. Second, to validate the effectiveness of this framework and to improve short text classification performance, we express the kernel mapping with a bidirectional recurrent neural network (BRNN) and propose a deep bidirectional recurrent kernel mapping support vector machine (DRKMSVM). Experimental results on five public short text classification datasets indicate that DRKMSVM achieves the best performance in terms of classification accuracy, precision, recall and F1-score, with average values of 87.23%, 86.99%, 86.13% and 86.51% respectively, compared to traditional SVM, a convolutional neural network (CNN), Naive Bayes (NB), and the Deep Neural Mapping Support Vector Machine (DNMSVM), which uses a multi-layer perceptron for the kernel mapping.
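The framework's core idea, an explicit learned mapping feeding a linear SVM, can be sketched as follows. For brevity the mapping here is a fixed random tanh layer standing in for a trained BRNN, and `train_linear_svm` does plain subgradient descent on the regularized hinge loss; all names and hyperparameters are illustrative:

```python
import numpy as np

def explicit_map(X, W, b):
    """Explicit kernel mapping: a single tanh layer as a stand-in for
    the paper's bidirectional recurrent network."""
    return np.tanh(X @ W + b)

def train_linear_svm(Z, y, lr=0.1, lam=1e-3, epochs=200):
    """Subgradient descent on the regularized hinge loss over mapped
    features Z; labels y take values in {-1, +1}."""
    n, d = Z.shape
    w, b0 = np.zeros(d), 0.0
    for _ in range(epochs):
        margin = y * (Z @ w + b0)
        viol = margin < 1.0                              # margin violations
        gw = lam * w - (y[viol, None] * Z[viol]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w, b0 = w - lr * gw, b0 - lr * gb
    return w, b0

def hinge_objective(Z, y, w, b0, lam=1e-3):
    return np.maximum(0.0, 1.0 - y * (Z @ w + b0)).mean() + 0.5 * lam * w @ w
```

Because the mapping is explicit, prediction is a single matrix product instead of kernel evaluations against all support vectors, which is the flexibility the DUKMSVM framework trades on.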


2020 ◽ Vol 53 (5-6) ◽ pp. 994-1006
Author(s): Yongyong Hui ◽ Xiaoqiang Zhao

To address the dynamic and nonlinear characteristics of batch processes, a multiway dynamic nonlinear global neighborhood preserving embedding algorithm is proposed. In nonlinear batch process monitoring, kernel mapping is widely used to eliminate nonlinearity by projecting the data into a high-dimensional space, but the nonlinear relationships between batch process variables are limited by many physical constraints, so an infinite-order mapping is inefficient and redundant. Compared with the basic kernel mapping method, which provides an infinite-order nonlinear mapping, the proposed method accounts for the dynamic and nonlinear characteristics under these physical constraints and preserves the global and local structures concurrently. First, a time-lagged window removes the auto-correlation in the time series of process variables. Second, a nonlinear method named constructive polynomial mapping avoids unnecessary redundancy and reduces computational complexity. Third, after the dynamic and nonlinear characteristics are handled, the global neighborhood preserving embedding method fully extracts the global and local structures. Finally, the effectiveness of the proposed algorithm is demonstrated on a mathematical model and the penicillin fermentation process.
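The first two steps, time-lagged augmentation and a finite-order polynomial mapping, can be sketched as follows (a minimal numpy illustration; the window length and the degree-2 choice are assumptions, not the paper's settings):

```python
import numpy as np

def time_lagged(X, lags=2):
    """Augment each sample with its `lags` predecessors to capture
    auto-correlation: row t becomes [x_t, x_{t-1}, ..., x_{t-lags}]."""
    n = X.shape[0]
    return np.hstack([X[lags - k : n - k] for k in range(lags + 1)])

def poly2_map(X):
    """Constructive degree-2 polynomial mapping: an explicit, finite
    feature map in place of an infinite-order kernel mapping."""
    n, d = X.shape
    cross = [(X[:, i] * X[:, j])[:, None] for i in range(d) for j in range(i, d)]
    return np.hstack([X] + cross)
```

Keeping the mapping finite and explicit is the point of the constructive polynomial step: the feature dimension stays d + d(d+1)/2 rather than infinite, avoiding the redundancy the abstract criticizes.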


2020 ◽ Vol 18 (04) ◽ pp. 683-696
Author(s): Gilles Blanchard ◽ Nicole Mücke

We investigate whether kernel regularization methods can achieve minimax convergence rates under a source condition regularity assumption on the target function. These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator. From the perspective of distribution-free results, we investigate this issue under much weaker assumptions on the eigenvalue decay, allowing for more complex behavior that can reflect different structures in the data at different scales.
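For orientation, the source condition and the classical polynomial-decay special case can be written as follows (a sketch in one common parametrization, not necessarily the exact assumptions or rate of this paper):

```latex
% Source condition: the target lies in the range of a power of the
% covariance operator T of the kernel mapping,
f_\rho = T^{r} h, \qquad \|h\| \le R, \quad r > 0.
% Classical polynomial-decay assumption on the eigenvalues of T:
\mu_i \asymp i^{-b}, \qquad b > 1,
% under which kernel regularization methods attain rates of order
\mathbb{E}\,\|f_n - f_\rho\|^2 = O\!\left(n^{-\frac{2rb}{2rb+1}}\right).
```

The paper's contribution is to relax the middle assumption, allowing eigenvalue sequences whose decay varies across scales rather than following a single polynomial law.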


IEEE Access ◽ 2020 ◽ Vol 8 ◽ pp. 195046-195061
Author(s): Ahmad Khusro ◽ Saddam Husain ◽ Mohammad S. Hashmi ◽ Abdul Quaiyum Ansari ◽ Sultangali Arzykulov

2019 ◽ Vol 18 (1)
Author(s): Xinying Yu ◽ Bo Peng ◽ Zeyu Xue ◽ Hamidreza Saligheh Rad ◽ Zhenlin Cai ◽ ...

Abstract
Background: Hypertension increases the risk of angiocardiopathy and cognitive disorder. Blood pressure falls into four categories: normal, elevated, hypertension stage 1 and hypertension stage 2. Quantitative analysis of hypertension helps determine disease status, assess prognosis, and guide management, but it has not been well studied in a machine learning framework.
Methods: We propose an empirical kernel mapping-based kernel extreme learning machine plus (EKM–KELM+) classifier to discriminate different blood pressure grades in adults from structural brain MR images. ELM+ is an extended version of ELM that integrates additional privileged information (PI) about the training samples to help train a more effective classifier. In this work, we extracted gray matter volume (GMV), white matter volume, cerebrospinal fluid volume, cortical surface area and cortical thickness from structural brain MR images, and constructed brain network features based on thickness. After feature selection and EKM, the enhanced features are obtained. We then select one feature type as the main feature to feed into KELM+, while the remaining feature types serve as PI to assist the main feature, training 5 KELM+ classifiers. Finally, the 5 KELM+ classifiers are ensembled to predict the classification result at test time; PI is not used during testing.
Results: We evaluated the proposed EKM–KELM+ method on four grades of hypertension data (73 samples per grade). The experimental results show that GMV performs observably better than any other feature type, with comparatively high classification accuracies of 77.37% (Grade 1 vs. Grade 2), 93.19% (Grade 1 vs. Grade 3), and 95.15% (Grade 1 vs. Grade 4). The most discriminative brain regions found by our method include the olfactory cortex, the orbitofrontal cortex (inferior) and the supplementary motor area.
Conclusions: Using region-of-interest features and brain network features, EKM–KELM+ identifies the most discriminative regions showing clear structural changes across blood pressure grades. The discriminative features selected by our method are consistent with existing neuroimaging studies. Moreover, our study offers a potential approach for effective early intervention, when blood pressure has only a minor impact on brain structure and function.
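The empirical kernel mapping (EKM) step can be sketched as follows: samples are mapped to finite vectors whose inner products reproduce the kernel on the training set (a minimal numpy sketch with an assumed RBF kernel; the KELM+ training itself is not shown):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def empirical_kernel_map(X_train, X, gamma=0.5, eps=1e-10):
    """Empirical kernel mapping: phi(x) = Lambda^{-1/2} U^T k(X_train, x),
    where K = U Lambda U^T is the training Gram matrix. For training
    points, phi @ phi.T recovers K (up to dropped near-zero modes)."""
    K = rbf_kernel(X_train, X_train, gamma)
    vals, vecs = np.linalg.eigh(K)
    keep = vals > eps
    M = vecs[:, keep] / np.sqrt(vals[keep])   # n x r whitening matrix
    return rbf_kernel(X, X_train, gamma) @ M
```

Each feature type (GMV, thickness, and so on) would be passed through such a mapping before one is chosen as the main input and the rest serve as privileged information.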

