Knowledge-Based Green's Kernel for Support Vector Regression

2010 ◽  
Vol 2010 ◽  
pp. 1-16 ◽  
Author(s):  
Tahir Farooq ◽  
Aziz Guergachi ◽  
Sridhar Krishnan

This paper presents a novel prior knowledge-based Green's kernel for support vector regression (SVR). After reviewing the correspondence between support vector kernels used in support vector machines (SVMs) and regularization operators used in regularization networks, and the use of the Green's functions of those regularization operators to construct support vector kernels, a mathematical framework is presented to obtain domain knowledge about the magnitude of the Fourier transform of the function to be predicted and to design a prior knowledge-based Green's kernel that exhibits optimal regularization properties by using the concept of matched filters. The matched-filter behavior of the proposed kernel function makes it suitable for signals corrupted with noise, which include many real-world systems. We conduct several experiments, mostly on benchmark datasets, to compare the performance of the proposed technique with results already published in the literature for other existing support vector kernels over a variety of settings, including different noise levels, noise models, loss functions, and SVM variations. Experimental results indicate that the knowledge-based Green's kernel is a strong choice among the candidate kernel functions.
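The kernel-construction idea above can be sketched with a generic kernel method. The snippet below is a minimal pure-Python illustration only: it assumes a Gaussian RBF kernel as a stand-in for the paper's Green's kernel, and kernel ridge regression as a stand-in for SVR's ε-insensitive formulation.

```python
import math

def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian RBF: the Green's function of one particular regularization
    # operator (a stand-in here for the paper's knowledge-based kernel)
    return math.exp(-(x - z) ** 2 / (2 * sigma ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_kernel_ridge(xs, ys, lam=1e-3, sigma=1.0):
    # Solve (K + lam*I) alpha = y
    n = len(xs)
    K = [[gaussian_kernel(xs[i], xs[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def predict(alpha, xs, x, sigma=1.0):
    return sum(a * gaussian_kernel(xi, x, sigma) for a, xi in zip(alpha, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
alpha = fit_kernel_ridge(xs, ys)
print(predict(alpha, xs, 1.0))  # close to sin(1.0) ~ 0.8415
```

Swapping in a different kernel amounts to swapping the assumed regularization operator, which is the degree of freedom the paper exploits.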

2021 ◽  
Author(s):  
Tahir Farooq

This thesis presents a novel prior knowledge-based Green's kernel for support vector regression (SVR) and provides an empirical investigation of the ability of SVMs (support vector machines) to model complex real-world problems using a real dataset. After reviewing the theoretical background, including SVM theory, the correspondence between kernel functions used in SVMs and regularization operators used in regularization networks, and the use of the Green's functions of those regularization operators to construct kernel functions for SVMs, a mathematical framework is presented to obtain domain knowledge about the magnitude of the Fourier transform of the function to be predicted and to design a prior knowledge-based Green's kernel that exhibits optimal regularization properties by using the concept of matched filters. The matched-filter behavior of the proposed kernel function provides optimal regularization and also makes it suitable for signals corrupted with noise, which include many real-world systems. Several experiments, mostly using benchmark datasets ranging from simple regression models to non-linear and high-dimensional chaotic time series, have been conducted to compare the performance of the proposed technique with results already published in the literature for other existing support vector kernels over a variety of settings, including different noise levels, noise models, loss functions, and SVM variations. The proposed kernel function improves the best known results by 18.6% and 24.4% on a benchmark dataset for two different experimental settings.
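The matched-filter behavior referred to above can be illustrated on its own, outside the kernel setting: correlating a noisy signal with a known template produces a score that peaks where the template is embedded. This toy sketch assumes nothing from the thesis beyond the matched-filter concept itself.

```python
import random

def matched_filter(signal, template):
    # Correlate the signal with the template at every offset;
    # the output peaks where the template best aligns with the signal
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

random.seed(0)
template = [0.0, 1.0, 2.0, 1.0, 0.0]
true_offset = 7
signal = [0.1 * random.gauss(0, 1) for _ in range(20)]  # background noise
for j, t in enumerate(template):
    signal[true_offset + j] += t  # embed the template in the noise

scores = matched_filter(signal, template)
print(scores.index(max(scores)))  # recovers the embedded offset
```

The same principle, transplanted into the Fourier-domain design of the kernel, is what gives the proposed kernel its robustness to noise.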


2012 ◽  
Vol 2012 (CICMT) ◽  
pp. 000621-000626
Author(s):  
Lei Xia ◽  
Ruimin Xu ◽  
Bo Yan

This paper presents a prior knowledge-based support vector regression modeling method to characterize the RF performance of low temperature co-fired ceramic (LTCC) structures. A coarse surrogate, formed by multidimensional Cauchy approximation, serves as the prior knowledge to improve modeling accuracy. A 3D LTCC-based vertical interconnection model is developed as an example of the proposed method. Experimental results show that the developed SVR model exhibits better predictive ability.
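The coarse-surrogate idea can be sketched generically: predict with the cheap prior-knowledge model and learn only a correction from data. The snippet below substitutes a simple analytic coarse model and inverse-distance-weighted interpolation for the paper's Cauchy approximation and SVR correction, so it illustrates the structure, not the method itself.

```python
def coarse_model(x):
    # Prior knowledge: a cheap analytic approximation of the true response
    # (a stand-in for the paper's multidimensional Cauchy surrogate)
    return 2.0 * x

def idw_correction(x, xs, residuals, p=2):
    # Inverse-distance-weighted interpolation of the surrogate's residuals
    # (a stand-in for the SVR correction model)
    weights = []
    for i, xi in enumerate(xs):
        d = abs(x - xi)
        if d < 1e-12:
            return residuals[i]  # exactly on a training point
        weights.append(1.0 / d ** p)
    s = sum(weights)
    return sum(w * r for w, r in zip(weights, residuals)) / s

# "Fine" measurements of the true response y = 2x + 0.5x^2
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * x + 0.5 * x * x for x in xs]
residuals = [y - coarse_model(x) for x, y in zip(xs, ys)]

def predict(x):
    # Final model = prior-knowledge surrogate + learned correction
    return coarse_model(x) + idw_correction(x, xs, residuals)

print(predict(1.5))  # between training points; true value is 4.125
```

Because the correction model only has to capture the (smaller, smoother) residual, fewer training samples are needed than for learning the full response directly.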


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Kuaini Wang ◽  
Jingjing Zhang ◽  
Yanyan Chen ◽  
Ping Zhong

Least squares support vector machine (LS-SVM) is a powerful tool for pattern classification and regression estimation. However, LS-SVM is sensitive to large noise and outliers because it employs the squared loss function. To address this problem, we propose an absolute deviation loss function to reduce the effect of outliers and derive a robust regression model termed least absolute deviation support vector regression (LAD-SVR). The proposed loss function is not differentiable, so we approximate it by constructing a smooth function and develop a Newton algorithm to solve the robust model. Numerical experiments on both artificial and benchmark datasets demonstrate the robustness and effectiveness of the proposed method.
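The smoothing step can be illustrated with one common smooth surrogate for the absolute value, sqrt(r² + μ²); the paper's exact smoothing function may differ. The surrogate's first derivative exists everywhere and its second derivative is strictly positive, which is exactly what a Newton-type solver requires.

```python
import math

def smooth_abs(r, mu=1e-2):
    # Smooth surrogate for |r|: differentiable everywhere, -> |r| as mu -> 0
    return math.sqrt(r * r + mu * mu)

def smooth_abs_grad(r, mu=1e-2):
    # First derivative: a smoothed sign(r), defined at r = 0
    return r / math.sqrt(r * r + mu * mu)

def smooth_abs_hess(r, mu=1e-2):
    # Second derivative: strictly positive, so Newton steps are well defined
    return mu * mu / (r * r + mu * mu) ** 1.5

for r in (-2.0, 0.0, 2.0):
    print(smooth_abs(r), smooth_abs_grad(r))
```

Unlike the squared loss, this surrogate grows only linearly in |r|, so a single large outlier contributes proportionally rather than quadratically to the objective.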


2021 ◽  
Vol 23 (06) ◽  
pp. 1699-1715
Author(s):  
Mohamed, A. M. ◽  
Abdel Latif, S. H. ◽  
Alwan, A. S. ◽  
...  

Principal component analysis is frequently used as a variable-reduction technique, and a growing body of studies uses machine learning regression algorithms to improve the estimation of empirical models. One of the most frequently used machine learning regression models is support vector regression with various kernel functions, and an ensemble of support vector regression and principal component analysis is also possible. This paper therefore investigates the competence of support vector regression techniques after performing principal component analysis, to explore the possibility of reducing the data while obtaining more accurate estimates. Some new proposals are introduced, and the behavior of two different models, ε-SVR and ν-SVR, is compared through an extensive simulation study under four different kernel functions (linear, radial, polynomial, and sigmoid) with sample sizes ranging from small and moderate to large. The models are compared with their counterparts in terms of the coefficient of determination (R²) and root mean squared error (RMSE). The comparative results show that applying SVR after PCA improves the results in terms of the number of support vectors by 30% to 60% on average, and the approach can be applied to real data. In addition, the linear kernel function gave the best values, while the sigmoid kernel gave the worst. Under ε-SVR the results improved, which did not happen with ν-SVR. It is also observed that RMSE values decrease with increasing sample size.
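The reduction step of the PCA-before-SVR pipeline can be sketched in plain Python: center the data, extract the leading principal component by power iteration on the covariance matrix, and project each sample onto it before feeding the reduced features to an SVR. The SVR stage itself is omitted here; this is a sketch of the preprocessing only.

```python
def leading_component(data, iters=200):
    # First principal component via power iteration on the covariance matrix
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

def project(row, v, means):
    # Reduced 1-D feature, which would then be fed to an SVR model
    return sum((row[j] - means[j]) * v[j] for j in range(len(row)))

# Toy data: the second coordinate is roughly twice the first
data = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1]]
v, means = leading_component(data)
print(v)  # unit vector, dominated by the second coordinate
```

Fewer, decorrelated input features generally mean fewer support vectors in the downstream SVR, which matches the 30–60% reduction reported above.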


Author(s):  
Manju Bala ◽  
R. K. Agrawal

The choice of kernel function and its parameters is very important for good support vector machine performance. In this chapter, the authors propose a few new kernel functions that satisfy Mercer's conditions, together with a robust AdaBoost-based algorithm to automatically determine a suitable kernel function and its parameters, improving the performance of the support vector machine. The performance of the proposed algorithm is evaluated on several benchmark datasets from the UCI repository. The experimental results for different datasets show that the Gaussian kernel is not always the best choice for achieving high generalization in a support vector machine classifier. However, with the proper choice of kernel function and parameters using the proposed algorithm, it is possible to achieve maximum classification accuracy on all datasets.
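Mercer's conditions imply that every Gram matrix built from a valid kernel is positive semidefinite. A quick numerical sanity check for a candidate kernel (not a proof of Mercer's conditions, and using a small diagonal jitter so borderline semidefinite matrices pass) attempts a Cholesky factorization:

```python
import math

def is_psd(K, eps=1e-10):
    # Attempt a Cholesky factorization of K + eps*I; success implies K is
    # (numerically) positive semidefinite
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = K[i][i] + eps - s
                if d <= 0:
                    return False
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return True

def gram(kernel, xs):
    return [[kernel(a, b) for b in xs] for a in xs]

xs = [0.0, 0.7, 1.3, 2.9]
rbf = lambda a, b: math.exp(-(a - b) ** 2)  # valid Mercer kernel
bad = lambda a, b: -abs(a - b)              # symmetric but not PSD
print(is_psd(gram(rbf, xs)), is_psd(gram(bad, xs)))  # True False
```

A kernel that fails this check on some point set cannot satisfy Mercer's conditions, so such a check is a cheap filter before any kernel is admitted into an automatic selection procedure like the one proposed here.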

