An effective real-time update rule for improving performance on both classification and regression problems in kernel methods

Author(s):  
Eun-Mi Kim ◽  
Bae-Ho Lee


2003 ◽
Vol 15 (6) ◽  
pp. 1397-1437 ◽  
Author(s):  
Tong Zhang

In this article, we study leave-one-out style cross-validation bounds for kernel methods. The essential element in our analysis is a bound on the parameter estimation stability for regularized kernel formulations. Using this result, we derive bounds on expected leave-one-out cross-validation errors, which lead to expected generalization bounds for various kernel algorithms. In addition, we also obtain variance bounds for leave-one-out errors. We apply our analysis to some classification and regression problems and compare them with previous results.


Actuators ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 132
Author(s):  
Siyu Gao ◽  
Yanjun Wei ◽  
Di Zhang ◽  
Hanhong Qi ◽  
Yao Wei

Model predictive torque control with duty cycle control (MPTC-DCC) is widely used in motor drive systems because of its low torque ripple and good steady-state performance. However, the selection of the optimal voltage vector and the calculation of its duration depend heavily on the accuracy of the motor parameters. In view of this, a modified MPTC-DCC is proposed in this paper. The motor parameters are calculated in real time from the variation of the error between the measured and predicted values. Meanwhile, model reference adaptive control (MRAC) is adopted in the speed loop to eliminate the disturbance caused by the ripple of the real-time-updated parameters, so that the disturbance caused by parameter mismatch is suppressed effectively. Simulations and experiments, carried out in MATLAB/Simulink and on a dSPACE platform, corroborate the analysis and confirm the correctness of the method.
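The idea of correcting parameters in real time from the prediction error can be illustrated, in a deliberately simplified form, by a normalized-LMS update (a generic sketch, not the paper's MPTC-DCC scheme; the toy plant and all values are illustrative):

```python
import random

def nlms_update(theta, phi, y, mu=0.5, eps=1e-6):
    """One normalized-LMS step: nudge the parameter estimate theta so
    that the linear model prediction phi . theta tracks the measured
    output y. mu is the adaptation gain, eps guards the normalization."""
    e = y - sum(p * t for p, t in zip(phi, theta))
    norm = eps + sum(p * p for p in phi)
    theta = [t + mu * e * p / norm for t, p in zip(theta, phi)]
    return theta, e

# Toy plant whose true parameters jump mid-run (e.g. a winding heating up):
random.seed(0)
theta = [0.0, 0.0]
for k in range(200):
    true = [2.0, -0.5] if k < 100 else [1.5, 0.3]
    phi = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = true[0] * phi[0] + true[1] * phi[1]
    theta, e = nlms_update(theta, phi, y)
print("tracked parameters:", theta)
```

The estimate re-converges after the parameter jump, which is the behavior the paper exploits to keep the predictive model accurate under parameter drift.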


1999 ◽  
Vol 11 (2) ◽  
pp. 483-497 ◽  
Author(s):  
Ran Avnimelech ◽  
Nathan Intrator

We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to classification, or a variant of the boosting algorithm (Schapire, 1990). As a variant of the mixture of experts, it can be made appropriate for general classification and regression problems by initializing the partition of the data set to different experts in a boost-like manner. If viewed as a variant of the boosting algorithm, its main gain is the use of a dynamic combination model for the outputs of the networks. Results are demonstrated on a synthetic example and a digit recognition task from the NIST database and compared with classical ensemble approaches.
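A dynamic combination model in this spirit can be sketched as a softmax gate that weights each expert's class probabilities per input (an illustrative reconstruction, not the authors' exact model; the expert outputs and gate scores below are made up):

```python
import math

def softmax(z):
    # Numerically stable softmax over a list of gate scores.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def dynamic_combine(expert_probs, gate_scores):
    """Input-dependent ensemble: weight each expert's class-probability
    vector by the gate's softmax-normalized confidence for this input,
    unlike a static (input-independent) ensemble average."""
    w = softmax(gate_scores)
    n_classes = len(expert_probs[0])
    return [sum(w[i] * expert_probs[i][c] for i in range(len(expert_probs)))
            for c in range(n_classes)]

# Two experts disagree; the gate trusts expert 0 for this input.
experts = [[0.9, 0.1], [0.2, 0.8]]
combined = dynamic_combine(experts, gate_scores=[3.0, 0.0])
print(combined)
```

With a different input the gate scores change, so the same experts can be combined with different weights, which is what distinguishes this from a fixed-weight ensemble.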


2009 ◽  
Vol 21 (7) ◽  
pp. 2082-2103 ◽  
Author(s):  
Shirish Shevade ◽  
S. Sundararajan

Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. Design of a GP classifier and making predictions using it is, however, computationally demanding, especially when the training set size is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure for both basis vector selection and hyperparameter adaptation. The experimental results on several real-world benchmark data sets show better or comparable generalization performance over existing methods.
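The NLP measure is indeed cheap to evaluate once the latent predictive mean and variance are available. A minimal sketch under a probit-likelihood assumption (the function name and constants are illustrative, not the letter's implementation):

```python
import math

def probit(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def nlp_loss(y, mu, var):
    """Negative log predictive (NLP) loss for a binary GP classifier
    with a probit likelihood: p(y=+1 | x) = Phi(mu / sqrt(1 + var)).
    y is the label in {-1, +1}; mu and var are the latent predictive
    mean and variance at x. The floor avoids log(0)."""
    p = probit(y * mu / math.sqrt(1.0 + var))
    return -math.log(max(p, 1e-12))

# A confident correct prediction gives a small loss;
# a confident mistake is penalized heavily.
print(nlp_loss(+1, 2.0, 0.25), nlp_loss(-1, 2.0, 0.25))
```

Summing this loss over held-out points gives a single scalar that can drive both basis vector selection and hyperparameter adaptation, as the letter proposes.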


2020 ◽  
pp. 214-244
Author(s):  
Prithish Banerjee ◽  
Mark Vere Culp ◽  
Kenneth Joseph Ryan ◽  
George Michailidis

This chapter presents some popular graph-based semi-supervised approaches. These techniques apply to classification and regression problems and can be extended to big-data problems using recently developed anchor-graph enhancements. The background needed for this chapter is linear algebra and optimization; no prior knowledge of machine learning methods is necessary. An empirical demonstration of the techniques is also provided on real data set benchmarks.
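As a concrete instance of the graph-based family, the standard label-spreading iteration propagates a few known labels over a similarity graph (a common textbook method shown as a sketch under assumed graph weights, not the chapter's specific algorithm):

```python
import numpy as np

def label_spreading(W, y, labeled, alpha=0.9, iters=200):
    """Graph-based semi-supervised classification: labels spread over
    the similarity graph W via F <- alpha * S @ F + (1 - alpha) * Y,
    where S = D^{-1/2} W D^{-1/2} is the normalized adjacency and Y
    holds the one-hot encodings of the labeled nodes."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    Y = np.zeros((len(y), int(max(y)) + 1))
    for i in labeled:
        Y[i, y[i]] = 1.0                 # inject the known labels
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Two dense clusters joined by one weak edge; one label per cluster.
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[a, b] = W[b, a] = 1.0
W[2, 3] = W[3, 2] = 0.01
pred = label_spreading(W, y=[0, 0, 0, 1, 1, 1], labeled=[0, 3])
print(pred)
```

The anchor-graph enhancements mentioned above replace the full n-by-n graph with a small set of anchor points so that the same propagation scales to large data sets.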


Author(s):  
M. Frydman ◽  
J. Palacio ◽  
D. Lee ◽  
G. Pidcock ◽  
R. Delgado ◽  
...  
Keyword(s):  
The Real ◽  
