Online minimum error entropy algorithm with unbounded sampling

2019 ◽  
Vol 17 (02) ◽  
pp. 293-322 ◽  
Author(s):  
Cheng Wang ◽  
Ting Hu

The minimum error entropy (MEE) criterion is an important optimization principle in information theoretic learning (ITL) and has been widely studied and applied in various practical scenarios. In this paper, we introduce an online MEE algorithm for dealing with big datasets, associated with reproducing kernel Hilbert spaces (RKHS) and unbounded sampling processes. Explicit convergence rates are given under regularity conditions on the regression function and polynomially decaying step sizes. Besides its low complexity, we also show that the learning ability of online MEE is superior to that of previous work in the literature. Our main techniques rely on integral operators on RKHS and probability inequalities for random variables with values in a Hilbert space.
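The flavor of such an online update can be sketched as follows: a one-pass learner whose predictor is a kernel expansion, updated by a stochastic gradient step on a Gaussian-windowed pairwise error with polynomially decaying step sizes eta_t = eta0 * t^(-theta). This is a generic illustration of the idea, not the paper's exact recursion; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def gauss(u, s):
    # Gaussian window / RBF profile
    return np.exp(-u * u / (2 * s * s))

def online_mee(stream, eta0=0.2, theta=0.5, h=1.0, sigma=0.5):
    """One-pass online MEE-style learner in an RKHS (sketch).

    The predictor f_t is kept as a kernel expansion; at step t a
    stochastic gradient step is taken on the windowed pairwise error
    G_h(e_t - e_{t-1}), with step sizes eta_t = eta0 * t**(-theta).
    """
    centers, coefs = [], []

    def f(x):
        # evaluate the current kernel expansion at x
        return sum(c * gauss(np.linalg.norm(x - z), sigma)
                   for c, z in zip(coefs, centers))

    prev = None
    for t, (x, y) in enumerate(stream, start=1):
        e = y - f(x)                      # current prediction error
        if prev is not None:
            xp, ep = prev
            d = e - ep                    # pairwise error difference
            w = gauss(d, h)               # Parzen window on the errors
            eta = eta0 * t ** (-theta)    # polynomially decaying step
            # ascent on the information potential = descent on error entropy
            coefs += [eta * w * d, -eta * w * d]
            centers += [x, xp]
        prev = (x, y - f(x))
    return f
```

The expansion grows with the stream, which is where the low per-step complexity of online schemes (one pairwise gradient per sample) matters in practice.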

2015 ◽  
Vol 13 (04) ◽  
pp. 437-455 ◽  
Author(s):  
Ting Hu ◽  
Jun Fan ◽  
Qiang Wu ◽  
Ding-Xuan Zhou

We introduce a learning algorithm for regression generated by a minimum error entropy (MEE) principle and regularization schemes in reproducing kernel Hilbert spaces. This empirical MEE algorithm depends closely on a scaling parameter arising from Parzen windowing. The purpose of this paper is to carry out a consistency analysis when the scaling parameter is large. Explicit learning rates are provided. Novel approaches are proposed to overcome the difficulties in bounding the output function uniformly and in handling the special feature of MEE that the regression function may not be a minimizer of the error entropy.
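For orientation, the empirical MEE objective with Parzen windowing takes, in the standard ITL formulation (notation here is generic, not taken from the paper), the form

$$\widehat{H}_h(f) \;=\; -\log\Bigg(\frac{1}{m^2 h}\sum_{i=1}^{m}\sum_{j=1}^{m} G\Big(\frac{e_i - e_j}{h}\Big)\Bigg), \qquad e_i = y_i - f(x_i),$$

where $G$ is a Parzen window (typically Gaussian) and $h$ is the scaling parameter referred to above; the regularized scheme then minimizes $\widehat{H}_h(f) + \lambda\|f\|_K^2$ over the RKHS. The shift invariance of $\widehat{H}_h$ in the errors is why the regression function need not minimize the error entropy.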


2013 ◽  
Vol 11 (05) ◽  
pp. 1350020 ◽  
Author(s):  
HONGWEI SUN ◽  
QIANG WU

We study the asymptotic properties of an indefinite kernel network with coefficient regularization and dependent sampling. The framework under investigation differs from classical kernel learning: the kernel function is not required to be positive definite, and the samples are allowed to be weakly dependent, with the dependence measured by a strong mixing condition. By a new kernel decomposition technique introduced in [27], two reproducing kernel Hilbert spaces and their associated kernel integral operators are used to characterize the properties and learnability of the hypothesis function class. Capacity-independent error bounds and learning rates are deduced.
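A minimal numerical sketch of coefficient regularization: the $\ell^2$ penalty acts on the expansion coefficients rather than on an RKHS norm, so positive definiteness of the kernel is never used. The tanh (sigmoid) kernel and all parameter values below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def indefinite_kernel(x, z):
    # a sigmoid kernel -- a standard example that need not be
    # positive definite
    return np.tanh(x * z + 1.0)

def coef_regularized_fit(X, y, lam):
    """Least squares with l2 regularization on the expansion
    coefficients alpha (not on an RKHS norm), so the Gram matrix
    K may be indefinite.  Minimizes
        (1/m) * ||K @ alpha - y||^2 + lam * m * ||alpha||^2.
    """
    m = len(X)
    K = indefinite_kernel(X[:, None], X[None, :])   # m x m Gram matrix
    A = K.T @ K / m + lam * m * np.eye(m)           # normal equations
    alpha = np.linalg.solve(A, K.T @ y / m)
    return alpha, K

X = np.linspace(-1.0, 1.0, 30)
y = np.sin(np.pi * X)
alpha, K = coef_regularized_fit(X, y, lam=1e-4)
pred = K @ alpha
```

Because only the coefficients are penalized, the normal-equations matrix `A` is positive definite regardless of the spectrum of `K`, which is what makes indefinite kernels admissible in this framework.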


2014 ◽  
Vol 9 (4) ◽  
pp. 827-931 ◽  
Author(s):  
Joseph A. Ball ◽  
Dmitry S. Kaliuzhnyi-Verbovetskyi ◽  
Cora Sadosky ◽  
Victor Vinnikov

2009 ◽  
Vol 80 (3) ◽  
pp. 430-453 ◽  
Author(s):  
JOSEF DICK

We give upper bounds on the Walsh coefficients of functions for which a derivative of order at least one has bounded variation of fractional order. Further, we also consider the Walsh coefficients of functions in periodic and nonperiodic reproducing kernel Hilbert spaces. A lower bound showing that our results are best possible is also given.
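Walsh coefficients of a smooth function can be approximated numerically with a fast Walsh-Hadamard transform, which makes decay behavior of the kind these bounds describe easy to inspect. The sketch below uses midpoint samples of $[0,1)$ and Hadamard (natural) ordering; the test function is an arbitrary smooth choice.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized, Hadamard ordering)."""
    a = a.copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

n = 10
N = 2 ** n
x = (np.arange(N) + 0.5) / N      # midpoint samples of [0, 1)
f = x * (1 - x)                   # a smooth test function
coeffs = fwht(f) / N              # approximate Walsh coefficients
```

The zeroth coefficient approximates the mean $\int_0^1 x(1-x)\,dx = 1/6$, and the unnormalized transform satisfies the Parseval-type identity $\sum_k \hat f(k)^2 = \frac{1}{N}\sum_i f_i^2$ after the division by $N$.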


2017 ◽  
Vol 87 (2) ◽  
pp. 225-244 ◽  
Author(s):  
Rani Kumari ◽  
Jaydeb Sarkar ◽  
Srijan Sarkar ◽  
Dan Timotin
