Application of integral operator for vector-valued regression learning

Author(s):  
Yuanxin Ma ◽  
Hongwei Sun

In this paper, the regression learning algorithm with a vector-valued RKHS is studied. We motivate the need for extending the learning theory of scalar-valued functions and analyze the learning performance. In this setting, the output data lie in a Hilbert space Y, and the associated RKHS consists of functions taking values in Y. By developing the mathematical properties of the vector-valued integral operator L_K, capacity-independent error bounds and learning rates are derived by means of the integral operator technique.
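
For reference, a minimal sketch of the standard vector-valued setting, written in conventional notation (the operator-valued kernel K, the measure ρ_X, and the regularization parameter λ are assumptions; the paper's exact formulation may differ):

```latex
% Vector-valued integral operator on L^2(\rho_X; Y), where K(x,t) is an
% operator-valued Mercer kernel acting on the output Hilbert space Y:
\[ (L_K f)(x) = \int_X K(x,t)\, f(t)\, d\rho_X(t). \]
% Regularized least squares estimator over the vector-valued RKHS H_K:
\[ f_z = \arg\min_{f \in \mathcal{H}_K} \frac{1}{m} \sum_{i=1}^{m}
   \| f(x_i) - y_i \|_Y^2 + \lambda \| f \|_K^2 . \]
```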

Author(s):  
HONGWEI SUN ◽  
PING LIU

A new multi-kernel regression learning algorithm is studied in this paper. In our setting, the hypothesis space is generated by two Mercer kernels and thus has stronger approximation ability than in the single-kernel case. We provide the mathematical foundation for this regularized learning algorithm and obtain satisfactory capacity-dependent error bounds and learning rates by the covering number method.
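
As a concrete illustration, here is a minimal sketch of one plausible instantiation of a two-kernel regularized least squares scheme. The Gaussian kernels, the block RKHS penalty, and the function two_kernel_ridge are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    """Mercer kernel K(x, t) = exp(-||x - t||^2 / (2 sigma^2))."""
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-np.sum(d ** 2, axis=-1) / (2 * sigma ** 2))

def two_kernel_ridge(X, y, lam, s1=0.5, s2=2.0):
    """Fit f = f1 + f2 with f1 in H_K1, f2 in H_K2, i.e.
    f(x) = sum_i a_i K1(x_i, x) + sum_i b_i K2(x_i, x),
    by minimizing (1/m) ||G c - y||^2 + lam * c^T R c,
    where G = [K1 K2] and R = blockdiag(K1, K2) encodes the
    RKHS norms ||f1||_{K1}^2 + ||f2||_{K2}^2.
    (An illustrative scheme only.)"""
    m = len(y)
    K1 = gaussian_kernel(X, X, s1)
    K2 = gaussian_kernel(X, X, s2)
    G = np.hstack([K1, K2])                    # m x 2m design matrix
    R = np.zeros((2 * m, 2 * m))
    R[:m, :m], R[m:, m:] = K1, K2              # block RKHS penalty
    # Normal equations: (G^T G / m + lam R) c = G^T y / m
    c = np.linalg.solve(G.T @ G / m + lam * R + 1e-10 * np.eye(2 * m),
                        G.T @ y / m)
    return c

# Example usage on toy 1-D data:
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(40)
c = two_kernel_ridge(X, y, lam=1e-3)
```

Using two kernels with different widths lets the combined hypothesis space capture both smooth trends and sharper local structure, which is the approximation advantage the abstract refers to.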


Author(s):  
Qin Guo ◽  
Peixin Ye

We consider the coefficient-based least squares regularized regression learning algorithm for strongly and uniformly mixing samples. We obtain capacity-independent error bounds for the algorithm by means of integral operator techniques. A standard assumption in the theoretical study of learning algorithms for regression is the uniform boundedness of the output sample values. We abandon this boundedness assumption and carry out the error analysis with output sample values satisfying a generalized moment hypothesis.
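
For orientation, a sketch of the coefficient-based scheme and of one common form of a generalized moment hypothesis; the exact penalty scaling and the hypothesis used in the paper may differ:

```latex
% Coefficient-based (l^2) regularized least squares:
\[ f_z = \sum_{i=1}^{m} \alpha_i K(x_i, \cdot), \qquad
   \boldsymbol{\alpha} = \arg\min_{\alpha \in \mathbb{R}^m}
   \frac{1}{m} \sum_{i=1}^{m}
   \Big( \sum_{j=1}^{m} \alpha_j K(x_j, x_i) - y_i \Big)^2
   + \lambda\, m \sum_{i=1}^{m} \alpha_i^2 . \]
% A typical generalized moment hypothesis replacing |y| <= M a.s.:
% there exist constants C, M > 0 such that for every integer l >= 2,
\[ \int_Y |y|^{\ell}\, d\rho(y \,|\, x) \le \ell!\, C\, M^{\ell}
   \quad \text{for almost every } x \in X . \]
```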


Author(s):  
Meijian Zhang ◽  
Hongwei Sun

In this paper, we study the performance of kernel-based regression learning with non-i.i.d. sampling. The non-i.i.d. samples are drawn from different probability distributions sharing the same conditional distribution. A more general marginal distribution assumption is proposed. Under this assumption, the consistency of the regularization kernel network (RKN) and of the coefficient regularization kernel network (CRKN) is proved. Satisfactory capacity-independent error bounds and learning rates are derived by integral operator techniques.
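
In symbols, a minimal sketch of this sampling setting, assuming conventional notation:

```latex
% Each sample z_i = (x_i, y_i) is drawn from its own measure rho^{(i)}
% on X x Y; the conditional distribution is shared while the marginals
% rho_X^{(i)} may differ:
\[ \rho^{(i)}(y \,|\, x) = \rho(y \,|\, x) \ \text{for all } i, \qquad
   \rho_X^{(1)}, \dots, \rho_X^{(m)} \ \text{possibly distinct}. \]
% RKN: regularized least squares over the RKHS H_K; CRKN: the same
% empirical risk with the RKHS penalty replaced by a coefficient-based
% penalty on the expansion coefficients.
```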


Author(s):  
Baoqi Su ◽  
Hong-Wei Sun

The loss function is a key element of a learning algorithm. Based on the regression learning algorithm with an offset, a coefficient-based regularization network with variance loss is proposed. The variance loss differs from the usual least squares loss, hinge loss and pinball loss: it induces a kind of cross-sample empirical risk. Moreover, our coefficient-based regularization relies only on a general kernel, i.e. the kernel is required only to be continuous and bounded and to satisfy a mild differentiability condition. These two characteristics bring essential difficulties to the theoretical analysis of this learning scheme. By the hypothesis space strategy and the error decomposition technique in [L. Shi, Learning theory estimates for coefficient-based regularized regression, Appl. Comput. Harmon. Anal. 34 (2013) 252–265], a capacity-dependent error analysis is completed, and satisfactory error bounds and learning rates are then derived under a very mild regularity condition on the regression function. We also find an effective way to deal with the learning problem with a cross-sample empirical risk.
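
One natural form consistent with this description, offered purely as an illustrative assumption and not necessarily the paper's exact definition, is the empirical variance of the residual, which cancels the offset and couples pairs of samples:

```latex
% With residuals r_i = y_i - f(x_i), the empirical variance loss
% couples every pair of samples (a "cross-sample" empirical risk):
\[ \mathcal{E}_z(f) = \frac{1}{2m^2} \sum_{i=1}^{m} \sum_{j=1}^{m}
   ( r_i - r_j )^2
 = \frac{1}{m} \sum_{i=1}^{m}
   \Big( r_i - \frac{1}{m} \sum_{j=1}^{m} r_j \Big)^2 . \]
% Any constant offset b added to f cancels in r_i - r_j, matching the
% "regression with an offset" setting described above.
```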


2011 ◽  
Vol 88 (7) ◽  
pp. 1471-1483 ◽  
Author(s):  
Yongquan Zhang ◽  
Feilong Cao ◽  
Zongben Xu

2017 ◽  
Vol 26 (2) ◽  
pp. 115-124
Author(s):  
Arzu Akgül

In the present paper, we introduce and investigate a new class of meromorphic functions associated with an integral operator defined by means of a Hilbert space operator. For this class, we obtain a coefficient inequality, extreme points, radii of close-to-convexity, starlikeness and convexity, Hadamard products and an integral means inequality.
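
For context, such classes are typically defined on the standard family Σ of meromorphic functions; this is a sketch of the conventional setup only, and the paper's specific class and operator are not reproduced here:

```latex
% Sigma: functions meromorphic in the punctured unit disk
% D* = { z : 0 < |z| < 1 } with a simple pole at the origin:
\[ f(z) = \frac{1}{z} + \sum_{n=1}^{\infty} a_n z^n, \qquad z \in \mathbb{D}^{*}. \]
% A coefficient inequality for a subclass bounds the |a_n|; the radii
% results give the largest r such that f is close-to-convex, starlike,
% or convex in 0 < |z| < r.
```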


1988 ◽  
Vol 31 (1) ◽  
pp. 70-78 ◽  
Author(s):  
Michael Cambern ◽  
Peter Greim

A well known result due to Dixmier and Grothendieck for spaces of continuous scalar-valued functions C(X), X compact Hausdorff, is that C(X) is a Banach dual if, and only if, X is hyperstonean. Moreover, for hyperstonean X, the predual of C(X) is strongly unique. Here we obtain a formulation of this result for spaces of continuous vector-valued functions. It is shown that if E is a Hilbert space and C(X, (E, σ*)) denotes the space of continuous functions on X to E when E is provided with its weak* (= weak) topology, then C(X, (E, σ*)) is a Banach dual if, and only if, X is hyperstonean. Moreover, for hyperstonean X, the predual of C(X, (E, σ*)) is strongly unique.

