regularization error
Recently Published Documents


TOTAL DOCUMENTS

9
(FIVE YEARS 1)

H-INDEX

5
(FIVE YEARS 0)

Author(s):  
Shouyin Chen ◽  
Na Chen

In this paper, we propose a learning scheme for regression generated by atomic norm regularization and data-independent hypothesis spaces. The hypothesis spaces, based on polynomial kernels, are constructed from finite datasets and are independent of the given sample. We also present an error analysis for the proposed atomic norm regularization algorithm with polynomial kernels. For such algorithms, the regularization error is the main difficulty; we estimate it by a local polynomial reproduction formula, and derive better error estimates by applying projection and iteration techniques. Our study shows that the proposed algorithm has a fast convergence rate of O(m^(ζ−1)), the best convergence rate in the literature.
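The atomic norm analysis itself is not reproduced here, but the setting the abstract describes (regularized regression in a polynomial-kernel hypothesis space) can be illustrated with the closely related kernel ridge estimator, a squared-norm surrogate. A minimal sketch, assuming a standard polynomial kernel K(x, y) = (x·y + c)^d; all names below are illustrative, not from the paper:

```python
import numpy as np

def poly_kernel(X, Y, degree=2, c=1.0):
    # Polynomial kernel K(x, y) = (x . y + c)^degree
    return (X @ Y.T + c) ** degree

def kernel_ridge_fit(X, y, lam=1e-3, degree=2):
    # Regularized least squares in the span of kernel sections:
    # alpha = (K + lam * m * I)^{-1} y
    m = X.shape[0]
    K = poly_kernel(X, X, degree)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def kernel_ridge_predict(X_train, alpha, X_new, degree=2):
    # f(x) = sum_i alpha_i K(x, x_i)
    return poly_kernel(X_new, X_train, degree) @ alpha

# Toy regression: recover y = x^2 from 50 noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = X[:, 0] ** 2 + 0.01 * rng.standard_normal(50)
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, np.array([[0.5]]))
```

Since the target x^2 lies in the degree-2 hypothesis space, the regularization (approximation) error here is essentially zero and the prediction at x = 0.5 is close to 0.25.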


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Cheng Wang ◽  
Weilin Nie

We introduce a constructive approach for least squares algorithms with generalized K-norm regularization. Unlike previous studies, a stepping-stone function with adjustable parameters is constructed in the error decomposition. This makes the analysis flexible and allows it to be extended to other algorithms. Based on a projection technique for the sample error and the spectral theorem for the integral operator in the regularization error, we finally derive a learning rate.
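The sample error / regularization error split this abstract relies on is the standard decomposition of learning theory. As a hedged sketch in standard notation (not the paper's specific generalized K-norm variant), with f_{z,λ} the empirical regularized minimizer and f_λ the population one:

```latex
% Standard error decomposition for regularized least squares.
% \mathcal{E}: generalization error, \mathcal{E}_z: empirical error,
% f_\rho: regression function, H_K: hypothesis space.
\mathcal{E}(f_{z,\lambda}) - \mathcal{E}(f_\rho)
  \le \underbrace{\big\{\mathcal{E}(f_{z,\lambda}) - \mathcal{E}_z(f_{z,\lambda})\big\}
      + \big\{\mathcal{E}_z(f_\lambda) - \mathcal{E}(f_\lambda)\big\}}_{\text{sample error}}
  \; + \; \underbrace{\inf_{f \in H_K}\big\{\mathcal{E}(f) - \mathcal{E}(f_\rho)
      + \lambda \|f\|_K^2\big\}}_{\text{regularization error } D(\lambda)}
```

The paper's projection technique bounds the first bracket, while the spectral theorem for the integral operator controls D(λ).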


2014 ◽  
Vol 15 (1) ◽  
pp. 126-152 ◽  
Author(s):  
Hoang-Ngan Nguyen ◽  
Ricardo Cortez

We focus on the problem of evaluating the velocity field outside a solid object moving in an incompressible Stokes flow using the boundary integral formulation. For points near the boundary, the integral is nearly singular, and accurate computation of the velocity is not routine. One way to overcome this problem is to regularize the integral kernel. The method of regularized Stokeslets (MRS) is a systematic way to regularize the kernel in this situation. For a specific blob function that is widely used, the error of the MRS is only of first order with respect to the blob parameter. We prove that this is the case for radial blob functions with the decay property ϕ(r) = O(r^(−3−α)) as r → ∞ for some constant α > 1. We then find a class of blob functions for which the leading local error term can be removed, yielding second- and third-order errors with respect to the blob parameter. Since the addition of these terms might give a flow field that is not divergence free, we introduce a modification of these terms to make the divergence of the corrected flow field close to zero while keeping the desired accuracy. Furthermore, these dominant terms are expressed explicitly in terms of the blob function, so the computation time is negligible.
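To make the MRS concrete, here is a minimal sketch of the regularized Stokeslet velocity in 3D, assuming the commonly used blob of Cortez for which the smoothed kernel takes the closed form below (the higher-order corrections discussed in the abstract are not implemented):

```python
import numpy as np

def regularized_stokeslet_velocity(x_eval, x_src, f_src, eps, mu=1.0):
    """Velocity at points x_eval (N, 3) induced by regularized point
    forces f_src at locations x_src, using the widely used 3D blob:
        u(x) = (1 / (8 pi mu)) * sum_k [ f_k (r^2 + 2 eps^2)
               + (f_k . d) d ] / (r^2 + eps^2)^(3/2),  d = x - x_k.
    As eps -> 0 this recovers the singular Stokeslet."""
    u = np.zeros_like(x_eval)
    for xk, fk in zip(x_src, f_src):
        d = x_eval - xk                    # displacement vectors (N, 3)
        r2 = np.sum(d * d, axis=1)         # squared distances
        denom = (r2 + eps**2) ** 1.5
        fdotd = d @ fk                     # f_k . d for each point
        u += (fk * (r2 + 2 * eps**2)[:, None] + fdotd[:, None] * d) / denom[:, None]
    return u / (8 * np.pi * mu)

# Single force f = e_x at the origin, evaluated far away along the x-axis:
u = regularized_stokeslet_velocity(
    np.array([[10.0, 0.0, 0.0]]),
    np.array([[0.0, 0.0, 0.0]]),
    np.array([[1.0, 0.0, 0.0]]),
    eps=1e-8,
)
```

Far from the source and with a tiny eps, this matches the classical singular Stokeslet, u_x = (1/(8πμ))(1/r + 1/r) = 1/(40π) at r = 10; the blob parameter eps only matters within O(eps) of the forces, which is exactly where the first-order error analyzed in the paper arises.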


2011 ◽  
Vol 23 (10) ◽  
pp. 2713-2729 ◽  
Author(s):  
Guohui Song ◽  
Haizhang Zhang

A typical approach to estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error, and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem brings the advantage of automatically discarding the hypothesis error from the sum. Following this direction, we illustrate how reproducing kernel Banach spaces with the ℓ1 norm can be applied to improve the learning rate estimate of ℓ1-regularization in machine learning.


2011 ◽  
Vol 27 (2) ◽  
pp. 025006 ◽  
Author(s):  
Jens Flemming ◽  
Bernd Hofmann ◽  
Peter Mathé

2009 ◽  
Vol 25 (11) ◽  
pp. 115022 ◽  
Author(s):  
Bangti Jin ◽  
Dirk A Lorenz ◽  
Stefan Schiffler
