ℓ1 Regularization in Infinite Dimensional Feature Spaces

Author(s):  
Saharon Rosset ◽  
Grzegorz Swirszcz ◽  
Nathan Srebro ◽  
Ji Zhu


2001 ◽
Vol 13 (3) ◽  
pp. 505-510 ◽  
Author(s):  
Roman Rosipal ◽  
Mark Girolami

The proposal to treat nonlinear principal component analysis as a kernel eigenvalue problem has provided an extremely powerful method of extracting nonlinear features for a number of classification and regression applications. Although the use of Mercer kernels makes computing principal components in possibly infinite-dimensional feature spaces tractable, the attendant numerical problem of diagonalizing large matrices remains. In this contribution, we propose an expectation-maximization approach to kernel principal component analysis and show it to be a computationally efficient method, especially when the number of data points is large.
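To make the iteration concrete, below is a minimal NumPy sketch of an EM-style update for extracting the leading q-dimensional kernel principal subspace from a centered kernel matrix. The RBF kernel, the function names, and all parameters are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def center_kernel(K):
        # Double-center the kernel matrix, as kernel PCA requires.
        n = K.shape[0]
        one = np.ones((n, n)) / n
        return K - one @ K - K @ one + one @ K @ one

    def em_kpca(K, q, n_iter=200, seed=0):
        # Iterate EM-style updates for the leading q-dimensional principal
        # subspace of the centered n x n kernel matrix K. The n x q matrix A
        # holds expansion coefficients of the subspace basis in terms of the
        # mapped training points, so the basis is never formed explicitly.
        n = K.shape[0]
        A = np.random.default_rng(seed).standard_normal((n, q))
        for _ in range(n_iter):
            # E-step: latent coordinates of all points in the current subspace.
            X = np.linalg.solve(A.T @ K @ A, A.T @ K)   # q x n
            # M-step: re-estimate the subspace from the latent coordinates.
            A = np.linalg.solve(X @ X.T, X).T           # n x q
        return A

    # Toy usage: RBF kernel over 50 random points in the plane.
    pts = np.random.default_rng(1).standard_normal((50, 2))
    sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    A = em_kpca(center_kernel(np.exp(-sq / 2.0)), q=3)

Each iteration costs O(n^2 q) rather than the O(n^3) of a full eigendecomposition, which is where the saving for large numbers of data points comes from; a final small q x q eigenproblem within the extracted subspace recovers orthonormal components.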


2005 ◽  
Vol 17 (9) ◽  
pp. 2077-2098 ◽  
Author(s):  
Tong Zhang

Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted, and it has been used to argue that kernel learning can magically avoid the "curse-of-dimensionality" phenomenon encountered in statistical estimation problems. This letter shows that, although a kernel representation can embed data into an infinite-dimensional feature space, the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergence rates are optimal under various circumstances.
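For orientation, one common algebraic form of such a scale-sensitive effective dimension sums the kernel eigenvalues relative to a regularization scale: d(lam) = sum_i mu_i / (mu_i + lam). The NumPy sketch below illustrates this quantity on an assumed RBF kernel; it is an illustration of the general idea, not a restatement of the letter's exact definition.

    import numpy as np

    def effective_dimension(K, lam):
        # Scale-sensitive effective dimension of a kernel matrix K at scale lam:
        # d(lam) = sum_i mu_i / (mu_i + lam), with mu_i the eigenvalues of K.
        # It interpolates between rank(K) as lam -> 0 and 0 as lam -> infinity.
        mu = np.clip(np.linalg.eigvalsh(K), 0.0, None)  # guard round-off negatives
        return float(np.sum(mu / (mu + lam)))

    # An RBF kernel embeds data into an infinite-dimensional feature space,
    # yet at any fixed scale lam its effective dimension remains small.
    x = np.linspace(-1.0, 1.0, 200)[:, None]
    K = np.exp(-((x - x.T) ** 2) / 0.1)
    for lam in (1e-1, 1e-3, 1e-5):
        print(lam, effective_dimension(K, lam))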


2015 ◽  
Vol 15 (3) ◽  
pp. 279-289 ◽  
Author(s):  
Jens Flemming ◽  
Bernd Hofmann ◽  
Ivan Veselić

Based on the powerful tool of variational inequalities, recent papers have established convergence rate results for ℓ1-regularization of ill-posed inverse problems in infinite-dimensional spaces under the condition that the sparsity assumption slightly fails but the solution is still in ℓ1. In the present paper, we improve those convergence rate results and apply them to the Cesàro operator equation in ℓ2 and to specific denoising problems. Moreover, we formulate in this context relationships between Nashed's types of ill-posedness and mapping properties such as compactness and strict singularity.
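For orientation, the ℓ1-regularization in question is Tikhonov-type regularization with an ℓ1 penalty, and the Cesàro operator is the averaging operator on ℓ2; the display below is a standard formulation under these assumptions, not a restatement of the paper's theorems:

    x_\alpha^\delta \in \operatorname*{arg\,min}_{x \in \ell^1}
        \left( \|Ax - y^\delta\|^2 + \alpha \|x\|_{\ell^1} \right),
    \qquad
    (Cx)_n = \frac{1}{n} \sum_{k=1}^{n} x_k \quad (x \in \ell^2).

Here "the sparsity assumption slightly fails" means the exact solution has infinitely many nonzero components yet still satisfies \sum_k |x_k^\dagger| < \infty, i.e. x^\dagger \in \ell^1.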


Author(s):  
Charalambos D. Aliprantis ◽  
Kim C. Border

2012 ◽  
Vol 57 (3) ◽  
pp. 829-835 ◽  
Author(s):  
Z. Głowacz ◽  
J. Kozik

The paper describes a procedure for the automatic selection of symptoms accompanying a break in the coils of a synchronous motor armature winding. This procedure, called feature selection, chooses from the full set of features describing the problem a subset that best distinguishes between healthy and damaged states. The amplitudes of the spectral components of the motor current signals were used as features. The full spectra of the current signals are treated as multidimensional feature spaces, and their subspaces are tested. Candidate subspaces are chosen with the aid of a genetic algorithm, and their quality is assessed with the Mahalanobis distance measure: the algorithm searches for the subspace for which this distance is greatest. The algorithm is efficient and, as the research confirmed, leads to good results; a sketch of the selection loop follows below. The proposed technique has been successfully applied in many other fields of science and technology, including medical diagnostics.
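Below is a minimal, hypothetical NumPy illustration of genetic-algorithm feature-subset selection in which the fitness of a binary mask over spectral amplitudes is the Mahalanobis distance between the healthy and damaged classes; all names, operators, and parameters are illustrative assumptions, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def mahalanobis_fitness(mask, healthy, damaged):
        # Mahalanobis distance between the class means over the selected
        # features, using the pooled covariance of both classes.
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            return 0.0
        h, d = healthy[:, idx], damaged[:, idx]
        diff = h.mean(axis=0) - d.mean(axis=0)
        cov = np.cov(np.vstack([h, d]).T) + 1e-6 * np.eye(idx.size)
        return float(np.sqrt(diff @ np.linalg.solve(np.atleast_2d(cov), diff)))

    def ga_select(healthy, damaged, pop=30, gens=50, p_mut=0.02):
        # Evolve binary feature masks, keeping the half with the largest
        # Mahalanobis distance and refilling by crossover and mutation.
        n_feat = healthy.shape[1]
        population = rng.random((pop, n_feat)) < 0.1   # sparse initial masks
        for _ in range(gens):
            fit = np.array([mahalanobis_fitness(m, healthy, damaged)
                            for m in population])
            parents = population[np.argsort(fit)[::-1][:pop // 2]]
            children = []
            for _ in range(pop - len(parents)):        # one-point crossover
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_feat)
                children.append(np.concatenate([a[:cut], b[cut:]]))
            population = np.vstack([parents] + children)
            population ^= rng.random(population.shape) < p_mut   # bit-flip mutation
        fit = np.array([mahalanobis_fitness(m, healthy, damaged)
                        for m in population])
        return population[int(np.argmax(fit))]

    # Toy spectra: a fault shifts the amplitude of two spectral components.
    healthy = rng.standard_normal((40, 64))
    damaged = rng.standard_normal((40, 64))
    damaged[:, [5, 17]] += 2.0
    best_mask = ga_select(healthy, damaged)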

