Generalized linear models for geometrical current predictors: An application to predict garment fit

2019 ◽  
Vol 20 (6) ◽  
pp. 562-591
Author(s):  
Sonia Barahona ◽  
Pablo Centella ◽  
Ximo Gual-Arnau ◽  
M. Victoria Ibáñez ◽  
Amelia Simó

The aim of this article is to model an ordinal response variable in terms of vector-valued functional data embedded in a vector-valued reproducing kernel Hilbert space (RKHS). In particular, we focus on the vector-valued RKHS obtained when a geometrical object (body) is characterized by a current, and on the ordinal regression model. A common way to solve this problem in functional data analysis is to express the data in the orthonormal basis given by decomposition of the covariance operator. But our data differ from the usual functional data setting in two important respects: on the one hand, they are vector-valued functions, and on the other, they are functions in an RKHS with a previously defined norm. We propose to use three different bases: the orthonormal basis given by the kernel that defines the RKHS, a basis obtained from decomposition of the integral operator defined using the covariance function, and a third basis that combines the previous two. The three approaches are compared and applied to an interesting problem: building a model to predict the fit of children's garment sizes, based on a 3D database of the Spanish child population. Our proposal has been compared with alternative methods that explore the performance of other classifiers (Support Vector Machine and k-NN), and with the result of applying the classification method proposed in this work to different characterizations of the objects (landmarks and multivariate anthropometric measurements instead of currents), all of which obtain worse results.
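
As a rough illustration of the second basis above, the following sketch extracts the eigenbasis of the empirical covariance operator from a Gram matrix of RKHS inner products between currents. The function name, and the assumption that the bodies have already been reduced to such a Gram matrix K, are ours, not the authors':

```python
import numpy as np

def covariance_basis_scores(K, n_components=10):
    """Project samples onto the eigenbasis of the empirical covariance
    operator, computed from a Gram matrix K of RKHS inner products.

    K : (n, n) array, K[i, j] = <x_i, x_j> in the currents RKHS.
    Returns per-sample coefficients on the leading eigenfunctions.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    Kc = H @ K @ H                            # centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # normalize so the eigenfunctions are orthonormal in the RKHS
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))
```

These scores would then feed an ordinal regression model (or any classifier) in place of the raw functional data.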

2005 ◽  
Vol 17 (1) ◽  
pp. 177-204 ◽  
Author(s):  
Charles A. Micchelli ◽  
Massimiliano Pontil

In this letter, we provide a study of learning in a Hilbert space of vector-valued functions. We motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications. Specifically, we allow an output space Y to be a Hilbert space, and we consider a reproducing kernel Hilbert space of functions whose values lie in Y. In this setting, we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory. We consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines, for both regression and classification. Finally, we provide classes of operator-valued kernels of the dot product and translation-invariant type.
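
A minimal sketch of the minimal norm interpolant in the common separable special case K(x, x') = k(x, x')B, with a scalar Gaussian kernel k and a positive semidefinite output-coupling matrix B; these are our illustrative choices, while the paper treats general operator-valued kernels:

```python
import numpy as np

def min_norm_interpolant(X, Y, B, gamma=1.0):
    """Minimal-norm interpolant in a vector-valued RKHS with the
    separable operator-valued kernel K(x, x') = k(x, x') * B.

    X : (n, d) inputs, Y : (n, m) outputs, B : (m, m) PSD matrix.
    Assumes the scalar Gram matrix and B are nonsingular, so the
    interpolant exists and is unique. Returns a function x -> f(x).
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-gamma * d2)                   # scalar Gram matrix
    # interpolation conditions: (G kron B) vec(C) = vec(Y)
    C = np.linalg.solve(np.kron(G, B), Y.reshape(-1)).reshape(n, -1)
    def f(x):
        kx = np.exp(-gamma * ((X - x) ** 2).sum(-1))  # k(x, x_j)
        return B @ (C.T @ kx)                 # sum_j k(x, x_j) B c_j
    return f
```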


2009 ◽  
Vol 2009 ◽  
pp. 1-9 ◽  
Author(s):  
Manuel Martín-Merino ◽  
Ángela Blanco ◽  
Javier De Las Rivas

DNA microarrays provide rich profiles that are used in cancer prediction by considering the gene expression levels across a collection of related samples. Support Vector Machines (SVMs) have been applied to the classification of cancer samples with encouraging results. However, they rely on Euclidean distances that fail to reflect accurately the proximities among the sample profiles. Non-Euclidean dissimilarities therefore provide additional information that should be considered to reduce the misclassification errors. In this paper, we incorporate into the ν-SVM algorithm a linear combination of non-Euclidean dissimilarities. The weights of the combination are learnt in a Hyper Reproducing Kernel Hilbert Space (HRKHS) using a semidefinite programming algorithm. This approach allows us to incorporate a smoothing term that penalizes the complexity of the family of distances and avoids overfitting. The experimental results suggest that the proposed method helps to reduce the misclassification errors in several human cancer problems.
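
A simplified sketch of the pipeline: each non-Euclidean dissimilarity matrix is double-centered into a positive semidefinite kernel, and the kernels are combined linearly before being passed to a ν-SVM. Fixed cross-validated weights stand in here for the SDP/HRKHS weight learning, which is the paper's actual contribution:

```python
import numpy as np
from sklearn.svm import NuSVC

def dissimilarity_to_kernel(D):
    """Double-center a dissimilarity matrix into a kernel (classical
    MDS construction); clip negative eigenvalues to keep it PSD."""
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = -0.5 * H @ (D ** 2) @ H
    vals, vecs = np.linalg.eigh(K)
    return (vecs * np.clip(vals, 0, None)) @ vecs.T

def combined_kernel(D_list, weights):
    """Weighted combination of kernels derived from several
    non-Euclidean dissimilarity matrices (e.g. correlation-based,
    Manhattan) computed on the expression profiles."""
    return sum(w * dissimilarity_to_kernel(D)
               for w, D in zip(weights, D_list))

# K = combined_kernel(D_list, weights)
# clf = NuSVC(nu=0.3, kernel="precomputed").fit(K, y)
```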


2017 ◽  
Vol 17 (15&16) ◽  
pp. 1292-1306 ◽  
Author(s):  
Rupak Chatterjee ◽  
Ting Yu

The support vector machine (SVM) is a popular machine learning classification method which produces a nonlinear decision boundary in a feature space by constructing linear boundaries in a transformed Hilbert space. It is well known that these algorithms, when executed on a classical computer, do not scale well with the size of the feature space, in terms of both the number of data points and the dimensionality. One of the most significant limitations of classical algorithms using nonlinear kernels is that the kernel function has to be evaluated for all pairs of input feature vectors, which may themselves be of substantially high dimension. This can lead to prohibitive computation times during training and when predicting for a new data point. Here, we propose using both canonical and generalized coherent states to calculate specific nonlinear kernel functions. The key link is the reproducing kernel Hilbert space (RKHS) property for SVMs that arises naturally from canonical and generalized coherent states. Specifically, we discuss the evaluation of radial kernels through a positive operator-valued measure (POVM) on a quantum optical system based on canonical coherent states. A similar procedure may also lead to calculations of kernels not usually used in classical algorithms, such as those arising from generalized coherent states.
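
The RKHS link can be checked classically: for canonical coherent states, |⟨α|β⟩|² = exp(−|α − β|²), so encoding feature vectors as coherent-state amplitudes reproduces the Gaussian radial kernel. Below is a classical simulation of that overlap kernel; the paper's point is that a POVM on the optical system would estimate these overlaps physically:

```python
import numpy as np
from sklearn.svm import SVC

def coherent_overlap_kernel(A, B):
    """Kernel k(a, b) = |<alpha_a | alpha_b>|^2 = exp(-|a - b|^2) for
    canonical coherent states whose amplitudes encode the feature
    vectors a, b (one optical mode per coordinate), simulated here.

    A : (n, d) amplitudes, B : (m, d) amplitudes; returns (n, m).
    """
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-np.sum(np.abs(diff) ** 2, axis=-1))

# Encoding real features x directly as amplitudes alpha = x recovers
# the usual RBF kernel, so the kernel can be plugged into a standard SVM:
# clf = SVC(kernel=coherent_overlap_kernel).fit(X_train, y_train)
```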


Author(s):  
YONG-LI XU ◽  
DI-RONG CHEN

The study of regularized learning algorithms is an important issue, and functional data analysis extends classical methods. We establish learning rates for the least-squares regularized regression algorithm in a reproducing kernel Hilbert space for functional data. Using an iteration method, we obtain a fast learning rate for functional data. Our result is a natural extension of the corresponding result for the least-squares regularized regression algorithm when the dimension of the input data is finite.
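
A concrete finite-sample instance of the algorithm whose rates are analyzed, assuming curves sampled on a uniform grid and a Gaussian kernel built on the approximate L2 distance (our illustrative choices; the paper's rates hold for general Mercer kernels on the function space):

```python
import numpy as np

def functional_kernel_ridge(curves, y, dt, lam=1e-3, gamma=1.0):
    """Least-squares regularized regression for functional inputs.

    curves : (n, T) array, each row a function sampled on a uniform
    grid with spacing dt; the L2 distance between curves is
    approximated by a Riemann sum.
    """
    diff = curves[:, None, :] - curves[None, :, :]
    d2 = (diff ** 2).sum(-1) * dt               # ~ ||x_i - x_j||_{L2}^2
    K = np.exp(-gamma * d2)
    n = len(y)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return K @ alpha, alpha                     # fitted values, coefficients
```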


Author(s):  
Y. Mo ◽  
T. Qian ◽  
W. Mi

This paper discusses generalization bounds for learning from complex-valued data, which serve as a theoretical foundation for the complex support vector machine (SVM). Building on these generalization bounds, a complex SVM approach based on the Szegő kernel of the Hardy space H²(𝔻) is formulated. It is applied to the frequency-domain identification problem for discrete linear time-invariant systems (LTIS). Experiments show that the proposed algorithm is effective in applications.
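
A kernel-ridge stand-in for the complex SVM step, using the Szegő kernel k(z, w) = 1/(1 − z w̄) and assuming the frequency-response samples are taken at points strictly inside the unit disk where the transfer function is analytic (an assumption of this sketch, not a claim about the paper's setup):

```python
import numpy as np

def szego_fit(z, h, lam=1e-6):
    """Regularized least-squares fit in the Hardy space H^2(D) with
    the Szego kernel k(z, w) = 1 / (1 - z * conj(w)).

    z : (n,) complex points inside the unit disk;
    h : (n,) measured frequency-response values of the LTI system.
    Returns a callable evaluating the identified model at new points.
    """
    G = 1.0 / (1.0 - z[:, None] * np.conj(z[None, :]))  # Szego Gram matrix
    c = np.linalg.solve(G + lam * np.eye(len(z)), h)
    def f(znew):
        return (1.0 / (1.0 - znew[:, None] * np.conj(z[None, :]))) @ c
    return f
```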


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4443
Author(s):  
Cristian Kaori Valencia-Marin ◽  
Juan Diego Pulgarin-Giraldo ◽  
Luisa Fernanda Velasquez-Martinez ◽  
Andres Marino Alvarez-Meza ◽  
German Castellanos-Dominguez

Motion capture (Mocap) data are widely used as time series to study human movement. Indeed, animation movies, video games, and biomechanical systems for rehabilitation are significant applications of Mocap data. However, classifying multi-channel time series from Mocap requires coding the intrinsic dependencies (even nonlinear relationships) between human body joints. Furthermore, the same human action may vary because individuals alter their movement, which increases the inter- and intraclass variability. Here, we introduce an enhanced Hilbert embedding-based approach built from a cross-covariance operator, termed EHECCO, to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection. The obtained results demonstrate how EHECCO represents and discriminates joint probability distributions as kernel-based evaluations of the input time series within a tensor reproducing kernel Hilbert space (RKHS). Our approach achieves competitive classification results for style/subject and action recognition tasks on well-known publicly available databases. Moreover, EHECCO favors the interpretation of relevant anthropometric variables correlated with players' expertise and acted movement on a Tennis-Mocap database (also made publicly available with this work). Thereby, our EHECCO-based framework provides a unified representation (through the tensor RKHS) of the Mocap time series to compute linear correlations between a coded metric from joint distributions and player properties, i.e., age, body measurements, and sport movement (action class).
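
EHECCO itself builds a tensor RKHS from cross-covariance operators; as a much simpler stand-in, one can already compare Mocap sequences through the distance between their kernel mean embeddings (squared MMD) and classify with a nearest-neighbour rule:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian kernel matrix between frame sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    """Squared MMD between the kernel mean embeddings of two Mocap
    sequences (frames x joint features). A simplified embedding-based
    distance, not the paper's full tensor cross-covariance construction."""
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() \
           - 2.0 * rbf(X, Y, gamma).mean()

# Sequences can then be classified with, e.g., 1-NN on this distance.
```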


Author(s):  
Irina Holmes ◽  
Ambar N. Sengupta

There has been growing recent interest in probabilistic interpretations of kernel-based methods as well as learning in Banach spaces. The absence of a useful Lebesgue measure on an infinite-dimensional reproducing kernel Hilbert space is a serious obstacle for such stochastic models. We propose an estimation model for the ridge regression problem within the framework of abstract Wiener spaces and show how the support vector machine solution to such problems can be interpreted in terms of the Gaussian Radon transform.
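
The finite-dimensional shadow of this probabilistic picture is familiar: the kernel ridge (SVM-regression-style) solution coincides with the mean of a Gaussian conditional. A sketch of that identity, purely illustrative of the infinite-dimensional construction in the paper:

```python
import numpy as np

def ridge_as_gp_mean(X, y, Xnew, lam=1e-2, gamma=1.0):
    """Kernel ridge solution written as the mean of a Gaussian
    conditional: the posterior mean of a GP with noise variance lam
    equals the ridge-regularized RKHS estimator at Xnew."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K = k(X, X)
    return k(Xnew, X) @ np.linalg.solve(K + lam * np.eye(len(y)), y)
```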


Author(s):  
Heng Chen ◽  
Wei Huang ◽  
Di-Rong Chen

Sliced inverse regression (SIR) is a powerful method for dimension reduction models. As is well known, SIR is equivalent to a transformation-based projection pursuit, where the optimal directions are exactly the SIR directions. In this paper, we consider the simultaneous estimation of optimal directions for functional data and of optimal transformations. We take a reproducing kernel Hilbert space approach: both the directions and the transformations are chosen from reproducing kernel Hilbert spaces. A learning rate is established for the estimators.
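
For orientation, here is a sketch of classical multivariate SIR, the finite-dimensional counterpart of the functional RKHS estimator studied here: slice the response, average the standardized predictors per slice, and eigendecompose the covariance of the slice means:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Classical sliced inverse regression (finite-dimensional).

    Returns the leading e.d.r. directions in the standardized scale.
    """
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L.T)   # standardize: cov(Z) = I
    order = np.argsort(y)
    M = np.zeros((p, p))
    for s in np.array_split(order, n_slices):  # slice on sorted response
        m = Z[s].mean(0)
        M += len(s) / n * np.outer(m, m)       # weighted slice means
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, ::-1][:, :n_dirs]           # leading directions
```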


Author(s):  
Seiichi Ikeda ◽  
Yoshiharu Sato

We show that support vector regression and classification models are essentially linear models in a reproducing kernel Hilbert space (RKHS). To overcome the overfitting problem, a regularization term is added to the optimization, but deciding the coefficient of the regularization term involves difficulties. We introduce the variable selection concept to the linear model in the RKHS, where each kernel function is treated as a transformed variable whose value is given by the observations. We show that kernel canonical discriminant functions for multiclass problems can be formulated under variable selection, which enables us to reduce the number of kernel functions in the discriminant function; that is, the discriminant function is obtained as a linear combination of a sufficiently small number of kernel functions, so we can expect reasonable predictions. We compare the variable selection performance of the canonical discriminant functions with that of support vector machines.
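
A Lasso-style stand-in for the variable selection idea: treat each kernel function k(·, x_j) as a candidate variable and let an L1 penalty keep only a small subset, so the discriminant ends up using few kernel functions. The L1/logistic choice is ours for illustration, not the authors' procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

def sparse_kernel_discriminant(X, y, C=0.5, gamma=0.1):
    """Select a small subset of kernel functions with an L1 penalty.

    Column j of the Gram matrix is the variable k(., x_j) evaluated
    at the observations; L1-penalized logistic regression zeroes out
    most columns, keeping few kernel functions in the discriminant.
    """
    K = rbf_kernel(X, X, gamma=gamma)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(K, y)
    kept = np.flatnonzero(np.abs(clf.coef_).max(0) > 1e-8)
    return clf, kept   # discriminant uses only len(kept) kernel functions
```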

