Eigenvalue Problem for Discrete Jacobi–Sobolev Orthogonal Polynomials

Mathematics ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 182
Author(s):  
Juan F. Mañas-Mañas ◽  
Juan J. Moreno-Balcázar ◽  
Richard Wellman

In this paper, we consider a discrete Sobolev inner product involving the Jacobi weight with a twofold objective. On the one hand, since the orthonormal polynomials with respect to this inner product are eigenfunctions of a certain differential operator, we are interested in the corresponding eigenvalues, more precisely, in their asymptotic behavior. Thus, we can determine a limit value which links this asymptotic behavior and the uniform norm of the orthonormal polynomials on a logarithmic scale. This value appears in the theory of reproducing kernel Hilbert spaces. On the other hand, we tackle a more general case than those previously considered in the literature.
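
For orientation, a discrete Sobolev inner product built on the Jacobi weight typically has the following shape (a generic sketch; the evaluation point, derivative orders, and masses treated in the paper may differ):

(f, g)_S = \int_{-1}^{1} f(x)\, g(x)\, (1-x)^{\alpha} (1+x)^{\beta} \, dx + \sum_{i=0}^{r} M_i\, f^{(i)}(c)\, g^{(i)}(c), \qquad \alpha, \beta > -1, \ M_i \ge 0,

where the first term is the classical Jacobi inner product and the discrete part evaluates derivatives at a fixed point c.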

1995 ◽  
Vol 7 (6) ◽  
pp. 1225-1244 ◽  
Author(s):  
Valentina Corradi ◽  
Halbert White

In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. We show that in the case of output data observed with noise, regularized networks are capable of learning and approximating (on compacta) elements of certain classes of Sobolev spaces, known as reproducing kernel Hilbert spaces (RKHS), at a nonparametric rate that optimally exploits the smoothness properties of the unknown mapping. In particular we show that the total squared error, given by the sum of the squared bias and the variance, will approach zero at a rate of n^{-2m/(2m+1)}, where m denotes the order of differentiability of the true unknown function. On the other hand, if the unknown mapping is a continuous function but does not belong to an RKHS, then there still exists a unique regularized solution, but this is no longer guaranteed to converge in mean square to a well-defined limit. Further, even if such a solution converges, the total squared error is bounded away from zero for all n sufficiently large.
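
A minimal numerical sketch of a one-dimensional regularization network of this kind, assuming (purely for illustration) a Gaussian kernel and synthetic data; the paper's RKHS is a Sobolev space, so the kernel, the noise model, and the tuning of the regularization parameter differ:

```python
import numpy as np

# Sketch of a one-dimensional regularization network (Poggio-Girosi style):
# the regularized solution lives in the RKHS of the chosen kernel and has the
# form f(x) = sum_i c_i K(x, x_i), with coefficients solving (K + n*lam*I) c = y.
# The Gaussian kernel below is an illustrative choice, not the one used in the paper.

def fit_regularization_network(x_train, y_train, lam, bandwidth=0.2):
    """Return a callable f approximating the regression function."""
    n = len(x_train)
    K = np.exp(-(x_train[:, None] - x_train[None, :])**2 / (2 * bandwidth**2))
    c = np.linalg.solve(K + n * lam * np.eye(n), y_train)

    def f(x):
        k = np.exp(-(np.atleast_1d(x)[:, None] - x_train[None, :])**2 / (2 * bandwidth**2))
        return k @ c

    return f

# Noisy observations of a smooth target; with lam shrinking suitably in n,
# the total squared error (bias^2 + variance) decays at a nonparametric rate
# of order n^{-2m/(2m+1)} for an m-times differentiable target (per the paper).
rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
f_hat = fit_regularization_network(x, y, lam=1e-3)
grid = np.linspace(0, 1, 5)
print(np.round(f_hat(grid), 3), np.round(np.sin(2 * np.pi * grid), 3))
```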


2016 ◽  
Vol 15 (01) ◽  
pp. 123-135 ◽  
Author(s):  
Palle Jorgensen ◽  
Feng Tian

A frame is a system of vectors in a Hilbert space with properties which allow one to write algorithms for the two operations, analysis and synthesis, relative to the system, for all vectors in the space, expressed in norm-convergent series. Traditionally, frame properties are expressed in terms of a Gramian (an infinite matrix whose entries are the inner products of pairs of vectors in the system), but still with strong restrictions on the given system of vectors in order to guarantee frame bounds. In this paper, we remove these restrictions on the system, and we obtain instead direct-integral analysis/synthesis formulas. Applications are given to reproducing kernel Hilbert spaces and to random fields.
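
For context, the frame bounds referred to above are the constants in the standard two-sided inequality (the textbook definition, not a result specific to this paper):

A \|f\|^2 \le \sum_{n} |\langle f, v_n \rangle|^2 \le B \|f\|^2 \quad \text{for all } f \in \mathcal{H}, \qquad 0 < A \le B < \infty,

where \{v_n\} denotes the frame system; analysis sends f to its coefficient sequence (\langle f, v_n \rangle), and synthesis reconstructs f from such a sequence.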


2021 ◽  
Vol 14 (2) ◽  
pp. 201-214
Author(s):  
Danilo Croce ◽  
Giuseppe Castellucci ◽  
Roberto Basili

In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP); this is mainly due to their ability to reach high performance by relying on very simple input representations, i.e., raw tokens. One of the drawbacks of deep architectures is the large amount of annotated data required for effective training. In Machine Learning this problem is usually mitigated by semi-supervised methods or, more recently, by Transfer Learning in the context of deep architectures. One recent promising method to enable semi-supervised learning in deep architectures has been formalized within Semi-Supervised Generative Adversarial Networks (SS-GANs) in the context of Computer Vision. In this paper, we adopt the SS-GAN framework to enable semi-supervised learning in the context of NLP. We demonstrate how an SS-GAN can boost the performance of simple architectures when operating on expressive low-dimensional embeddings; these are derived by combining the unsupervised approximation of linguistic Reproducing Kernel Hilbert Spaces and the so-called Universal Sentence Encoders. We experimentally evaluate the proposed approach on a semantic classification task, i.e., Question Classification, considering different sizes of training material and different numbers of target classes. By applying such an adversarial schema to a simple Multi-Layer Perceptron, a classifier trained on a subset derived from 1% of the original training material achieves 92% accuracy. Moreover, when considering a complex classification schema, e.g., involving 50 classes, the proposed method outperforms state-of-the-art alternatives such as BERT.
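
A minimal PyTorch sketch of the SS-GAN idea applied to fixed sentence embeddings, assuming pre-computed embedding vectors as input; layer sizes, noise dimension, and loss weighting are illustrative and not the configuration used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, NOISE_DIM, NUM_CLASSES = 512, 100, 6  # illustrative sizes

class Generator(nn.Module):
    """Maps random noise to a vector living in the sentence-embedding space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, EMB_DIM))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """MLP classifier with K real classes plus one extra 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, NUM_CLASSES + 1))
    def forward(self, x):
        return self.net(x)  # logits over K+1 classes

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
FAKE = NUM_CLASSES  # index of the extra 'fake' class

def d_step(emb_labeled, labels, emb_unlabeled):
    """One discriminator update: supervised CE on labeled data, plus
    classifying generated vectors as 'fake' and unlabeled real ones as not-fake."""
    opt_d.zero_grad()
    sup = F.cross_entropy(D(emb_labeled), labels)
    z = torch.randn(emb_unlabeled.size(0), NOISE_DIM)
    fake = F.cross_entropy(D(G(z).detach()),
                           torch.full((emb_unlabeled.size(0),), FAKE, dtype=torch.long))
    p_fake_real = F.softmax(D(emb_unlabeled), dim=1)[:, FAKE]
    real = -torch.log1p(-p_fake_real + 1e-8).mean()  # -log(1 - p_fake) on real data
    loss = sup + fake + real
    loss.backward()
    opt_d.step()
    return loss.item()

def g_step(batch_size):
    """One generator update: push D away from labeling generated vectors as 'fake'."""
    opt_g.zero_grad()
    z = torch.randn(batch_size, NOISE_DIM)
    p_fake = F.softmax(D(G(z)), dim=1)[:, FAKE]
    loss = -torch.log1p(-p_fake + 1e-8).mean()
    loss.backward()
    opt_g.step()
    return loss.item()

# Illustrative call with random tensors standing in for sentence embeddings:
emb_l = torch.randn(8, EMB_DIM)
y_l = torch.randint(0, NUM_CLASSES, (8,))
emb_u = torch.randn(32, EMB_DIM)
print(d_step(emb_l, y_l, emb_u), g_step(32))
```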


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Raffaela Capitanelli ◽  
Maria Agostina Vivaldi

In this paper, we study the asymptotic behavior of solutions to obstacle problems for the p-Laplacian as p → ∞. For the one-dimensional case and for the radial case, we give an explicit expression of the limit. In the n-dimensional case, we provide sufficient conditions to ensure the uniform convergence of the whole family of solutions of the obstacle problems, either for data f that change sign in Ω or for data f (that do not change sign in Ω) possibly vanishing on a set of positive measure.
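
For reference, the operator and a generic variational form of the obstacle problem are, in standard notation (the precise boundary data and assumptions on the obstacle are those of the paper):

\Delta_p u := \operatorname{div}\bigl(|\nabla u|^{p-2} \nabla u\bigr), \qquad \min \Bigl\{ \int_\Omega \tfrac{1}{p} |\nabla u|^p \, dx - \int_\Omega f\, u \, dx : u \in W^{1,p}_0(\Omega), \ u \ge \psi \ \text{a.e. in } \Omega \Bigr\},

where \psi is the obstacle and f the given datum.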


2013 ◽  
Vol 11 (05) ◽  
pp. 1350020 ◽  
Author(s):  
HONGWEI SUN ◽  
QIANG WU

We study the asymptotic properties of indefinite kernel networks with coefficient regularization and dependent sampling. The framework under investigation differs from classical kernel learning: the kernel function is not required to be positive definite, and the samples are allowed to be weakly dependent, with the dependence measured by a strong mixing condition. By a new kernel decomposition technique introduced in [27], two reproducing kernel Hilbert spaces and their associated kernel integral operators are used to characterize the properties and learnability of the hypothesis function class. Capacity-independent error bounds and learning rates are deduced.
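
A minimal numpy sketch of a coefficient-regularized kernel network, assuming a squared-error objective and an ℓ2 penalty on the expansion coefficients rather than on the RKHS norm, which is what makes an indefinite (not positive definite) kernel admissible; the kernel choice, normalization, and i.i.d. sampling below are illustrative:

```python
import numpy as np

def fit_coefficient_regularized(x_train, y_train, kernel, lam):
    """Fit f(x) = sum_i alpha_i * kernel(x, x_i) by penalizing the coefficients
    directly: minimize (1/n) * ||K @ alpha - y||^2 + lam * ||alpha||^2.
    Unlike RKHS-norm regularization, this stays well posed even if K is indefinite."""
    K = kernel(x_train[:, None], x_train[None, :])          # n x n Gram matrix
    n = len(x_train)
    alpha = np.linalg.solve(K.T @ K + lam * n * np.eye(n), K.T @ y_train)
    return lambda x: kernel(np.atleast_1d(x)[:, None], x_train[None, :]) @ alpha

# An indefinite (sigmoid-type) kernel, used only for illustration.
sigmoid_kernel = lambda s, t: np.tanh(1.0 + s * t)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-1, 1, 100))
y = np.cos(3 * x) + 0.1 * rng.standard_normal(100)
# Note: the weakly dependent (strong-mixing) sampling analyzed in the paper
# is not simulated here; the data above are i.i.d. for simplicity.
f_hat = fit_coefficient_regularized(x, y, sigmoid_kernel, lam=1e-2)
print(np.round(f_hat(np.array([-0.5, 0.0, 0.5])), 3))
```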


2014 ◽  
Vol 9 (4) ◽  
pp. 827-931 ◽  
Author(s):  
Joseph A. Ball ◽  
Dmitry S. Kaliuzhnyi-Verbovetskyi ◽  
Cora Sadosky ◽  
Victor Vinnikov
